Socio-Cultural Determinants of Political-Economic Affairs (Collected Essays)

by

W B (Ben) Vosloo

July 2015

Copyright © 2015 The author

All rights reserved

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without prior permission in writing of the author.

The author, who is still gripped by the limitations of writing in longhand, wishes to express his deep appreciation to his wife, Madalein Irene, for taking good care of the typing, collating and editing of all the essays in this collection.

Wollongong, July 2015

About the Author

Ben Vosloo was born in the Empangeni district, Natal, on 4 November 1934. After completing his schooling in Vryheid, he went to the University of Pretoria, where he majored in political science and economics, taking the BA and MA degrees with distinction. After serving as a teaching and research assistant, he obtained a Ph.D. degree in 1965 at Cornell University, Ithaca, New York.

On his return to South Africa, Dr Vosloo began his long association with the reform process in the fields of constitutional change, educational reform and economic development. He served as Professor of Political Science and Public Administration at the University of Stellenbosch for 15 years. He was, inter alia, a member of two direction-setting commissions: the Erika Theron Commission concerning constitutional reform and the De Lange Commission on educational reform. He published widely in academic and professional publications in the fields of management science, political science and development issues. He was a founding member and office-bearer of a number of academic and professional associations, such as the S A Political Science Association, the S A Institute for Public Administration and the S A Institute of International Affairs. During his academic career, Prof. Vosloo received several meritorious scholarships and academic awards.

Ben Vosloo started his “second” career in 1981 when he was appointed as the founding Managing Director of the newly formed Small Business Development Corporation. He steered the SBDC to a successful track record and a unique position of prominence as a private-sector-led development institution (1981 to 1995). In recognition of his work, Dr Vosloo was named Marketing Man of the Year (1986) and Man of the Year by the Institute of Management Consultants of Southern Africa (1989), received the Emeritus Citation for Business Leaders from the Argus Newspaper Group (1990) and the Personnel Man of the Year award from the Institute of Personnel Managers (1990), and was named one of the Business Times Top Five Businessmen (1993) and, by “Beeld”, one of South Africa’s Top 21 Business Leaders of the past 21 years (1995). He acted as co-author and editor of a trend-setting publication, Entrepreneurship and Economic Growth (HSRC Publishers, Pretoria, 1994), and was awarded an Honorary Doctorate by the University of Pretoria in December 1995.

In 1996 Ben Vosloo started his “third” career. He initially served as a business consultant on strategic policy matters and later became involved in export marketing in the USA, Canada, Europe and Asia. He obtained permanent resident status in Australia in the category “Distinguished Talents” and eventually became an Australian citizen in 2002. He is now retired and resides in North Wollongong, NSW.


INDEX

Introduction

1 Key Determinants of Human Affairs

2 The Role of Religion

3 Footprints of the Abrahamic Religions

4 Migration Patterns in the Colonial Era (1400-1940)

5 Sub-national Group Formation in International Perspective

6 The Rise and Decline of the Anglo-Saxon Model

7 The British Economy in the Post-Colonial Era

8 The Distinctive Nature of the German Model

9 The Colonial Dutch and British Influence in South Africa

10 East Asian Modernisation and Confucian Capitalism

11 China’s Spectacular Rise as an Emerging Giant

12 The Impressive Growth Potential of East Asia


Introduction

All intelligent persons are keen to understand the world around them. Why do things turn out the way they do? What are the key determinants in human affairs? How do they spill over into political affairs? Social scientists have developed the nature-nurture theory of human behaviour to explain the great variation in levels of human achievement. The “nature” aspect refers to the talents, abilities, instincts and other inherent qualities belonging to a person by birth. Under “nurture” is normally understood the non-genetic external influences that educate, train, nourish and support individuals after their birth. Accomplishments in many spheres of life are determined by the interaction of innate potential with the opportunities coming one’s way. A person born with “perfect pitch” coupled with unfailing eye-hand co-ordination and dexterity will not become a famous pianist without access to a piano to practise on and a good teacher to introduce him or her to the performing arts. Similarly, a country’s economic fortunes are determined by the interaction of human action and natural endowments. The peoples of some countries do not know how to exploit or utilise their natural resources; others are astute users of their opportunities; still others do remarkably well without natural endowments. Successful education outcomes likewise require willing and able learners as well as good teachers and facilities.

Based on systematically collected empirical evidence, the academic disciplines of sociology, psychology and anthropology have yielded much insight into human behaviour – as individuals, as members of society and as members of specific group formations. Together they attempt to explain how and why people act the way they do in association with others. Humans are, as Aristotle noted, “social animals” – they become human in their association with others. And as the astute American political scientist Harold Lasswell observed, men act politically as they are socially and culturally. He could have added “psychologically”, “religiously” and “economically”.

Most of today’s nation-states have come into being since the middle of the seventeenth century. Intergroup conflict along religious, ethnic, language, racial, class or even regional lines, or some combination of these, usually played a crucial role. As a result of the migration of peoples over many centuries, most countries in today’s world have heterogeneous populations. A homogeneous population like that of Japan is rather exceptional.

The critical factor in heterogeneity is the degree of overlap between population cleavages. When the cleavages overlap substantially, the conflict potential increases accordingly: sub-national groups confront each other as opponents across the board, in all spheres of societal life. When this happens, there is little scope for compromise and adjustment. Community life becomes a battlefield driven by race hatred, class envy, cultural enmity and religious confrontation. There are numerous examples of sub-national cleavages or cultural differentiators so intense that they not only undermine the internal cohesion or stability of a state, but tear its components apart in violent conflict as unbending adversaries.

Racial differences, though resting on highly subjective stereotypes, usually create a visible and keen sense of differentiation among people. The key elements usually used as a basis for such social differentiation are variations in skin pigmentation, hair colour or texture, and facial characteristics. European Westerners are visibly different in an African, Indian or Chinese environment and vice versa. A sense of racial identity is often enveloped in an elaborate ideological wrapping that drives members of racial groups into opposing camps. Ethnic and language groups normally share a common culture, a common ancestry, a shared history and the kinship bonds of blood relationships. These groupings may overlap with racial characteristics.

Religion provides a strong foundation for identity and cultural cohesion. History has shown that Islam, Christianity, Judaism, Hinduism, Buddhism, or particular sects or rites of these, can take a profound hold on people’s emotions. Common religion can produce a militant cultural identity combined with a profound sense of sacred mission. Coexistence of different religious communities within the same state is particularly difficult where religion regards the sacred and secular realms as inseparable. Throughout history, cleavages internal to major religions such as Christianity and Islam have been of great significance: Protestants and Catholics have been involved in devastating conflicts, as have Sunni and Shi’ite Muslims. From the Crusades of the Middle Ages to the current “Islamic State” and other militant Islamic movements, religious zeal has posed clear and present dangers to the preservation of peace. Religious conflict is capable of generating an intensity of identification which can override all other issues when it invests conflicts with a heavenly mandate to be pursued as a holy duty. It provides a supernatural imperative and removes the debate from the level of human rationality.

There are numerous examples of socio-cultural factors at work around the world. In post-war Europe the Czech Republic and Slovakia went their separate ways after a union between the two communities, attempted in the years following the Treaty of Versailles, failed to hold. Spain is desperately hanging on to two constituent communities that are keen to go their separate ways: the Catalans and the Basques. In Belgium the Flemish and the Walloons co-habit the country like partners in a forced marriage. The English component of the United Kingdom has already been forced to let go of the Irish and is now faced with a Scottish push for separation. The Scandinavians, although sharing a homogeneous Nordic ancestry, a staunch Protestant religion and the legacy of the Kalmar Union of 1397, have co-existed as separate states for more than five centuries. The Germans only managed to become a national state in 1871, after many centuries of internecine strife between their many historic principalities. The Swiss have managed to bring their three linguistic communities – the German, French and Italian components – under a single political blanket by adopting a finely honed canton system of regional self-government.

The Middle East is a quagmire of ill-conceived national boundaries that take no proper account of patterns of socio-cultural solidarity. After the fall of the Ottoman Empire, the cartographers of the victorious British and French powers carved up the Middle East into inchoate political systems. The Kurds were apportioned to four distinct states – Iraq, Syria, Turkey and Iran – where they were subjected to suppression by the dominant national groups. Both Syria and Lebanon included segments of communities that should not have been forced to live under one political umbrella.

In Africa, the colonial powers looked down upon regionally based kingdoms or communities as “native tribes” that needed to be “detribalised”. The vitality of ethnicity as a basis for solidarity was totally overlooked as the colonial cartographers carved up the continent during the “scramble for Africa”. The boundaries of the African states were not demarcated on the basis of nation-building around a core nationality surrounded by outlying minorities. Many African states are still struggling to find a modus vivendi between their ethnically diverse population groups: in Southern Africa, in the Sudan, in Nigeria, in many parts of West Africa, in Libya and in the Horn of Africa.

India is a unique laboratory of diversity in terms of religious, linguistic and caste cleavages. These differentiators have played an important role in the generation of conflict. Religion split the country in 1947 into Hindu and Muslim camps, ultimately leading to partition into India, accommodating the Hindu population (with a large remnant of Muslims), and Pakistan – whose eastern wing later broke away as Bangladesh – to accommodate the Muslims. Hindi is India’s official language, but there are 14 other officially recognised languages. Moreover, there is even a lack of consensus over what proper Hindi is: the Hindi of the Sanskrit sources, or the Hindi of the marketplaces and villages, which absorbed an admixture of Urdu. It is claimed that village Hindi is a diverse series of dialects, often mutually unintelligible, which are in some areas replaced by other languages such as Telugu, Rajasthani, Bihari and Punjabi. The varna system involves three main castes – the Brahmins, Kshatriyas and Vaishyas – with the Shudras as a fourth category and the “untouchables”, the Dalits, standing outside the system altogether. The actual pattern of the social hierarchy varies greatly from region to region. Economic development and urbanisation have lessened India’s caste divisions; the system essentially survives in the marriage market. It is important to note that 150 million of India’s 1.2 billion people are Muslims, which implies a much larger conflict potential than is commonly supposed. Terrorism from aggrieved Muslims tends to draw a violent response from Hindus. Kashmir is India’s only Muslim-majority state, with an ever-present potential for pro-independence protests. In Manipur, on India’s border with Myanmar, a number of tribally based separatist groups (including a Maoist group) require a constant counter-insurgency strategy by the Indian army. Both Pakistan and Bangladesh are semi-hostile territories from an Indian perspective; both are desperately poor and sources of constant harassment along the border areas. India desperately needs a sustained high rate of economic growth to lift its millions to a higher standard of living. In this quest it probably lags at least twenty years behind China.
But India does have one important advantage over China: much of the world perceives it as a well-intentioned democracy – chaotic, but neither inscrutable nor malign.

East Asia, with its total population approaching 2.5 billion people spread over thirteen countries, is the most dynamic part of today’s world. With China as its dominant component, it is inevitable that Chinese traits predominate throughout East Asia: a Mongoloid ancestry, a largely Buddhist religious orientation (with the exceptions of Islamic Malaysia and Indonesia and the Catholic Philippines), a susceptibility to order, strong family ties, a pre-eminence of collective duties and responsibilities over individual rights, and a strong work ethic. The East Asian countries with the most impressive success records are Singapore, Japan, South Korea, Taiwan and China (since the reform measures introduced by Deng Xiaoping in 1978).

Singapore, a fully functioning city-state with its Chinese, Malay and Indian population, has, under the “benevolent dictatorship” of Lee Kuan Yew, become a global hub for commerce, finance, shipping and travel. It is distinctive on many counts: it is stable despite its population’s diversity; its government is “meritocratic” yet clean, efficient and committed to the national interest despite the lack of checks and balances in the form of a strong political opposition; it is socially conservative and maintains a much-admired system of schools, colleges and universities; it successfully assimilates newcomers, since more than half of its resident population was not born there; the country has no slums or homelessness, thanks to the Housing Development Board’s provision of housing estates funded by the Provident Fund, into which a big chunk of everybody’s pay goes, allowing people to pay a deposit on a flat; the government has always favoured self-reliance and family support over welfare handouts; means-tested schemes are available to help the needy and low-paid; the jobless are channelled into “workfare” and training; immigration policy has been managed to maintain ethnic balance; and strict laws prohibit speech or writing that might cause racial or religious offence. Since 1976, Singapore’s GDP growth has averaged 6.8 percent a year, unemployment has been kept under 2 percent, its savings rate has been maintained at nearly 50 percent of GDP with investments averaging 30 percent of GDP, its surplus on the current account stood at around 19 percent of GDP, and its budget surplus was 5.7 percent of GDP for 2014. Could Singapore serve as a role model for a modernising China?

Japan’s post-war recovery strategy provided the template for other East Asian countries: industrial development and export promotion; macro-economic policy stability; interaction with the Western world’s technology, science and trade; encouragement of internal savings and investment practices; the combination of market forces with economic policy planning; emphasis on education, training and technology; promotion of an entrepreneurial culture combined with effective business networking; non-disruptive labour relations; and an emphasis on maintaining law and order.

On the next level are the “aspirationals” such as Malaysia, Thailand and Indonesia. In Malaysia a crucial role was played by Dr. Mahathir who managed to keep its multi-racial and multi-religious population reasonably stable by a policy of moderate “positive discrimination” to advance the laggard Muslim Malay Bumiputra. Despite Mahathir’s efforts to foster racial harmony, his Bumiputra favouritism fostered a proclivity for rent-seeking and a culture of cronyism. The Chinese economic elite fled from Malaysia to Singapore in large numbers. Those who stayed behind in Malaysia relied on their resilience to survive and continued to form the backbone of the Malaysian economy.

Indonesia’s 250 million people are scattered over some 6,000 inhabited islands and divided into around 300 ethnic groups speaking close to as many languages and dialects. Today, Indonesia has the largest Muslim population in the world. Indonesia’s democratic transformation, known as the Reformasi, started after the end of decades of authoritarian rule, first under Sukarno and then under Suharto, during which parliament routinely rubber-stamped legislation sponsored by the military and put forward by the President’s cabinet. Today, Indonesia’s public life is closely linked to its “primordial” social structure: the rural peasantry, the secular aristocracy and the Islamic clerics. With its effective veto, the armed forces cast a long shadow over the country’s political life. The Indonesian Reformasi is still a work in progress: much needs to be done to reform its unreliable judicial system, its weak banking and finance infrastructure and its widespread corruption. It needs to build improved standards of public accountability and a dynamic non-governmental sector, and to keep its domestic Muslim fanatics under control.

Thailand, with its 70 million population, is a Buddhist country that avoided colonial control by yielding Lao and Cambodian territory to France and Malay territories to Britain. Today it is governed pro forma by a ceremonial constitutional monarchy, but in practice by a military regime. Since 1947 the country has periodically been controlled by a succession of generals, interspersed with coalitions orchestrated by army generals and business tycoons. The country has for decades been characterised by drug smuggling, nepotism and corruption. Endowed with many natural resources and fertile soil, Thailand has the potential to become a prosperous country if only it could improve its civic culture.

The laggards of East Asia comprise North Korea, Vietnam, Laos, Cambodia, Myanmar and the Philippines. Added together, these countries are populated by more than 300 million people. The Philippines, a country of around 100 million people speaking more than 80 languages and dialects, is slowly stumbling into the modern age after being badly governed first as a Spanish and then briefly as an American colony, then under harsh Japanese military occupation, and since World War II by a succession of bad and corrupt governments. North Korea, Laos, Cambodia and Myanmar are all plagued by authoritarian, military-based, corrupt, kleptocratic governments or megalomaniac leaders. They are all examples of the destruction of warfare (e.g. Vietnam) and of utterly bad and corrupt systems of government. It remains to be seen whether the dynamism of neighbouring China, Japan and South Korea could spill over to these distressed societies.

Looking at the world around us, it should be clear that political life can only be properly understood if it is analysed in terms of the socio-cultural setting of the human beings who give it form and content. Society is, after all, the product of the interactions of individual and group relationships. Since ancient times the development of the specific socio-cultural characteristics of communities and societies has been predominantly influenced by religion as a foundation for their moral values: thinking and feeling about what is right, just, fair, preferable, true and universally compelling as ethical rules. No other determinant has acted as comprehensively in the formation of distinct socio-cultural characteristics in societies as their views of supreme reality and mankind’s role in its functioning.

It is instructive to imagine what the USA or Australia or Argentina would have been like had they been colonised by Muslim or Chinese immigrants. A host of influences came in the wake of the immigrants who settled in the USA, Australia and Argentina: values, belief systems, traditions, practices, knowledge, skills, institutions and other ways of doing things. The USA is often referred to as a “melting pot” nation, but the essential point to grasp is that the receptacle in which the various groups were “melted” was an Anglo-Saxon receptacle. The same applies to Australia, but not to Argentina, which was settled not by British Protestants but by Spanish, Portuguese and Italian Catholics. Among the key elements carried along by British settlers were the English language, English concepts of the “rule of law”, the political philosophies of the Age of Enlightenment, Scottish banking practices, and the Protestant values of individualism and dissent. These were the values Jefferson set forth in the Declaration of Independence and which became part of the American creed. Australia is described by some observers as “American lite” – to the chagrin of Australians.

Australia for many decades pursued a “White Australia” policy, excluding immigrants from “non-preferred” nations. It encouraged immigration especially from Great Britain, maintaining recruiting agents abroad and paying part of the passage of desirable immigrants. The consequence is that Australia is today an English-speaking constitutional monarchy with a parliamentary system of executive power, a legal system based on Common Law and a system of banking and finance modelled on the practices of the City of London. Australia subsequently admitted millions of immigrants from Central and Eastern Europe as well as from Asia under a points-based system taking into account the age, qualifications and experience of potential immigrants as well as shortages in the labour market.

In today’s world every country has its peculiar socio-cultural template which determines its ways of doing things. Without it, a society would be a mere collection of people like marbles on a plate without cohesion or specific values, drifting without direction through the fads and fashions of world events.


1. Key Determinants of Human Affairs (February 2013)

Any systematic survey of human societies around the world reveals many similarities in patterns of behaviour. Most societies are in favour of peace and progress: a better future for themselves and their offspring. Each society has developed its own specific recipe for achieving a better future, some more successful than others for a variety of reasons. A closer look at the possible reasons for success and failure reveals distinct differences in the recipes or strategies followed. This does not imply that all societies have consciously strategised, selecting specific courses of action or responding to specific calls of divine revelation. But all societies have been shaped by a variety of causal factors: natural, ecological, demographic, cultural, social, institutional, political and economic. The interaction and motive force of these factors determine the degree of success they achieve. Generally speaking, modern societies do not rely on the intervention of supernatural forces, but most have been comprehensively influenced by religious traditions.

Within the analytical framework of social science, the determinants of human achievement are interpreted as a combination of nature-nurture factors. Under “nature” is normally understood the particular combination of inherent qualities belonging to a person by birth: talents, abilities, instincts, characteristics, disposition and tendencies. Under “nurture” is normally understood the non-genetic external influences that modify, nourish, educate, train or condition individuals after their birth. Accomplishments in the many spheres of life are determined by the interaction of innate potential with the opportunities coming one’s way, whether structured, spontaneous or by chance. Some people of great potential have limited opportunities; others may not have the talents to exploit their opportunities, or may simply squander their chances. Some are very fortunate when, as Machiavelli said, “the goddess of fortune smiles their way”.

In the case of nations, countries or regions, similar forces are at work. A country’s economic fortunes are determined by a combination of natural endowments and human action, manifested by the interaction of its geography and its history.

Nature’s Endowments

The world is strewn with examples of nature’s uneven endowment of “given” factors: latitude, climate, rivers and lakes, topography, mean temperatures, humidity, seafronts, mineral resources, arable land and soil quality. Nature’s unequal distribution of its favours is not easily remedied by human action, but humans can make a difference.

On a map of the world in terms of product or income per head, the rich countries lie in the temperate zones, particularly in the northern hemisphere; the poor countries in the tropics and semi-tropics. With a few notable exceptions, equatorial countries are largely stifled by problems associated with a low standard of living and a short life expectancy. The world shows a wide range of temperature patterns reflecting location, altitude and the declination of the sun. These differences directly affect the rhythm of activity of all species. Animals have adapted and evolved in their own way. Mankind generally avoids the extremes – unless driven by greed to exploit petroleum or minerals, or assisted by modern heating or cooling technology. In general the discomfort of heat exceeds that of cold. Year-round heat tends to encourage the proliferation of insects and parasites. Water distribution is also of critical importance for human habitation. Regular and predictable rainfall promotes the cultivation of food crops. Recurrent floods and droughts are serious constraints on agricultural development. It is no accident that settlement and civilisation followed the main rivers of the world: the Nile, the Volta, the Indus, the Tigris and Euphrates, the Ganges, the Rhine, the Volga and the Mississippi.

Western Europe is a good example of the favourable conditions existing in the temperate zone. The privileged European climate is largely a gift of the Gulf Stream, rising in tropical waters and working its way in clockwise rotation, bearing heat and rich marine life. This geographical good fortune gives Western Europe warm winds, gentle rain, water in all seasons and low evaporation. Though not idyllic, these conditions favoured good crops, big livestock and dense hardwood forests. Europe’s climate is more equable along the Atlantic and becomes more “continental” as one moves east toward the Polish and Russian steppes, with wider extremes of both moisture and temperature. Along the Mediterranean coast the temperatures are kind, but rain is sparser and the soil yields less: olive trees and grapes do better than cereals, and pasture pays more than agriculture. Throughout its history, Europe knew famine and disease, long waves of cooling and warming, epidemics, pandemics and bad crops. Yet Europeans enjoyed a diet rich in dairy products, meat and animal proteins. They grew taller and stronger while staying relatively free of worm infestations. Healthier Europeans lived longer and worked closer to their potential than communities living in tougher environments. By comparison with many other communities, Europeans were very lucky. (See David Landes, The Wealth and Poverty of Nations, London: Little Brown & Co., 1998, pp. 17-22.)

China ranks as one of the most successful human settlements in the world. With some 7 percent of the earth’s land area, it supports some 21 percent of the world’s population. For more than 2000 years, the peoples at the eastern end of the Asian steppes exchanged nomadic pastoralism for the higher yields of sedentary agriculture. Their leaders evidently saw the link between numbers, food and power. The Han people, as they called themselves, settled along the Yellow River and its branches where they cultivated rice, millet, sorghum, barley and later also wheat. As they moved south into the Yangtze basin and beyond, they found that the wetter, warmer climate, mild winters and long summers permitted double cropping: winter wheat and summer rice in submerged paddies. They kept animals for ploughing, hauling and as mounts for the army – and pigs as their primary source of meat. Sheep and dairy products were largely unknown. In the 17th and 18th centuries they added new plants from distant lands: peanuts, potatoes, sweet potatoes and yams. A labour-intensive, water-intensive model became an important feature of Chinese development.

The exploitation of substances obtained by mining has played a crucial role in economic growth around the world, bringing employment, regional development, trade and export growth. These substances occur in nature. Sometimes they comprise inorganic material of definite chemical composition, such as quartz, or aggregations of inorganic materials such as metal-bearing ores or rocks. Other natural products are of fossiliferous organic origin, such as asphalt, hydrocarbons or coal.

Because deposits of metals in rock are rare and difficult to extract, it took centuries before anyone worked out how to remove the material and then work it into something useful. In time, people found places with enough metal-bearing ore to make it worth carrying to kilns, where the ore could be heated until the metals it contained melted. Once a way was found to pour and collect the molten metal, the process of smelting and casting had been discovered, which made it possible to extract larger amounts of metal from the ore. All sorts of items such as tools, weapons and ornamental objects or jewellery could then be made of copper, tin, silver and gold. Metals such as copper and gold were easy to work into jewellery, but they made poor tools. The solution was to combine metals to make an alloy that was hard-wearing: mixing copper and tin produced bronze, which was tough, easy to work and could be sharpened. Liquid metal could also be cast in a mould, and casting became popular because it made it easy to produce all sorts of complex shapes. Since hammering hardened the metal, that method was used to make objects like tools and weapons.

Archaeologists have established that the use of copper developed in Asia, the Balkans and Iberia, where the metal was available in abundance, around 9000 BC. By 6000 BC, smelting and casting had developed in these areas. With the development of better trade routes, knowledge of metal-working gradually spread to surrounding areas. By 2000 BC, bronze was widely used in Asia for everyday tools and weapons. The importance of bronze working led historians to call this period the Bronze Age. But bronze did not reach Australia, South America or many parts of Africa. In such places people may have used gold or copper occasionally, but they mostly made do with stone technology.

Bronze was a useful metal, but not as hard as stone. Then, around 1300 BC, metalworkers in the Middle East discovered iron. Iron-working gradually spread throughout the Middle East and into Southern Europe. Iron weapons were used by empire builders such as the Hittites of Turkey to conquer new territory. The Greeks used iron weapons to build colonies around the Mediterranean, and in India the use of iron made metal technology widely available. It enabled the Celtic people of Europe to protect their hill fortresses with iron swords during the Hallstatt period. Archaeologists have found European metalwork and coins dating from 450-100 BC.

Human Action

The history of the world records the amazing progress of humankind, from the Stone Age to the Space Age. Looking into humankind’s development reveals the ideas, abilities and processes that created the modern world within the framework of available natural resources.

Civilisation today represents how far humankind has developed since the appearance of the first humans, or hominids, in prehistoric times. By trial and error people acquired the knowledge and skills that would allow them to survive: which plants and fruits to eat, how to make weapons to hunt animals and protect themselves, how to live safely in family groups, how to develop special skills in a co-operative lifestyle, how to plant seeds and herd animals, and how to establish permanent settlements.

The process of civilisation gradually emerged as villages developed into towns and then into cities. Rulers with strong support conquered nearby regions and brought them under their control. Civilisation started at different times and blossomed at different tempos in various parts of the world. Some areas, such as the great plains of North America and some regions of the Middle East, Far East and Africa, did not develop civilisations because they could not be easily farmed. Soil types, distance from water resources, climate, all affected the nature of the civilisations that emerged in any particular area.

Warfare, exploration and the constant search for raw materials developed as trade increased between chieftaincies or principalities. New forms of warfare and weapons continued to develop as peoples such as the Greeks, Romans and Vikings journeyed through and around Europe as well as west toward North America. The Chinese explored eastern Asia and the Polynesians roamed the vast Pacific Ocean. The Mongols dominated Central Asia and from there penetrated South Asia and East Asia, spreading the Muslim religion.

From the 1500s, exploration and conquest became major factors in increasing the wealth of several European countries: Portugal, Spain, the Netherlands, France and Britain. These countries created trading networks that reached across the globe. Explorers from these countries created maps of most of the world and probed into the unknown territories of North and South America, Africa and Asia. Traders, soldiers and priests followed in their footsteps – and empires were built and eventually lost.

(See Niall Ferguson, Civilization – The West and the Rest, Allen Lane, London, 2011, pp.1-18)

The Landes Paradigm

David Landes, in his remarkable historical survey The Wealth and Poverty of Nations, examines the various factors that could possibly explain the divergent economic outcomes of the process of development in different societies. Some were much more successful than others. Landes acknowledges the importance of material factors such as climate, latitude, location and resources, but attaches much more importance to “nonmaterial” factors such as values (culture) and institutions. He further points out that such concepts as “values” and “culture” are not popular with economists, who prefer to deal with quantifiable (or, more precisely, definable) factors, but says “... life being what it is, one must talk about these things...”

On the basis of his survey of the experience gained in many countries in the course of history, Landes outlined what he called the “ideal case” – the society theoretically best suited to pursue material progress and general enrichment. He cautioned that this does not necessarily mean “better” or “superior”: it simply means “... one fitter to produce goods and services”. (See Landes, op.cit. pp.215-219)

Landes drew up a list of “ideal-typical” characteristics or standards a “growth-and-development” society would have to comply with. Such a society would be one that: “1. Knew how to operate, manage, and build the instruments of production and to create, adapt, and master new techniques on the technological frontier. 2. Was able to impart this knowledge and know-how to the young, whether by formal education or apprenticeship training. 3. Chose people for jobs by competence and relative merit; promoted and demoted on the basis of performance. 4. Afforded opportunity to individual or collective enterprise; encouraged initiative, competition and emulation. 5. Allowed people to enjoy and employ the fruits of their labour and enterprise.” (Landes, op.cit. p.217)

Landes then argues that these standards imply certain corollaries: gender equality (in order to double the pool of talent); no discrimination on the basis of irrelevant criteria (race, sex, religion, etc.); also a preference for scientific (means-end) rationality over magic and superstition (irrationality). He remarks that the tenacity of superstition in an age of science and rationalism is surprisingly common: it even beats fatalism. It is a resort of the hapless and incapable in the pursuit of good fortune and the avoidance of bad. It is also a psychological support for the insecure. Hence the persistent recourse to horoscopic readings and fortune telling.

David Landes also compiled a list of measures which “the ideal growth-and-development” government would adopt. Such a government, he suggests, would for example do the following: “1. Secure rights of private property, the better to encourage saving and investment. 2. Secure rights of personal liberty – secure them against both the abuses of tyranny and private disorder (crime and corruption). 3. Enforce rights of contract, explicit and implicit. 4. Provide stable government, not necessarily democratic, but itself governed by publicly known rules (a government of laws, rather than of men). If democratic, that is, based on periodic elections, the majority wins but does not violate the rights of the losers; while the losers accept their loss and look forward to another turn at the polls. 5. Provide responsive government, one that will hear complaint and make redress. 6. Provide honest government, such that economic actors are not moved to seek advantage and privilege inside or outside the marketplace. In economic jargon, there should be no rents to favour and position. 7. Provide moderate, efficient, ungreedy government. The effect should be to hold taxes down, reduce the government’s claim on the social surplus, and avoid privilege.” (See Landes, op.cit. pp.217-218)

Again, Landes adds corollaries to embellish the “ideal society”. The ideal society would be honest: honesty would not only be enforced by law, but rest on a generally held belief that honesty is right (and also that it pays), and people would live and act accordingly. The society would also be marked by geographical and social mobility. People would move about as they sought opportunity, and would rise and fall as they made something or nothing of themselves. This society would value new as against old, youth as against experience, change and risk as against safety. It would not be a society of equal shares, because talents are not equal; but it would tend to a more even distribution of income than is found with privilege and favour. It would have a relatively large middle class. This greater equality would show in more homogeneous dress and easier manners across class lines.

Conceding that no society on earth has ever matched this ideal paradigm, Landes admits that “... it is designed without regard to the vagaries of history and fate and the passions of human nature.” Landes again: “... the most efficient, development-oriented societies of today, say those of East Asia and the industrial nations of the West, are marred by all manner of corruption, failures of government, private rent-seeking.” Landes claims that this paradigm nevertheless highlights the direction of history – that it outlines the virtues that have promoted economic and material progress. It remains to be seen to what extent development patterns around the world today show a resemblance to the historical trends implied by the Landes paradigm.

Divergent Patterns of Growth and Development

Britain was the first industrial nation to come close to the model of a “growth-and-development” society. It had the ability to transform itself and adapt to new things and ways of doing things. In particular, England had the precocity to increase the freedom and security of its people and to open its doors to migrants with knowledge and skills, such as Dutch, Jewish and Huguenot refugees. Many newcomers were merchants, craftsmen and old hands of trade and finance, and brought with them their networks of religious and family connections.

The Industrial Revolution started in Britain, then changed the world and the relations of states to one another. The goals and tasks of political economy were transformed. The world was now divided between “... a front-runner and a highly diverse array of pursuers.” Britain became a commercial power of considerable potential and the principal target of emulation from the beginning of the 18th century. While Germany was still a collection of squabbling principalities and France was recovering from the turmoil of the French Revolution, the British Empire was streaking ahead. It took the quickest of the European “follower countries” more than a century to catch up – and to surpass. (See Table 1)


Table 1 Estimates of Real GNP per Capita (Selected Countries, in 1960 US Dollars)

              1830   1860   1913   1929   1950   1970
Belgium        240    400    815   1020   1245   2385
Canada         280    405   1110   1220   1785   3005
Denmark        225    320    885    955   1320   2555
France         275    380    670    890   1055   2535
Germany        240    345    775    900    995   2750
Italy          240    280    455    525    600   1670
Japan          180    175    310    425    405   2130
Netherlands    270    410    740    980   1115   2385
Norway         225    325    615    845   1225   2405
Portugal       250    297    335    380    440    985
Russia         180    200    345    350    600   1640
Spain            -    325    400    520    430   1400
Sweden         235    300    705    875   1640   2965
Switzerland    240    415    895   1150   1590   2785
UK             370    600   1070   1160   1400   2225
USA            240    550   1350   1775   2415   3605

(Based on figures provided by David Landes, op.cit. p.2322)
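The catch-up dynamic implied by Table 1 can be made concrete by converting the tabulated figures into average annual growth rates. The sketch below is an illustrative calculation only: it uses the 1830 and 1970 values exactly as tabulated, and the selection of countries is arbitrary.

```python
# Average annual (compound) growth rates implied by Table 1
# (real GNP per capita in 1960 US dollars, 1830 vs 1970).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1.0 / years) - 1.0

# (1830 value, 1970 value) taken directly from Table 1.
table1 = {
    "UK":      (370, 2225),
    "USA":     (240, 3605),
    "Germany": (240, 2750),
    "Japan":   (180, 2130),
}

for country, (v1830, v1970) in table1.items():
    rate = cagr(v1830, v1970, 1970 - 1830)
    print(f"{country:8s} {rate * 100:.2f}% per year")
```

Even at such modest-looking annual rates (roughly 1.3 percent per year for the UK against nearly 2 percent for the USA), compounding over 140 years multiplies income many times over, which is why the USA, starting level with Germany in 1830, ends the period far ahead of the early front-runner.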

Socio-Cultural Factors

Economic historians like David Landes and Niall Ferguson highlighted the importance of “socio-cultural” factors as determinants in the affairs of men: in the ascendancy of the West and now, also, in the advent of a new Eastern epoch. In his provocative Civilization – The West and the Rest, Ferguson argues that what distinguished the West from the Rest were the mainsprings of its power: complexes of institutions and associated ideas and behaviours. These are summarised under six headings: competition, science, property rights, medicine, the consumer society and the work ethic. Ferguson denies that this is simply another self-satisfied version of the “Triumph of the West”. He argues that it was not just Western superiority that led to conquest and colonisation; it was also the fortuitous weakness of the West’s rivals.

Ferguson argues that “the West” is more than just a geographical expression. It is a set of norms, behaviours and institutions with borders that are blurred. It is possible for “Western” norms, behaviours and institutions to be embraced by Eastern societies, as many of them already seem to be doing. Much of “Western Civilisation” is at any rate based on influences derived from ancient sources in the Middle East and further to the south-east and east. The West is simply the pre-eminent historical phenomenon of the second half of the second millennium after Christ.

It must be understood that socio-cultural factors, whatever their source, are universally powerful determinants of societal trends and behaviours. The “socio-cultural” parameters refer to the complex network of interactions between individuals and between groups within societies: their customs, beliefs, morals, habits, store of knowledge and ways of doing things. These characteristics are acquired simply by being members of society: by living together. The impact of these socio-cultural characteristics or phenomena cannot be easily quantified or validated by rigorous empirical research methods. But it is simply a matter of observation that socio-cultural factors have real and comprehensive consequences for all aspects of societal life.

The Question of Providential or Divine Reality

Throughout history there have been numerous anecdotes of “miracles”: miraculous events have happened. But the causes of such events are not clear. For believers in a monotheistic supreme Providential Power or devotees of other Divine Powers, the explanations provided by the scientific study of the world around us are inadequate. Without necessarily denying the validity of scientific evidence or findings, they believe proper regard must be had to the role of Providential or Divine influence which overrides and determines all events and outcomes in the world of experience and physical phenomena. The various religions or belief systems use divergent explanatory systems to understand the scope and nature of Providential and Divine intervention in worldly affairs.

Over the centuries, some of the major religious movements have accumulated a large volume of theological literature to explain the dogmatic principles and precepts of their belief systems. They have also developed distinct rites and ceremonies to express their devotion to their belief systems. Of particular relevance is the question of knowing with any degree of certainty the extent of Providential and Divine intervention. How is this knowledge attained and how can its certainty or validity be established? Are these interpretations mere hypotheses, propositions of faith and belief or mere speculations not claimed to be intersubjectively transmissible knowledge? When it is contended that there is “knowledge” of factors – based on “intuition” or “inward seeing” or “belief” – which are beyond empirical-logical proof, the answer of the “scientific method” is not that there is no such knowledge. It is merely that we cannot intersubjectively prove it to be correct.

Belief systems are, epistemologically speaking, a completely different category of “knowledge” from science. Although all forms of knowledge can claim correctness, for any such claim to stand it must be open to verification or refutation, or be left standing as neither verified nor refuted. Since the Age of Enlightenment it has become the scientific convention not to blend scientific and religious arguments, because it was found impossible to prove the existence of Providence or the Divine in an intersubjectively conclusive manner. In the pursuit of scientia transmissibilis, religious “truths” can, at most, acquire the standing of a scientific “working hypothesis”.

To speculate about things unknown has been a characteristic activity of human beings since ancient times. But in modern times even theologians have increasingly refrained from attempts to offer “scientific” proof of the reality of Providential or Divine Power. They tend to focus more on inner experiences, “presupposing” or “assuming” the reality of the Providential or the Divine.

Many a scientist, if asked whether he or she believed in God, would most likely pose a preliminary counter-question such as: what is meant by God? If allowed to define the word God or Deity in his or her own way, a scientist could find a modus vivendi with believers. If the term “God” is meant to refer to any kind of supreme or supra-human being equipped with the power to think, to plan and to act, and thought of as the creator either of the entire universe or, at least, of the moral world of the “good”, then the scientist is likely to be bound by the impossibility of proving the existence of such a God in an intersubjectively demonstrable and conclusive manner. However, if someone calls the universe itself, or the laws governing it, “God”, but denies that this God can think, plan and act spontaneously, then the concept of Providence or Divinity is being used in a different sense than is commonly understood.


In line with the impotence of science to prove the reality of Providence or the Divine, all deductive arguments that start with the recognition of godliness and the allotment of definite attributes to such a Providential or Divine Power, such as absolute goodness, absolute knowledge and absolute power, have come to be considered either “non-scientific” or “extra-scientific”. No form of metaphysical order or ontological reality can be demonstrated intersubjectively in a conclusive manner.

Modern science has equally recognised that it is impossible to prove that there is no God and, consequently, to disprove the absolute validity of ethical postulates founded in beliefs of Providential or Divine reality. The validity or otherwise of such beliefs is beyond scientific demonstrability. But what science can do – within the parameters of its own methodological rules and procedures – is to study, analyse and demonstrate the impact or influence of specific religions on human behaviour. It can analyse and recognise the role a religion plays as a foundation for moral values such as thinking and feeling about what is right, just, fair, preferable, true and universally compelling as ethical rules. Since ancient times, religion has played a predominant role in the development of specific socio-cultural characteristics of communities and societies. It has exerted influence specifically on their cultural ways of doing things: on courtship and marriage, customs and traditions, ethical rules of behaviour and socio-cultural priorities. A cursory glance at the international experience over the past three millennia shows that no other determinant has acted as comprehensively in the formation of distinct socio-cultural characteristics in societies as their views of supreme reality.

Bibliography

Ferguson, N. (2011) Civilization – The West and the Rest, Allen Lane, London
Landes, D. (1998) The Wealth and Poverty of Nations, Little Brown & Co., London
Olson, M. (1982) The Rise and Decline of Nations, Yale University Press, New Haven


2. The Role of Religion (February 2013)

Writing about religion is always fraught with pitfalls, because religion does not lend itself to casual rational analysis or discourse. It is based on what people believe about matters on which people hold strong convictions. History has only a few examples of people sacrificing their lives for the sake of a rational conclusion, but millions and millions of people have offered their lives on the altar of the beliefs they have held.

The text of this manuscript has been written with careful consideration for the sensibilities and beliefs of the faiths concerned. Despite the caution taken, it is possible that specific depictions are influenced by the convictions of the author. As such it may not always accord with the understanding of each and every reader. No offence is hereby intended.

At the outset it must be clearly understood that in this presentation, religion is regarded as a socio-cultural phenomenon that can be treated as the subject matter of scientific inquiry and objective analysis. This presentation is further based on the assumption that religion can be studied in a dispassionate way. In today’s world there is a huge range of intellectual tools which can be used to better understand the phenomenon of religion: archaeology, history, philosophy, psychology and even neuroscience.

Describing and analysing the role of religion is not meant to place religion in general or any specific religious belief under scrutiny. There are many arguments to be had over religions: whether supreme reality is a God or not; the origins of “sacred texts”; the finer points of scripture interpretation; conceptions of the nature of God; the binding nature of religious commandments; whether God’s creation is a work in progress; whether a harmony can be found between reason and faith; the limitations of naturalism; whether mystical experiences are hallucinations; whether practical reason, or scientific reason, or pure reason can rule out faith as unreasonable. There are no atheological shortcuts to ending debate about faith. These issues, though important, are better dealt with in philosophy of religion texts.

Religion Defined

By “religion” is meant any belief system based on the idea that there is an omniscient, supreme (supra-human) deity or intelligence or force equipped with the capability to act as the designer, creator and mover of the entire universe, including everything in it – all natural and moral phenomena. It is necessary to realise that belief systems sometimes hold tenets which contradict one another. In addition, divisions within and between religious groups often lead to violent conflict and bloodshed.

Most religions are characterised by both dogmatic and ritualistic aspects. The dogmatic refers to perceptions of divine revelations and the ritualistic to the rites or ceremonies embedded in historical tradition but symbolically related to the beliefs held. Religions the world over show great variation in the beliefs held and in the relative emphasis upon ritual. The dogmatic and ritualistic elements of religion normally find expression in organisational structures such as churches, shrines and priesthoods. The interaction of followers and priesthoods normally becomes the conduit of the revelations of the founder. The priesthoods pass these on by some process of ritualistically sanctioned endowment involving training of both character and mind. Because the priesthood is a holy estate, it is characterised by certain taboos, such as the celibacy rule of the Catholic priesthood and other monkish orders, and by rituals such as sacramental worshipping ceremonies.


The major religions still active in today’s world made their appearance during the past 3000 years. Judaism became monotheistic; Zoroastrianism enveloped the Persian empire; Hinduism penetrated India; Buddhism arose to challenge Hinduism; Taoism and Confucianism were founded in China; Christianity spread from the Roman Empire into Europe; and Islam took root in the Middle East, North Africa and South East Asia.

We know that since ancient times, religion has played a prominent role in the formation and development of communities and societies. The most important of these roles are the following:

- offering an account of the origins and nature of reality and humanity’s relationship with it;
- offering a basis for communal identity, social affiliation, cultural cohesion and territorial attachment;
- offering a foundation for moral values such as thinking and feeling about what is right, just, fair, preferable, true and universally compelling;
- offering a sense of sacred mission exerting a profound hold upon people’s emotions and imagination while providing a fertile source of social and political cleavage driven by assumptions of a divine or supernatural imperative.

Religion’s Origins

Speculative curiosity is a universal characteristic of human nature. Since ancient times human beings have tended to speculate about things unknown to them. But the methods of inquiry and the body of accumulated knowledge only advanced slowly and ambiguously. The scientific method of inquiry only emerged in the middle of the second millennium of the Christian era. Before that time, people had to rely on other sources of knowledge: their imaginations or illusions, their observations or experiences and the utterings or teachings of persuasive individuals among them.

Edward Tylor, a pioneer of social anthropology, claimed that the primordial form of religion was “animism” – the attribution of life to the inanimate. It means considering rivers, clouds or stars as living things and seeing living and “non-living” things alike as inhabited by (animated by) a soul or spirit. This “ghost-soul” or vaporous force infuses everything – rivers, clouds, birds, animals and people too – with animated life. Tylor’s theory rested on the view that the primitive mind is imbued with the “psychic unity of mankind”, which is embedded in universal human nature. He saw animism not as bizarrely inconsistent with modern thought, but as a natural early product of the same speculative curiosity that had led to modern thought. Animism had been the “infant philosophy of mankind” assembled by “ancient savage philosophers”. It did what good theories are supposed to do: explain otherwise mysterious facts adequately.

In Tylor’s view, the hypothesis that humans have a “ghost-soul” handily answers some questions that must have occurred to early humans, such as: what happens when you dream? In many primitive societies people still believe that when people sleep, a dreamer’s “ghost-soul” wanders and has adventures the dreamer later recalls. The idea that the souls of dead people return to visit via dreams is widespread in primitive societies, even today. Animism also handles another enigma that confronts human beings: death. Death is what happens when the soul leaves the body permanently.

Tylor claimed that once early humans had conceived the idea of the soul, extending it beyond our species was a short logical step. They recognised the phenomena of life and death, health and sickness, will and judgment in plants and animals and, not unnaturally, ascribed some kind of soul to them. Once a broad animistic worldview had taken shape, Tylor believed, it started to evolve. The notion of each tree having a spirit gave way to the notion of trees being collectively governed by “the god of the forest”. This incipient polytheism then matured and eventually got streamlined into monotheism: “Upwards from the simplest theory which attributes life and personality to animal, vegetable and mineral alike ... up to that which sees in each department of the world the protecting and fostering care of an appropriate divinity, and at last of one Supreme Being ordering and controlling the lower hierarchy.” (Tylor, 1866, quoted by Robert Wright, The Evolution of God – the Origins of our Beliefs, Little Brown, London, 2009, p.14)

Robert Wright maintains that Tylor’s theories have recently lost some stature on the grounds that they make the evolution of gods sound like an exercise in pure reason, when in fact religion has been deeply shaped by many factors, ranging from politics to economics to the human infrastructure. Modern cultural evolutionism places more emphasis on the various ways that rituals, beliefs and other elements of culture tend to spread and expand by appealing to non-rational parts of human behaviour.

But Tylor’s views still hold up well today in that they explain how early humans developed religious precepts out of their efforts to make sense of the world. Early humans did not have the benefit of the insights of modern science to give them a head start. They had to rely on their own pre-scientific insights and conclusions. Subsequently, religion has been shaped by a diversity of forces. As understanding of the world grew – especially as it grew via scientific discovery – religion also evolved in reaction. Thus, Tylor wrote, does “an unbroken line of mental connection” unite “the savage fetish-worshipper and the civilized Christian”.

Robert Wright maintains that at this level of generality, Tylor’s worldview has not just survived the scrutiny of modern scholarship, but drawn strength from it. Wright says that evolutionary psychology “... has shown that, bizarre as some “primitive” beliefs may sound - and bizarre as some “modern” religious beliefs may sound to atheists and agnostics – they are natural outgrowths of humanity, natural products of a brain built by natural selection to make sense of the world with a hodgepodge of tools whose collective output is not wholly rational”. (See Robert Wright, op.cit. p.15)

To understand Tylor’s animism-to-monotheism scenario, we have to imagine how the world looked to people living many millennia ago, not just before science, but before writing or even agriculture. There are no detailed records of beliefs that existed before writing. All that is left are the objects that archaeologists uncover: tools and trinkets and, here and there, a cave painting. The vast blank left by humanity’s preliterate phase is today filled by the literature on hunter-gatherer societies.

Using hunter-gatherers as windows on the past has its limits. The anthropological record contains no “pristine” hunter-gatherer cultures that were wholly uncorrupted by contact with more technologically advanced societies. The process of observing a culture involves contact with it and it is well-known that many existing hunter-gatherer societies had been contacted by missionaries or explorers before anyone started documenting their religions.

Although observed hunter-gatherers are no crystalline examples of religion at its moment of origin tens of thousands of years ago, they are the best clues available to generic religious beliefs for the period before around five thousand years ago. The anthropological record reveals at least five different kinds of hunter-gatherer supernatural beings:

- Elemental spirits, where parts of nature are considered to be alive, possessing intelligence and personality and a soul;
- Puppeteers, where parts of nature are controlled by beings distinct from the parts of nature themselves;
- Organic spirits, where natural phenomena are considered to have supernatural powers, such as snow-making birds, evil-predicting coyotes, etc;
- Ancestral spirits, where the spirits of the deceased are omnipresent and can do as much bad as good;
- High gods, where some godly being is in some vague sense more important than other supernatural beings or forces and is often a creator god.

The common element in all of these primitive perceptions of “gods” or “spirits” is that they purport to explain the otherwise mysterious workings of nature: why it snows, why wind blows, why thunder crashes, why dreams occur, etc. The dynamics of nature are explained in supernatural terms – at least in terms that today’s scientific world would consider supernatural. The irony is that hunter-gatherers would not label their beliefs and rituals as “religious” or “supernatural”. The use of this terminology is a modern phenomenon. Ancient Hebrew, the language of the Old Testament, also had no word for “religion”. (See Robert Wright, op.cit. pp.17-20)

The World’s Religious Make-up

The predominant religions in today’s world are Christianity, Judaism, Islam, Hinduism, Buddhism and Confucianism. These religions have crossed national boundaries and their followers number in the hundreds of millions and, in the case of Islam and Christianity, more than a billion. Other religions, such as Zoroastrianism, Shintoism, Taoism and Sikhism, are largely local and are inseparably related to small-scale traditional societies.

Although hard numbers in questions of faith are scarce, the Pew Research Centre, a “fact tank” in Washington D.C., issued a report in 2010 on the state of religious belief around the world. It estimates that around 5.9 billion adults and children – approximately 84 percent of the world population in 2010 – have some kind of religious affiliation. Even of the over 1 billion persons who are unaffiliated, many profess some belief in a higher power. Asia has by far the largest number of people who claim to have no religion, most of whom are Chinese – living in an officially atheist country. But 44 percent of Chinese respondents in the Pew survey said that they had worshipped at a gravesite or tomb in the past year. China also has the world’s seventh-largest Christian population, estimated at 68 million persons.

The report of the Pew Research Centre states that in 2010 the distribution of religious affiliation among the 5.9 billion believers was as follows: Christian 31.5 percent, Muslim 23.2 percent, Hindu 15 percent, Buddhist 7.1 percent, Folk/Traditional 5.9 percent, Jewish 0.2 percent and other (including Sikh, Shintoist, Taoist and Jainist) 0.8 percent. The median age of religious groups was highest (between 30 and 40 years of age) for the Jewish, Buddhist, Folk/Traditional and “other” categories. For Christians the median age was 30 years, for Hindus around 25 and for Muslims around 22 years. Around 405 million persons adhere to folk religions (i.e. not to the Abrahamic religions or to Hinduism or Buddhism). Around one quarter of the world’s believers live as religious minorities. (See The Economist, December 22nd, 2012, to January 4th, 2013, p.96)


Doctrinal Foundations

The doctrinal foundations of the Abrahamic religions, Judaism, Christianity and Islam, are all rooted in the Biblical Old Testament and crystallised over a period of many centuries. In the case of Judaism, the Mosaic texts reached written form much later than the actual historical period they describe. In the case of Christianity, which is based on the New Testament, the written texts were also consolidated centuries after the actual events described. In the case of Islam, the Koran texts, based on utterances made by Muhammad, were likewise written down much later than the events described. In all three cases the elaboration of doctrine and scholastic theology was a relatively late development.

Whereas Judaism is a particularistic, ethnic-centred religion focused on a strict definition of the Jewish people as descendants of the Israelites, both Christianity and Islam are universalistic in appeal and have been carried to the far corners of the world by messianic missionary zeal. Many questions remain. Who wrote the Bible or the Koran? When were they written? Are their accounts of creation and history correct? What parts should be interpreted figuratively?

Hinduism, the oldest and most enduring of the Eastern group of religions, had no founding figure like Christianity or Islam. It is deeply rooted in and has grown organically on the Indian subcontinent over a period of at least three millennia. Many primitive aspects survive alongside more highly developed philosophical systems. Like Judaism, it has a distinctly ethnic character and does not focus on being spread to other cultures.

Buddhism is more missionary in focus and has spread to various parts of East Asia. “Buddhism” is a name comparatively recently given by Westerners to the vast synthesis of teachings, more than 2500 years old, of a man called Siddhartha who was born in India. At the time of the original Buddha there were no known materials suitable for writing or engraving, so most of his teachings had to rely on human memory and word-of-mouth transfer. It is not clear what proportion of the original message might have been lost or changed in the transfer process.

Taoism and Confucianism are both considered religions as well as philosophies. Both are traced back to the period 550 to 450 BC. They originated in ancient myths and practices through the teachings of famous scholars such as Tsou Yen and Yang Chu (from the schools of Lao-tzu and Chuang-tzu), who were among the original interpreters of Tao, the source of all being and not-being. What Westerners refer to as “Confucianism” is not a religion in the strict sense, but a traditional view of propriety and a code of manners to be respected by the Chinese gentry. Over the past two millennia, various schools of Confucianism have given revised interpretations of what such an exemplary life should entail.

Judaism

Judaism originated in the Biblical period of the Old Testament, around 3000 years ago, and was gradually consolidated by Rabbinic Councils and enshrined in the Talmud. The first part of the Old Testament, known as the Five Books of Moses (the Torah or Pentateuch), is considered by Jews to be a direct and most fundamental divine revelation, delivered to Moses by Yahweh (God) on Mount Sinai.

The gist of Judaism is the concept of Yahweh as the creator of heaven and earth, transcendent and free in absolute sovereign mastery. Yahweh took Israel unto himself as his chosen people through a

covenant that imposed responsibilities on them and constantly exposed them to divine judgment. This led to a direct connection between sinfulness and disaster as punishment. But equally important is the promise of deliverance and redemption if they follow Yahweh’s commandments. Every part of a Jew’s life and body is under a divine charge.

In the tradition of Jewish orthodoxy, every letter of the Pentateuch is hallowed. This makes it difficult for its devotees to come to terms with modern Biblical scholarship and the findings of archaeological research. What is fact and what is fiction? Of the several historical accounts available, which are accurate? The form in which the Old Testament is available today is a compilation of several “sources” and “codes”. Who transcribed the ancient texts and how accurate are they? The Hebrew Bible took shape over several centuries, and the order in which it was written is not the order in which it now appears. It is not clear how many tribes of Israelites went to Egypt nor when they returned. There are also major disputes surrounding the identity and even the very existence of Moses as a historical figure. There is more than one Biblical canon: the Masoretic text, the Samaritan text, the Greek text (hence the name Pentateuch and the Greek names of the first books) and the Dead Sea Scrolls copied by the Qumran sect.

Today the Bible serves as a key document for reconstructing most of Israelite religious history. It is a religious interpretation of the Israelites’ historical experience and of their perceptions of their interaction with Yahweh – not a prima facie record of actual history.

Judaism has a complex relationship with Jewry. Judaism cannot survive without Jews because only Jews, or persons converted to Judaism, can be Judaists. Judaism has for centuries been handed down by many generations of Rabbis. Throughout, Rabbis have acted as the interpreters, articulators and guardians of the Judaic Torah. They decided who qualified as a Jew and what the text of the Bible meant. This has given Judaism an exclusiveness which is inevitably frowned upon by other communities.

After the middle of the second century AD, the diaspora of the Jews took them to many countries. Everywhere they distinguished themselves as an isolated, closed community of assiduous traders with wide-spun family, religious and financial networks. As traders they became stockpilers, hoarders and accumulators of money. They developed a reputation as money lenders, which raised the question of usury. They were permitted by the Bible to charge interest on loans to Gentiles, but not to other Jews. Thus their charging of interest became synonymous with something hostile. It became calamitous for Jews in their relations with the rest of the world: they were disliked and mistrusted. As a result of their concentration on money lending to make a living, the Jews became caught in a vicious circle of financing activities and being disliked. It gave Jews a bad name and gave rise to anti-Semitic outbursts wherever they settled all over the world. The prime examples are the burning at the stake of thousands of Jews under the Spanish Inquisition and the banning of all Jews from Spain at the end of the 15th century, the ghettoisation of Jews in Italy, Poland, Germany and Russia and, ultimately, the horrible extermination camps in Nazi Germany in the period 1940-1945. During the centuries of persecution, many thousands of Jews converted to the local religions in order to survive and prosper. Two famous examples of converts to Christianity were Benjamin Disraeli, who became Prime Minister of Britain, and Karl Marx, who became the founder of the communist ideology.

In 2010, the world’s total Jewish population stood at around 13.6 million. Of the total, 5.7 million lived in Israel, 5.3 million lived in the USA, 483,000 in France, 375,000 in Canada, 292,000 in Britain,


205,000 in Russia, 182,000 in Argentina, 119,000 in Germany, 107,500 in Australia, 96,000 in Brazil, 72,000 in Ukraine, 71,000 in South Africa and the remainder in Hungary, Mexico, Belgium, the Netherlands, Italy and Chile. In the USA, Jews have long been prominent in the Wall Street world of finance, as they are in the City of London.

Christianity

Christians believe that the Old Testament is “God’s Word” and a record of his ordering of history. To the body of Jewish scriptures were added the writings of the disciples of Jesus Christ and the Epistles of St. Paul. Canonical lists were drawn up by the 4th century in the form of the “New Testament”.

Christians believe that Jesus of Nazareth came to earth as the “Messiah” promised in the Old Testament. As the “Son of God”, Jesus was to become, through his death by crucifixion, the “divine agent” who removed the barrier of sin between man and God in a victorious struggle over the powers of evil. Thus the Christian community became the righteous heirs of Abraham, the New Israel. It follows that the Christian community, composed of Jew and Gentile alike, appropriated the role of the Jewish people as God’s chosen people.

In essence, a Christian Creed involves the following components:
- a belief in one God, the Father almighty, maker of heaven and earth, of all things visible and invisible;
- a belief in one Lord Jesus Christ, the only begotten Son of God, who came down from heaven, and was incarnate from the Holy Spirit and the Virgin Mary and was made man, and was crucified under Pontius Pilate, and suffered and was buried, and rose again on the third day and ascended to heaven, and sits at the right hand of the Father, and will come again with glory to judge living and dead, of whose kingdom there will be no end;
- a belief in the Holy Spirit, the life-giver, who with the Father and the Son is worshipped and co-glorified, who spoke through the prophets; and in one holy all-embracing Christian Church;
- confession in one Baptism for the remission of sins, looking forward to the resurrection of the dead and the life of the world to come.

Since the Reformation began in the 16th century, Christendom has split into two major branches: Catholicism and Protestantism. Whereas the Catholic tradition was based on hierarchy and the authority of the Popes, Cardinals and Bishops, the Protestant emphasis was on reading the Bible and on the individual’s personal relationship with his or her concept of God. Luther called this the “priesthood of believers”, and this democratic spirit coloured most of the other Protestant denominations that came after him. Calvinist congregations, Baptists, Unitarians, Presbyterians, Methodists and many others ran their own churches and selected their own clergymen. Protestantism, once the Bible was freely available in the vernacular, tended to foster the kind of debate and discussion which is the core of democracy.

Islam

Islam is the proper name of the religion traditionally called Mohammedanism in the West. It is based on the revelations uttered by the prophet Muhammad (Mohammed), who lived in Arabia from around 570 AD to 632 AD. His revelations were collected after his death in the volume known as the Koran (Qur’an). From the Koran, supplemented by statements and rulings traced back to Muhammad, a system of law and theology was derived in subsequent centuries. These combined with elements and

precepts of Judaism and other sources to form a distinctive Islamic civilisation which has continued to grow into modern times.

The Koran is the source of the guidance and instructions required by all Muslims for their daily lives: the obligations of prayer, alms, fasting and pilgrimage; the definition of the basic institutions of marriage, divorce and inheritance; and the outline of the general structure of law. Muhammad preached that all men and women must surrender their will to Allah. Like Christians, Muslims believe that a day of judgment will come and that they must so order their lives that they are not judged unfavourably by Allah and thereafter punished in hell with all its terrors.

After the death of Muhammad, the community of Islam was involved in a civil war over succession. The majority faction is called the “Sunnis”. Opposed to them are the “Shia”, who are now concentrated in Iran. The common interest of all Muslims requires each believer to join with other members to “strive in God’s path” for its defence against internal and external enemies. This “Holy War” (jihad fi sabil Allah) has taken different forms in different ages. In recent years a range of secret jihadist networks has appeared, particularly taking advantage of the West’s civil rights guarantees of freedom of conscience, assembly and speech. In this way they are able to spread hate-filled messages and create fifth-column activists within Western societies.

Throughout today’s Islamic world, two opposite trends compete for ascendancy: Islamic theocracy, propagated by fundamentalists at one end of the spectrum, and liberal democracy, propagated by a smaller contingent of secularists at the other. The Islamic theocracy movement is currently the more prominent – even after the “Arab Spring” of recent years. The momentum of secular liberal democracy is sporadic and faces many obstacles. Islamic theocracy has several obvious advantages: its messages are cast in simplistic religious terms, are easy to communicate by sloganeering and enjoy easy access to the communications network around mosques, and thus bear the authentic stamp of Islam. Secular democrats are committed by their own ideologies to tolerate the propaganda of their opponents, whereas the religious parties, such as the Muslim Brotherhood, have no such obligation. In fact, they go to great lengths to persecute secular or democratic views. Islamic theocrats diagnose the ills of the Islamic world as due to infidels and their loyal imitators and declare it a sacred Islamic duty to crush the anti-Islamic secular movements.

Hinduism

Whereas the Abrahamic religions generally teach that man is a special creation, possessing an immortal soul which is denied to lower animals, Hinduism maintains that all living things have souls, which are essentially equal and are differentiated only through karma, the effects of previous deeds, which conditions successive rebirths in different types of body. This is the doctrine of samsara, which has given a very distinctive character to much of Hindu thought and philosophy. All life – supernatural, human, animal, insect or even, with some sects, vegetable – is governed by the same law: there is no absolute beginning and no unique process leading to a final salvation. The world is eternal and is constantly renewing itself. There is no particular being as creator or as saviour. Different systems of thought or cults have equal validity. The paths to salvation are many and varied.

There are a few central tenets or guidelines which are not arbitrary:
- the cosmos is an ordered whole ruled by a universal law (karma), represented by a strictly hierarchical caste system and its purity laws;
- the cosmic cyclical periods (kalpas) are constantly ending and beginning again;


- the natural order also acts as a moral order.

There are three powerful gods in Hinduism – Brahma, Vishnu and Shiva – each representing different aspects or forms of the one original being in its activity as creator, preserver and destroyer of the universe. Brahma is the architect of the constantly changing components of the world. Vishnu, the preserver, embodies the principle of preservation through ethical and heroic deeds. Shiva embodies both creative and destructive forces: storms, illnesses and death. In addition, there are countless other deities in Hinduism as well as a complex network of cosmological units such as upper-worlds, netherworlds, hells and empty spaces. Time has no beginning or end. Depending on their karma, souls are endlessly and repeatedly reincarnated.

The Ganges River is the ultimate symbol of what Hinduism means. It is a gift of the god Brahma to the Indian people and its flow symbolises the circulation of life, death and reincarnation over billions of years. Throughout India, temples abound as wayside shrines. Hindus invest much time and energy in their elaborate decorations, paintings and carvings. They are monuments to the process of renunciation – letting go of earthly things, since you cannot take your wealth with you.

Buddhism

Buddhism is based on the insight that life means suffering and is painful because it is subject to illness, ageing and death. Life itself and the world around us are characterised by impermanence: an incoherent whole composed of a combination of parts that are constantly changing – combining, dissolving and recombining. Every individual is subject to this constant process of becoming and dissolving. The constituents of this process are subject to a strict set of laws (dharmas) which endure beyond death and form new combinations creating new “individuals”. There are links in the chain of aggregates (dharmas), and knowledge of these connections can usher in deliverance. The entire cycle of unhappiness begins with ignorance (the first link). The death of an ignorant person is inevitably followed by rebirth. Only the final death means entrance to nirvana. A better rebirth can be brought about by good deeds (like the Hindu karma) – a worse one is risked by evil deeds. What counts is not so much the deeds as the motives behind them. Any action guided by reason (and avoiding desire, hatred or envy) will promote salvation.

The philosophy of Buddhism has been described as “dialectic pragmatism”. It implies a theory of salvation where all activities, attitudes and motives are aimed at a rejection of worldly gain by embracing self-denial. The “Four Noble (or Holy) Truths” are considered to be a means to reach salvation and a way of promoting contemplation and self-discipline. These consist of the answers to four decisive questions:
- What is suffering?
- What is the origin of suffering?
- How can suffering be eliminated?
- What is the path to eliminate suffering?

Buddhism offers an “Eightfold Path” to overcome suffering: right view (understanding), right thought (intentions), right speech, right action, right livelihood, right effort, right mindfulness and right concentration.

The historical Buddha was named Siddhartha, also known as Gautama. He came from a noble North Indian family and is said to have lived around 560-480 BC. He gave up his comforts and fortunes

to set his “Wheel of Teaching” in motion: renouncing worldly things and dedicating himself to teaching how to overcome suffering. He left nothing in writing and his teachings were handed down in oral form. The main branches of Buddhism are the Hinayana (the southern branch, prevalent in Sri Lanka, Myanmar and Thailand) and the Mahayana (the northern branch, prevalent in Tibet, Nepal, China, Korea and Japan).

Taoism

Tradition has it that Lao-tzu (meaning “old master”) refers to a mysterious ancient figure who lived between 604 and 517 BC in the village of Chu Jen (Hu province). His real name is uncertain, and the writing attributed to him, the Tao-te Ching (Tao means “the way”, hence “The Book of the Way and its Power”), is considered one of the most seminal works of Chinese thought. The manuscript contains 81 short chapters, several of them in rhyme and in the form of aphorisms. It offers definitions of Tao and the te (the powers or virtues of Tao), which is why the book is called “The Book of Tao and Te”.

Tao is the eternal source of all being and the force underlying everything – also the law governing the world and the ethical guide for correct action. It is an eternal Ultimate Oneness, the highest principle of the natural and social world, the path, the natural order – but at the same time nameless and indefinable. It can be understood as the universal law or the will of a Supreme Being – not as a static ideal, but as an active force. Tao is the primal source from which everything was formed. Out of the transcendental non-being, being emerges. This being, the Ultimate Oneness, generates within itself the duality of Yin (dark, female) and Yang (light, masculine). From the dualism of this principle, the breath of life is formed, which brings about harmony between the two antagonistic forces. This triad of Yin, Yang and the breath of life then generates the Many (thousands of beings). In this way Tao is the source of all beings, nourishes them with its power and brings them to completion. The many opposites appearing in the world (good/bad, heavy/light, long/short, etc) are mutually interdependent, but all require the harmonising power of Tao.

There are many interpretations of Tao-te, including wu-wei (non-action), which asserts that ideally people should withdraw from earthly things in order to live in harmony with Tao – in contemplative immersion of oneself in the conciliatory calm of self-restraint.

Popular Taoism is more inclined towards the mystical and magical than the philosophical. It created a hierarchy of deities: Yu Ching, the god of heaven; Tai-chi, the personified Tao; and the deified Lao-tzu. These three gods of good fortune help people who trust in them, while in the underworld, the ten “Kings of Hell” reign. In time, popular Taoism developed an ascetic attitude amongst monks – leading to the monastic ideal of “creative non-action”. Other interpretations proliferated so that by the 15th century the Taoist canon included 5318 works.

Confucianism

The name Confucius is the Latinised form of K’ung-Fu-Tzu, which means “Master K’ung”. His personal name was Ch’iu and his courtesy name Chung-ni; he was born around 550 BC, the son of a military commander. He was well-educated and showed a keen interest in China’s spiritual traditions. He worked as a teacher and counsellor and rose to become a Minister of Justice. He later resigned and wandered the country with his disciples for 13 years, then returned to his home in the dukedom of Lu, where he died in 479 BC. After his death he was honoured as a teacher by the Han Dynasty. Centuries later he was honoured with the erection of a temple in each prefecture of China. In 1086 he was posthumously

accorded the rank of a Chinese emperor and in 1906 accorded by imperial decree the same status as the deities of heaven and earth.

Confucius mainly wrote works on moral philosophy and state theory. His “writings” include contributions by his pupils as well as later followers. There are five canonical books attributed to Confucius: the “Book of Changes”, the “Book of Songs”, the “Book of History”, the “Annals” and the “Book of Rites”. There is also a collection of conversations between Confucius and his pupils, moral commentaries and writings by his student Mencius. Confucius’ philosophy of ethics starts with the assumption that man is by nature good and that all evil is the result of a lack of insight. Educating people to understand virtue and harmony is of supreme importance. Holy and wise men should be regarded as role models, which is why respect for parents and ancestors is essential. People should be educated to respect truth, goodness and generosity, to nurture family relationships and to maintain polite social manners. He also preached that the “Golden Rule” (reciprocity of treatment) should be followed as a guiding principle in human affairs.

Confucius did not invoke divine authority, commandments or revelations to support his moral principles. His is a rationally conceived, autonomous ethical system devoid of any metaphysical underpinnings. He was not opposed to religion, but he avoided all speculation on the transcendental. He did, however, place particular emphasis on cosmic harmony.

As Confucius made no reference to divine beings, his teachings are considered not a religion but a corpus of moral philosophy. But the cults and rites of worshipping Confucius developed by his many followers and admirers have turned Confucianism into a religion. Chinese thought distinguishes between chia (philosophy) and chiao (religion), but also recognises the connection between the two. Confucius is considered by the Chinese as a Ju (scholar) and thus as an expression of China’s intellectual culture.

Confucian social theory revolves around the central concept of jen, which means “humanity” and consists of five “virtues”: dignity, generosity, loyalty (integrity), hard work and charity. He later added moderation, equanimity and honesty.

Religion as Source of Values

We consider things “valuable” or “desirable” as instrumental values when they are conducive to achieving some purpose we pursue. But if values are sought or appreciated for their own sake, as ultimate ends, irrespective of any purpose, they are intrinsic values. When it comes to the question of which ultimate evaluations are right and which are wrong, which are laudable and which are reprehensible, we are faced with a conundrum. Science, per se, cannot take a stand beyond describing the consequences. The scientific method cannot help us to take an unconditional stand in matters of value judgement.

Some philosophers, such as Kant, thought that values must rest ultimately on natural laws, knowledge of which can be acquired through reasoning. Existentialists like Sartre and Camus argued that values simply rest on preferences and rejected any form of ultimate anchoring outside personal preferences. Others, like the German philosopher Arnold Brecht, also supported the view that ultimate values are based on preferences, but argued that certain kinds of preferences are universal, invariant, “inescapable” elements in the human way of thinking about ethical issues, particularly about “justice”. Followers of various religions anchor their values – distinguishing between right and wrong – in their

religion: God’s will as expressed through one or other form of revelation. Variants of this view are found among Christian and Islamic theologians. They derive their doctrines of “what ought to be” from religious sources. This source of values applies to the majority of people living on planet earth.

Religion and Art

Since ancient times religions have inspired people to express their feelings in artistic ways: in architecture, sculpture, painting, music and poetry. Each form of artistic expression has produced works of genius depicting religious themes and representations of religious events, symbols and figures. All over the world there are imposing edifices, mostly in the form of monuments, temples, cathedrals or churches, reflecting the creative designs of specific cultures and periods. Some of the earliest examples are the temples of Mesopotamia, the Hindu temples of South India and the geometric art of the ancient Greeks.

The great number of gods and goddesses in the Hindu world provides multiple pathways for Hindus to Brahma, the architect of many worlds. Hindu representations provide a rich tapestry of art in the form of architecture and sculpture. Temples abound like wayside shrines, and Hindus invest much in their elaborate decoration. Hindus believe you cannot take wealth with you; hence, letting go of things lies at the heart of Hinduism. Temples and shrines are monuments to renunciation – the giving up of worldly things. Hindu art reflects the belief that everything is symbolic.

The Romans developed the arch, vault and dome, and also pioneered the creative use of concrete, allowing immense interior spaces to be covered without inner supports – techniques later used in landmark churches such as St. Peter’s in Rome and the Hagia Sophia in Constantinople. The Roman Catholic Church set off a wave of cathedral construction throughout feudal Europe during the Middle Ages and the Renaissance period. Many examples exist of Romanesque- and Gothic-style cathedrals: St. Trophime at Arles, Reims Cathedral, Sainte-Chapelle, Chartres Cathedral and Cologne Cathedral. Medieval theologians believed a church’s beauty could inspire parishioners to meditation and belief. As a result, churches were much more than just assembly halls. The chief forms of inspirational decoration in Gothic cathedrals were sculptures, stained glass and tapestries. The Baroque period of the 17th and 18th centuries produced numerous churches designed to overwhelm the senses and emotions with architecture of unprecedented grandeur. (See C. Strickland, The Annotated Mona Lisa – Art History from Prehistoric to Post-Modern, Andrews and McMeel, Kansas City, 1992, p.30)

One of the most talked-about, exotic church designs in the world is the Sagrada Familia by the famous Barcelona architect Gaudí. His version of primitivism combined with Art Nouveau astonishes from whatever angle it is viewed. Initially it was intended to be a conventional building in the Gothic Revival style, but it became more exotic as it grew, and after many decades it is still under construction. In the chronology of tall buildings it is noteworthy that the cathedrals of Cologne and Rouen were the tallest buildings in Europe in 1880. Today the tallest church spire is dwarfed when seen from the rooftops of modern skyscrapers.

Sculpture also represents an age-old form of religious artistic expression. There are many examples all over the world: the Venus of Willendorf, the monoliths of Easter Island in the Pacific Ocean, the statuesque figures in the Greek temples and the revival of the Greek tradition in the sculptural masterpieces created by Michelangelo: the “David”, the “Moses” and the “Pieta”.


The Renaissance period saw a revival in the use of paintings in a religious context. By using the techniques of perspective and of light and shadow, artists were now able to create the illusion of depth and reality on flat surfaces. This enabled them to tell stories in their paintings covering religious themes that were in popular demand. Italy produced a number of exceptionally gifted painters who left a wealth of paintings with religious themes: Michelangelo’s paintings on the ceiling of the Sistine Chapel, depicting the biblical story of Genesis; Raphael’s Stanze, depicting a range of religious themes; Leonardo’s “Last Supper”, depicting Jesus Christ and his apostles; and Michelangelo’s depiction of “The Last Judgment” on the altar wall of the Sistine Chapel. This tradition was carried forward by Spanish, German and Dutch artists. Of particular relevance is the Dutch painter Hieronymus Bosch, who deviated from telling biblical stories to depicting the evil consequences of sinful behaviour. His “Garden of Earthly Delights”, displayed in the Prado, Madrid, is one of the most remarkable paintings of the Renaissance period.

All the ancient civilisations – Chinese, Indian, Inca, Aztec, Egyptian, Greek, Roman – had, to a greater or lesser degree, the ability to divide the different sounds they could make into higher and lower pitches, so tunes could go up or down. Modern music derives a good deal of its basic theory from the Greeks, who called the notes by letters of the alphabet in a primitive form of notation. But it was the Roman Catholic Church that set about codifying music systematically in the early Middle Ages. Up to that stage, virtually all music was part of a handed-down, oral tradition – most of which was improvised and never heard again. Pope Gregory (540-604) was the first to order a compilation and standardisation of the entire chant repertoire, but at that stage only the words could be written down, and the tunes had to be memorised by the monks. Then a Catholic monk, Guido of Arezzo, came along in the 11th century and provided a map for musical notes – a visual representation of sound – suitable for instant recognition of the relative pitches of notes by using his “sol-fa” system. He offered a model for the modern “scale” or musical ladder, and his clear stave notation changed the course of musical history. It was now possible to write down a sophisticated variety of music, and this paved the way for the emergence of a new, distinct species of musician, the composer: Bach, Handel, Vivaldi, Haydn, Mozart, Beethoven, Chopin, Schubert, Brahms and many others. The great composers of the baroque period were all closely associated with church life. They composed biblical oratorios that have remained monumental standards of classical music ever since. It is also claimed that the biblical oratorio paved the way for the emergence of the art of opera. Bach’s collection of keyboard compositions in the form of preludes and fugues in all the major and minor keys was an amazing feat, a landmark of European music history and among the most important pieces of music ever composed.
Bach spent most of his adult career as music director of the St. Thomas Church in Leipzig. Much of Western music written since the time of Bach follows the lead of the “equal temperament” of voices and instruments that he pioneered. Without it, even popular music could not function.

Religion as Source of Cleavage

Since ancient times religion has served as a binding force within communities – binding together people with the same values and aversions. Simultaneously, religion has served throughout history as a mark of distinction, giving rise to tensions and hostilities. The profound hold which religion is capable of exerting upon people’s emotions and imagination renders these cleavages especially intractable. A common religion can produce both a militant cultural identity and a sense of sacred mission. Where religion regards sacred and secular issues as inseparable, co-existence of different religious communities within the same area or state becomes peculiarly difficult.


Often certain aspects of religious membership are of high visibility to the community at large. The Sikhs in India are identifiable by their uncut hair, bound up in a turban. This distinctiveness assures a consistent reinforcement of both a sense of identity with their group and its uniqueness with regard to other groups. The same applies to the head scarves, burkas, hijabs or other veils used by Islamic women. They act as conspicuous differentiating factors which can also act as an annoyance to other groups. Religious taboos, especially dietary ones, may also provide a mark of differentiation. Throughout history, the most violent religious conflicts have included the Catholic-Protestant cleavage within Christianity, the Muslim-Christian cleavage exemplified by the Crusades in the Middle Ages, the cleavage between Shiite and Sunni Muslims, and the Muslim versus non-Muslim conflicts in the Sudan and Indonesia.

There are many examples around the world where cleavages within the population (religious or other) threaten the breakdown of the state as an integrated political system. The breakdown of the political system could occur through the withdrawal of a segment of it, either to become independent or to join another territory – such as the break-up of India to create East and West Pakistan and, subsequently, Pakistan and Bangladesh. In most instances, such breakdowns occurred as part of post-colonial separatist drives or secession movements. In a few instances, irredentism served as a drive to combine or unite areas to create a “homeland” for a religious or cultural community. The Malaysian Federation was split up to accommodate the separatist sentiments of Malay Muslims and Singapore Chinese. The Armenian cultural group pressed for the creation of an Armenian state to combine the Armenians living in Turkey, Iran and the USSR. The Kurds are still pressing for the establishment of a Kurdistan to combine Kurds scattered in Iraq, Iran, Turkey and Syria. In each of these instances, religious affiliation plays an important part.

The Middle East inherited the demarcation of several artificial or arbitrary political units which underlies endemic political rivalry and conflict. In Libya, Iraq, Jordan and Israel, the most salient identity has been supra-national rather than sub-national. The result is that cleavages cannot be easily solved by sub-national partitioning. In Lebanon, one of the world’s most culturally (and religiously) divided states, the cleavages between Sunnite, Shiite and Druze Muslims and Maronite, Greek Orthodox, Greek Catholic and Armenian Orthodox Christians are not geographical and therefore cannot be resolved by the simple expedient of fragmenting the state. Re-creating an Arab Palestine distinct from Israel is an equally complex challenge, with part of former Palestine precariously glued to Jordan and another part, the Gaza Strip, attached to Egypt on the opposite side of the artificially created state of Israel.

Although Buddhism is also divided into major rites such as the Hinayana or Theravada (Ceylon, Thailand, Cambodia, Laos, Burma) and the Mahayana (Nepal, Tibet, Mongolia, China, Japan), these divisions did not occur within territorial boundaries and did not lead to domestic conflict. However, where sects existed within Buddhism inside countries such as Thailand, Cambodia, Vietnam and Ceylon, they became the source of violent social cleavage.

Perceptions of Deity, Heaven and Hell

In Judaism, God is called Yahweh, but also by other names: Jehovah, Adonai, El or Elohim. Being without beginning or end, God is the Creator of everything from nothing, but also the Saviour at the end of time and the omnipresent actor in history. Despite the omnipotence and omniscience of God, people are responsible for their actions. They have the responsibility and capacity to make choices. They also have the power of reason, the ability to understand the ethical order of the world and to

direct their actions in accordance with its laws. Since all people are made in the image of God and are God’s creatures, the rights of the individual are limited by the rights of others. Mankind’s task is to actively shape the world according to God’s laws. Sin is rebellion against God’s law and the Divine order. Suffering, however, is a mystery in the Jewish faith. It can be experienced in three ways: as punishment, as a test of faith or as the atoning suffering of the righteous.

For Christians, God is conceived as a trinity: God the Father, God the Son and the Holy Spirit – all aspects of the divine. God the Father is seen, similarly to Judaism, as the creator of all things and the Lord of history and of judgment. God the Son (Jesus Christ) is the centre of Christianity and is connected with the salvation of creation and the redemption of humankind. The Holy Spirit, the most difficult to comprehend, can be recognised through its actions and is the source of the power of the church and its sacraments. The paradox of theodicy is how and to what extent a good, just and omnipotent God is responsible for evil in the world. Evil personified is depicted as having been created by God in the form of the fallen angel Lucifer, who fell from grace as a result of arrogance. Lucifer, or Satan, is God’s opponent or rival in designing the order of salvation. Satan takes advantage of human freedom, tempting an individual to turn away from God and to do evil.

In Islam God is called Allah, but belief in the one God is shared with Judaism and Christianity. Islam believes in the original revelation and covenant between God and Adam and in God’s promise to send prophets to all peoples. Mohammed was chosen as Allah’s messenger and the last link in a long line of prophets such as Ibrahim (Abraham), Musa (Moses) and Isa (Jesus). God is one and has no “son”, but Jesus will come again as a perfected Muslim and rule as the righteous King over a unified world. Everything that happens to humans is predetermined by God. This predestination has made the problem of human free will a controversial issue in Islam. Why does God lead some persons to the correct faith and thus to salvation and let others perish through their lack of faith? The Day of Judgment is a central element of Islam. Death is a separation of body and soul. Similar to the Apocalypse in Christianity, Allah’s severe judgment is a supreme disaster. Allah then separates the saved from the damned. Both the joys of Paradise and the tortures of Hell are depicted in extreme and sensory terms in the Koran. It is not simply faith that counts, but the practical expression of that faith. There is a superabundance of food and the pleasures of the senses in Paradise.

Hinduism allows its believers much freedom in metaphysical and philosophical questions since it is up to the individual to choose between a theist, pantheist or atheist pathway to the sphere of the deities. Hence there are almost countless deities in Hinduism. The philosophical views set out in the Sutras (scriptures), Shastras (teaching books) and systems of thought known in Sanskrit as darshana are literally different ways of seeing the truth (drishti) and are merely non-binding guidelines. Other religions are seen as merely alternative pathways to the diversity of deistic spheres designed by Brahma, the divine architect. The principle of Dharma, the law of the world, is that all living things are strictly different from one another and consequently have different tasks, obligations, rights and abilities. Hence among human beings there are different classes (castes) that are strictly separate from one another. The Dharma is the one eternal law for all living things, but it is expressed differently for the different castes and stages of life (ashramas). The diversity and similarity of living things result from the diversity of deeds in a former life that need to be rewarded or punished. The circular process of dying and being reborn is without beginning or end and continues eternally. But the moral order of retribution for deeds carries within itself the possibility of living things gradually perfecting themselves and ultimately experiencing salvation.


Since ancient times, all religions have imagined conditions in Heaven and Hell. Hindus, Muslims, Buddhists, Taoists, Christians and Jews all maintain theological depictions of Heaven and Hell – and devout followers still believe in them. Atheists do not believe in them and agnostics are uncertain. Until the Renaissance period Christian artists depicted Hell to be like Dante’s inferno: devils and pitchforks, lakes of fire with brimstone clouds and wailing souls. Heaven, in contrast, was visualised as an ideal paradisiacal existence of pleasant meadows and the everlasting happiness of all the saved souls. The Hindu Hell, or Yama Pura, is the oldest known, with its subdivisions of heated kettles and spikes. At the end of the torments of the Buddhist Hell, the purged soul returns to Earth as an insect or a reptile, entering the cycle again. From the Muslim Hell, purged souls eventually return to Earth. Judaism introduced the idea that good and bad should not both go to Sheol. The wicked should receive punishment as they deserved – especially if they have prospered from their wickedness on Earth. Conditions in Heaven were less clear-cut because the virtuous, like Job, could be struck with disasters and sores out of all proportion to their deficiencies.

The shape of Hell is visualised differently by the various religious faiths. In most cultures it meant a hidden place or hole of fire. Dante described Hell as an inverted funnel of several layers, with each layer deeper and narrower than the last. The Buddhist Hell is similar but in Hinduism it has several mansions, large and small depending on the religious offences. Buddhism also provides different places for each particular sin. For all religions there is a perception of a trial of some nature at the entrance to the Underworld: the damned are separated from the not-so-bad. Then comes the long fall and the fire.

Modern biblical scholars have gone a long way toward adjusting the perceptions of Heaven and Hell. Jesus himself made no reference to “Hell” or “damnation” in the New Testament. St. Paul said that God would have mercy on everyone. The idea of a topographical Hell gradually faded out of Christian theology. Thomas Aquinas in the Summa Theologiae wrote that the anguish of the damned stemmed from the knowledge that they could never reach happiness. Hell meant ceasing to hope, or hopelessness. Milton, in Paradise Lost, said that: “The mind is its own place, and in it self / Can make a Heav’n of Hell, a Hell of Heav’n” (See www.dartmouth.edu/Milton/readingroom/p1/book1)

In modern times perceptions of Heaven and Hell became less literally understood. Jean-Paul Sartre spread the idea that Hell is other people and the things they do. The Vatican also relented and redefined Hell as “a state of exile from the love of God”. For some Christian Fundamentalists in the American South, Hell is still a real place, as real as any specific place on the planet. Behind all the various perceptions of Heaven and Hell lie more fundamental questions. How can fear of Hell and hope in Heaven be reconciled? Can you have Heaven without Hell? Does the experience of pleasure or joy depend on escape from omnifarious pain or horror? Can people be inspired to do better without some threat of the severe consequences of failure?

Religion and Science

For many centuries followers of the various religions tended to consider their own sacred scriptures, such as the Bible and the Koran, as the primary source of knowledge. In the 17th century a new scientific movement emerged that challenged the Christian view of the world. People started to look for a new way of understanding the world. During the Renaissance the rising power of science provoked the Catholic Church into silencing rebellious scientists, some by burning them at the stake. By the 19th century

the Enlightenment had given rise to a new generation of scientists that pushed Christianity into retreat. Scientists like Darwin made discoveries that conflicted with religious doctrine. The scientific revolution placed individual curiosity and new ways of discovery above religious dogma. Science became the biggest challenge Christianity ever had to face. Science makes progress by challenging orthodoxy. Hence it tends to come into conflict with conventional wisdom. Often the orthodoxy challenged is fundamentally religious.

Aristotle was the first philosopher to argue that the universe is eternal, hence rejecting the idea of a theistic creation. Rational thinkers continued to challenge the religious concept of a creation. In the 5th century St. Augustine confronted the discrepancy between the Biblical cosmology and the findings of natural philosophy by arguing that whenever solid findings seem to contradict a piece of scripture, the contradictory passage should be interpreted figuratively, not literally. Hence Augustine, in theory, would have had no problem reconciling Copernicus or Galileo with the story of creation set out in the book of Genesis. Centuries later, the Dutch-Jewish philosopher, Baruch Spinoza, also made the claim that the best way to find out what the Bible means is to drop the idea that everything it says is the literal truth. He suggested that the Bible should be investigated as if it were any other historical document written by people affected by the outlook of their time and place.

For many centuries the Church dominated the intellectual world until the start of the scientific revolution generated by the Renaissance and the age of Enlightenment. The first major breakthrough did not occur in Rome, but in Frombork, a remote town on the Baltic coast of Poland, in 1543. Copernicus made the discovery that the earth was one of the planets circling around the sun and not stationary at the centre of the universe. Many others followed the scientific, evidence-based method of Copernicus and soon came into conflict with Catholic doctrine.

So began science’s darkest hour. The “Inquisition” was set up to defend the church against heresy. Scholars who speculated about the nature of the world found themselves branded as “heretics”. In 1600, when Bruno was burnt at the stake, his death was a devastating blow against science. It started a battle between faith and reason.

Galileo Galilei introduced a new telescope at a Jesuit College in 1609. But because he supported the “heretical” views of Copernicus, he was prosecuted by the Inquisition. By a vote of 7 to 3 he was found guilty. After the 69-year-old Galileo was shown the instruments of torture, he agreed to confess his error. But Galileo had demonstrated that science progresses by experimenting and that by testing ideas, facts can be established.

While the Inquisition of the Catholic Church was stifling scientific progress in Italy, the England of the 17th century provided a more tolerant seedbed for scientific expansion. William Harvey, having studied in Italy, set up a research institute in England focussing on the human anatomy and physiology. Harvey brought back from Italy a basic understanding of scientific methodology: making observations, measuring results, confirming or rejecting hypotheses and so advancing verifiable solid knowledge. Scientists were given the methodology to challenge the written words – whether those of Aristotle or those written in the Scriptures.

During the 18th century, the scientific movement swept through the Western world. Isaac Newton and John Locke found that the laws of nature were there to be discovered, not only read about in the published word. It was the age of Enlightenment, the age of Reason. Ideas about freedom, democracy and science replaced religion at the heart of society. In 1750 Benjamin Franklin, son of a Puritan,

suggested that lightning was just a form of electricity – not the wrath of God. As one of the founding fathers, he also played a crucial role in establishing the United States of America as a secular state.

Modern science provided the biggest challenge to Christianity in the form of Darwin’s Theory of Evolution. It suggested that life on Earth developed through a process of evolution by way of natural selection. Darwin, well aware of the challenge his theory posed to religion, was very cautious in releasing it. He delayed publication for almost 20 years after first writing it down. Darwin’s theory challenged the Christian belief that man was created in God’s image. Darwin removed the main argument for God’s existence because his explanation removed the need for some kind of divine intervention. The overwhelming evidence advanced by Darwin’s theory led the main Churches to concede that the world was not literally made by God, but they cling to the idea that God made evolution possible. This rests on the idea of a plasticine deity that can accommodate a variety of foundational explanations. It implies that the theology of modern Christianity is now fundamentally different from what it was four hundred years ago.

There are millions of Christians who still believe that the Biblical story of creation is literally true. They are the “fundamentalists”. In the USA, some fundamentalists came up with their own version of evolution, “Creationism”. A museum of creationist evolution was set up in Kentucky in 2007 to provide an exhibition of the evolution of the natural world according to the time-scale set out in the Bible – humans and dinosaurs living at the same time. Where science contradicts faith, in the eyes of the fundamentalists, faith prevails. But today a growing proportion of Christians do not believe in the literal truth of the Bible. Like Augustine and Spinoza they prefer to explore what the Bible has to offer on a broader level: the wisdom of its commandments and the beatitudes in the teachings of Christ.

For hundreds of years, the Bible was not seen as sacrosanct – verse by verse – by orthodox theologians. The meaning of individual texts had been disputed over many centuries. St. Augustine challenged the book of Genesis and its sequence of events. The Reformation itself was such a dispute. Chapters of the New Testament had been weighed one against the other in interpreting events in the life of Jesus Christ. Over the centuries of European history, more mental effort had been devoted to detailed debates about facets of Christianity than any other topic. There can be no doubt that the chronology of events in the creation myth of the Old Testament does not stand up to scrutiny. But eminent scientists themselves have been woefully mistaken in their own chronology of the evolution of all things in heaven and on earth. Scientists are equally confused about future trends in life on earth. Is it growing colder or warmer, and what are the critical determinants? Scientists should rather cling to the Socratic docta ignorantia than to the over-confidence of half-trained, dogmatic dilettantes.

Epistemologically speaking, it is generally agreed that the scientific method is unable to establish the validity of value judgments. We cannot intersubjectively prove any proposition of faith or belief to be true or correct. It is not denied that individual persons may have intuitional knowledge of ideas that may have plausible validity. But the point is that the truth or validity of such claims cannot be considered scientifically verifiable. Hence, religious references cannot be scientifically verified: the scientific method is unable to present proof for God’s existence. Those who continue to consider God’s existence scientifically verifiable can do so only by using the term “science” in a broad sense which admits evidence of a type that, however convincing it may appear subjectively, is intersubjectively inconclusive – scientia sive vera sive putativa, sed non transmissibilis. Today, theologians tend to refrain from attempts to offer “scientific” proof for God’s reality, focussing attention instead on the inner experiences that cause men to choose God.


Another Protestant Christian theorist, John Hallowell, argued that Christianity explains the facts of human nature and existence better than any other theory, and that its fruits, i.e. its consequences, testify to its truth. He argues that Christianity regards man as a rational creature, endowed by his Creator with reason and capable of distinguishing good from bad, justice from injustice. Reason, as a supernatural faculty, enables man to distinguish good from evil, to recognise evil in the world as the perversion of human will. Human freedom is rational choice. The truthfulness of Christianity lies in its correspondence to reality and derives from the inadequacy of all rival explanations of life. It enables us to live in the present without either complacent optimism or helpless despair.

Although these thoughts may have played a role in converting men and women to religion, there is still a gulf separating these claims from an intersubjectively conclusive scientific proof. Religious belief always leaves room for scientific doubt, though not for scientific refutation. Roman Catholic thinking has never abandoned the claim that God’s existence can be scientifically verified. Pope Pius XII’s address to the Pontifical Academy of Sciences of November 22nd, 1951, specifically maintained the claim despite advances, fully accepted by him, of astronomy and nuclear physics. Extended excerpts of his address were published by the New York Times, November 23rd, 1951, p.6. Many Catholic philosophers of the 20th century, including Jacques Maritain, continued to base their scientific teaching on God’s reality. Over the past 400 years it was found that the evidence of science often contradicts the Bible. Since the 18th century it was scientific reasoning that provided the driving force of civilised life. Will further scientific development make religion redundant? No one knows with certainty.

In recent years billions have been spent by the European particle-physics laboratory (CERN) at Geneva. The first task of CERN’s new machine, the Large Hadron Collider, is to search for the Higgs boson – an object that has been dubbed, with a certain amount of hyperbole, the “God particle”. Exactly what scientific contribution will flow from this huge investment is not clear.

In 2008 a further multi-million scientific study combining scholars from 14 universities was launched with the object of “explaining religion”. A range of disciplines from psychology to economics are involved. This ambitious attempt will last several years and will look at the mental mechanisms involved in sustaining belief systems, how religious beliefs may influence character development and collective benefits. It includes neurochemical research to find out how religious activity is spread across different parts of the brain and how the brain generates and processes religious experiences. Others focus on the links between religion and altruistic behaviour, collaborative activities, family planning, avoidance of smoking and drinking, healthier lifestyles and work ethic. Evolutionary biologists tend to be atheists. If the propensity to religious behaviour is an evolved trait, then atheists are not likely to benefit from its potentially beneficial effects!

Atheism and Agnosticism

“Theology” comes from a Greek combination of the words theos, meaning god, and logos, meaning reason, and is meant to explain a theistic worldview. Adding the prefix “a” forms a word for its contrary: a-theos or “not godly”. Just as “atheism” is the contrary of “theism”, theology has a contrary in “atheology”. Atheology is the intellectual effort to explain why a worldview should not include a god – it sceptically denies God’s existence, or anything divine or supernatural. (See J.R. Shook, The God Debates, Wiley-Blackwell, 2010, p.13)


Pollsters around the world find that few non-believers prefer to label themselves as “atheists”. This reluctance probably has to do with the negative connotations attached to “atheism” as a dogma. As a result the term “agnostic” was proposed in the 1860s by Thomas Henry Huxley as the contrary of “gnostic” – a Greek term for knowledge. Hence the term “agnostic” denotes a lack of knowledge about any ultimate reality such as a “supreme being”. Huxley offered agnosticism as a reasonable stance towards the overconfident dogmatic certainty of a religion or any overreaching conclusions of any other philosophy. The agnostic is sceptical towards both theology and metaphysics.

Agnostics and atheists are sometimes confused because both camps are similarly sceptical about supernaturalism. But despite the obvious overlap between agnosticism and atheism, there are important differences in their philosophical positions. The “atheist” clearly professes his/her disbelief in God and denies that God exists. The agnostic, in contrast, does not support such a dogmatic denial, professing instead to be an ignorant sceptic about the divine. Agnosticism has emerged as a non-belief alternative to atheism’s dogmas and religion’s faith.

Richard Dawkins has gained international notoriety as the “archbishop of atheism”. His major tome, The God Delusion, is an irreverent book, accusing Jesus of having “dodgy family ties” and describing the God of the Old Testament “as arguably the most unpleasant character in all fiction: jealous and proud of it; a petty, unjust, unforgiving control-freak; a vindictive, bloodthirsty ethnic cleanser; a misogynistic, homophobic, racist, infanticidal, genocidal, filicidal, pestilential, megalomaniacal, sadomasochistic, capriciously malevolent bully”. (The God Delusion, Bantam Press, London, 2006, p.31)

Dawkins is an evolutionary biologist and a popular communicator about science. He maintains that religious moderates make the world safe for fundamentalists and Jihadists by promoting faith as a virtue and by furthering an overly pious respect for religion. He believes any positive aspects of religion can be replaced by equally beneficial non-religious substitutes.

Dawkins examines the question of why religion is so widespread. It is found in all cultures despite the fact that worshipping deities is such an “irrational and wasteful habit”. Dawkins concludes that religion is a by-product of mental abilities that evolved for other purposes – such as the way children are “programmed” to believe anything their parents tell them, which is quite useful in the light of all the valuable information parents can share. But according to Dawkins, this transmission is vulnerable to becoming a conduit for worthless information that is passed on for no other reason than tradition.

Dawkins argues that the special appeal of religious ideas is based on their special compatibility with human psychology. Religion has at one time or another been thought to fill four main roles in human life: explanation, exhortation, consolation and inspiration. These are the areas, Dawkins argues, that should be the targets of attack by logical firepower.

As for exhortation, Dawkins argues that religion is not a legitimate source of morality. But Dawkins is less clear on what he considers a proper source of morality. He suggests as a source a combination of genetic instincts, which evolved because morals allowed humans to benefit more efficiently from co-operation, and a cultural Zeitgeist. Dawkins concedes that for some people consolation and inspiration are genuine benefits of religion. But these functions, he believes, can and should be fulfilled by other means. Dawkins argues that contemplation of the natural world can do the job, as illustrated by the perspective-altering discoveries of modern physics. But how many people can find consolation in quantum physics?


Dawkins proposes two strategies to expunge religion. First he wants to subvert the mode of transmission between parent and child. He considers religious upbringing as a form of indoctrination that he equates to child abuse. Second, he wants to energise atheists to become less stigmatised and more electable to public office.

Religiosity, Atheism and Secularism

As explained by John R. Shook, op.cit. pp.1-2, “Religion promises a rewarding relationship with the supreme reality. Religions offer views about what supreme reality is like, how best to relate to it, and why believers benefit from that relationship. Non-believers don’t deny that reality is impressive, but they doubt that any religion knows best about reality or how to relate to it. Non-believers instead use some non-religious world view, some account of reality and humanity’s relationship with it, that lacks any role for a god ... Respectful and rational dialogue among believers and non-believers, and everyone in between, holds great promise ... (and) could hardly be a waste of time.”

Atheism (and agnosticism) is associated with an optimistic world view expecting reason and science to explain everything and make life better for people everywhere. Atheists argue that the lack of religious belief does not necessarily cause moral and social deterioration, since most of the advanced, healthy and peaceful countries in the world are amongst the least religious. But what are the sources of their civility? The truth is that most people around the world still harbour some belief in a deity and argue that there will always be wicked deviants in any society.

Atheists often get blamed for secularisation, yet the process of secularisation was well under way in the West long before atheists were strong enough to achieve the separation of church and state. Secularisation is not the same as atheism. It has to do with religion’s control over society’s institutions and events. Secularisation has involved the removal of direct religious control over major political and social institutions. It prevents governments from favouring a specific religion and it also protects religions from government interference.

Is supreme reality a deity, or not? Having an answer to that question would reveal the true nature of religiosity and what its role could be.

Bibliography

Ferguson, N. (2011) Civilization – The West and the Rest, Allen Lane, London
Goodall, H. (2001) Big Bangs – The Story of Five Discoveries that Changed Musical History, Vintage, London
Hattstein, M. (1998) World Religions, Könemann, Cologne
Landes, D. (1998) The Wealth and Poverty of Nations, Little Brown & Co., London
Olson, M. (1982) The Rise and Decline of Nations, Yale University Press, New Haven
Shook, J.R. (2010) The God Debates – A 21st Century Guide for Atheists and Believers, Wiley-Blackwell, Chichester
Strickland, C. (1992) The Annotated Mona Lisa – Art History from Prehistoric to Post-Modern, Andrews & McMeel, Kansas City
Wright, R. (2009) The Evolution of God – The Origins of Our Beliefs, Little Brown, London
Zaehner, R.C. (ed.) (1998) Encyclopedia of the World’s Religions, Barnes & Noble, N.Y.


3. The Footprints of Abrahamic Religions (February 2013)

At the present time, there are no empirical research data available to analyse the role of religion in societies on a comparative and longitudinal basis. The only information available comes from case studies of the historical development of particular religions in various parts of the world.

Judaism and Jewry

Judaism is the source of the two largest religions in the world: Christianity and Islam. But where these religions are universalistic and missionary in orientation, i.e. focussed on extending their reach to all communities, Judaism is essentially particularistic and ethnically focussed exclusively on Jewish people. Christianity shares with Judaism its monotheistic roots in the Old Testament, but it has extended its universalistic message through the teachings of Christ as set out in the New Testament of the Bible. Islam also shares with Judaism its monotheism, its Abrahamic history and several precepts and practices, but it directs its missionary focus to all communities prepared to accept the teachings of Muhammed about Allah as set out in its sacred text, the Koran.

The Old Testament’s book of Genesis begins with an account of the Jewish creation myth: a universalistic description of the beginnings of man and of everything that is in Heaven and on Earth. Because man is created in the “image of God”, he also carries the divine spirit in him. This sets him apart, over and above the rest of creation. But the remainder of the Old Testament is mainly focussed on the history of the Israelites, whom it singles out as a “kingdom of priests and a holy nation”.

Judaism has a complex relationship with Jewry. Judaism cannot survive without Jews, because only Jews or converts to Judaism can become Judaists. Judaism has for centuries been handed down by many generations of Rabbis. Throughout, the Rabbis have acted as the interpreters, articulators and guardians of the Judaic Torah. They decided who qualified to be a Jew and what the text of the Bible meant. This has given Judaism an exclusiveness which is inevitably frowned upon by other communities.

After the middle of the second century AD, the diaspora of the Jews took them to many countries. Everywhere they distinguished themselves as an isolated, closed community of assiduous traders with wide-spun family and religious networks. As traders they became stockpilers, hoarders or accumulators of money. They developed a reputation as money lenders, which raised the question of usury. Although the practice of charging interest on loans was common in parts of Mesopotamia and among Phoenicians and Egyptians, Jews were not allowed to charge interest to other Jews. Deuteronomy 23:20 clearly stated: “Unto a stranger thou mayest lend upon usury, but unto thy brother thou shalt not lend upon usury”. By being permitted to charge interest to strangers but not to Jews, the charging of interest was made synonymous with something hostile. It became calamitous for Jews in their relations with the rest of the world: they were disliked and mistrusted. As a result of their concentration on money lending to make a living, the Jews became caught in a vicious circle of money lending and being disliked.

The traditional involvement of Jews in a variety of money-lending practices has given Jewry a bad name. “Being a Jew” in a proverbial sense has acquired the meaning of being a reclusive exploiter of other people. Since the expulsion of Jews from Spain, many Jews took their ticket to emancipation, or access to society, through baptism as Christians. A famous example is the baptism into the Anglican Church of Benjamin Disraeli in 1817 as a result of a quarrel between his father and the local synagogue. Jews were not legally admitted to parliament until 1858, and without his baptism Disraeli could never have become Prime Minister. Karl Heinrich Marx was baptised as a 6-year-old in Trier. His grandfather was a rabbi, but his father, Heinrich, was a child of the Enlightenment and a student of Voltaire and Rousseau. He was also an ambitious lawyer who became a Christian and, in due course, rose to be dean of the Trier bar. His son, Karl, instead of going to the yeshiva, attended Trier high school. During much of the first half of the 19th century Jews could not own land or exercise a trade or profession. It was only after the Prussian reform legislation of 1847 and 1848 that civil rights on a non-religious basis were established in the German states.

Karl Marx himself, paradoxically, reflected the anti-Jewish sentiments of his time. Like the French utopian socialists Fourier and Proudhon, who considered Jews the “incarnation of commerce”, the “source of all evil”, an “unsociable race, obstinate, infernal ... the enemy of mankind”, “a network of commercial conspirators against humanity”, Marx described Jews in utterly pejorative terms. In two essays on “The Jewish Question”, which he published in the Deutsch-Französische Jahrbücher in 1844, his terminology reflected typical anti-Semitic clichés: “the dirtiest of all races”, “leprous people”, “the Jewish money-men who never soil their hands with toil, exploit the poor workers and peasants”, the “disease of the Jews is the religion of money, and its modern form is capitalism”. Karl Marx expanded anti-Semitism from a conspiracy theory based on a parasitical race into an anti-capitalist theory of class conflict.

Wherever Jews settled as communities, their money lending led to trouble with the locals. The very fact of their constant displacement and resettlement probably had an invigorating effect on their cultural way of doing things and sharpened their business skills. They generally added a dynamic input to the areas where they settled. They became “expert settlers” as a result of having been forced to move throughout their history. As strangers and sojourners from their earliest origins, they had, over many generations and in a variety of circumstances, perfected the skills of concentrating their wealth so that it could be switched quickly from a point of danger to a safer destination. Their outsider status is well demonstrated in their attitude to dealing with money – a double standard for money dealings with Jews and Gentiles. This prepared Jews to take advantage of economic opportunities wherever they settled. They kept pushing the diaspora further in search of new business opportunities.

They were active over large parts of Asia from early medieval times, trading in silks and spices bought in the East and slaves brought from the West. They served as bankers in Muslim courts, taking deposits from Jewish traders and on-lending them to the Muslim caliphs. They thrived in Baghdad, Tunisia and Spain. Expelled Jews went to the Americas, where they set up factories and became plantation owners. They were particularly active in Brazil, where they controlled the trade in precious and semi-precious stones.

The Jews have always been skilful at using and transferring capital. Once they were established in Anglo-Saxon society, the security they enjoyed in law enabled them to accumulate assets. Trading, especially in articles of small volume and high value, such as jewels, easily concealed and whisked from place to place, no longer constituted the sole economic activity in which Jews could engage safely. In New York and elsewhere in America, the Jews soon moved to the centre of the financial stage. In England the Jews became the founding element in the financial market of the City of London.


As argued by Paul Johnson, the Jews made a contribution to the development of modern capitalism quite disproportionate to their numbers. They invented several financial instruments: bearer bonds, bills of exchange, banknotes and a variety of financial securities. They dominated the Amsterdam stock exchange and held large portfolios in the Dutch VOC and in the British East and West India Companies. They were the first professional stock jobbers and brokers in England. In 1792 they took the lead in creating the New York Stock Exchange. As a people without a country, the world was their home: the further the market stretched, the greater were the opportunities. Jews built the first textile mills in India and acquired control of the first diamond and gold mines in South Africa. The Jewish Randlords, in conjunction with Cecil John Rhodes and Lord Milner, succeeded in pushing the British Colonial Office into war with the two Boer Republics. Afterwards the trading of gold and diamonds became the mainstay of the City of London’s trading accounts in the first decades of the 20th century.

In modern times the centre of the world’s financial markets shifted to the Jewish bankers of Wall Street, New York. Then in 2008 these bankers and brokers dumped their toxic financial instruments (all kinds of derivatives) onto other financial markets and so precipitated the World Financial Crisis, which brought the fragile European economies into turmoil. These economies, in turn, have been brought to the brink of insolvency by their practice of chronic deficit spending financed by the unfathomable bond market – also a creature of Jewish financial acumen.

In the aftermath of the Global Financial Crisis, Washington demanded no scalps from Wall Street. No directors were brought to book. When the US Congress introduced what eventually became the “Wall Street Reform and Consumer Protection Act”, the financial interests of Wall Street evidently resolved that reforming Wall Street was too delicate a task to be left to Capitol Hill. The Obama Administration left it to the Secretary of the Treasury, Tim Geithner (a former New York Federal Reserve President and Wall Street confidant), to design a mild overhaul of Wall Street regulation. After numerous consultations with interested parties such as Blankfein of Goldman Sachs, the voluminous Dodd-Frank Act was chaperoned through Congress by Wall Street lobbyists (the largest contributors to the President’s campaign fund). The new system enshrined the “too big to fail” principle without effectively creating a straitjacket for the banks. It created a mechanism for seizing and winding down big failing firms and reinforced capital and liquidity buffers throughout the financial system, hoping thereby to reduce the danger of contagion caused by failing institutions. The changes effectively reshuffled the status quo and added regulatory bureaus to Washington’s already “balkanised” regulatory arrangement, ensuring that the regulated would continue to have opportunities to play off one regulator against another. The “moral hazard” problem remains as large as ever.

During the bill-writing process, interested parties such as Blankfein appeared before Congressional committees. The Sunday Times of November 8th, 2009, reported Blankfein as saying, in an interview with John Arlidge, that Wall Street was doing “God’s work”. No one was reported as commenting on God’s need to employ a good auditor to look into Wall Street’s shenanigans, and possibly also a good prosecutor to seek out the culprits – the scapegoat Bernie Madoff aside. Subsequently, in 2012, The Economist reported that the board of Goldman Sachs had awarded Blankfein the largest remuneration package ever received by a bank chief executive. God evidently rewards his “workers” very generously.

The finance and banking cronies of Wall Street and the City of London today sit at the crossroads of the world’s financial networks, making a living out of speculative investments in a range of highly complex national and international financial instruments. These financiers operate on a global scale, while the regulatory instruments, which the Jews also tend to dominate, have only a national footprint. They cannot function as an effective monitor of the world’s shadow finance and banking system, driven by secretive hedge funds and investment trusts based in tax havens and obscure financial centres.

Economic historian Niall Ferguson, in his remarkable study The Ascent of Money – A Financial History of the World, Allen Lane, New York, 2008, p.2, makes the following statement: “Throughout the history of Western civilization, there has been a recurrent hostility to finance and financiers, rooted in the idea that those who make a living from lending money are somehow parasitical on the ‘real’ economic activities of agriculture and manufacturing. This hostility has three causes. It is partly because debtors have tended to outnumber creditors and the former have seldom felt very well disposed towards the latter. It is partly because financial crises and scandals occur frequently enough to make finance appear to be a cause of poverty rather than prosperity, volatility rather than stability. And it is partly because, for centuries, financial services in countries all over the world were disproportionately provided by members of ethnic minorities, who had been excluded from land ownership or public office but enjoyed success in finance because of their own tight-knit networks of kinship and trust.”

Addressing the summit of the Islamic Conference on October 16th, 2003, Dr. Mahathir Mohamad, Malaysia’s Prime Minister for 22 years, stated: “The Europeans killed 6 million Jews out of 12 million, but today the Jews rule the world by proxy. They get others to fight and die for them”. When his remarks drew outrage from Jewish groups around the world and from Western governments, Dr. Mahathir responded that the reaction to his comments proved his point, for it showed that the Western media were controlled by Jews.

The important point is not merely the immensity of the accumulation of wealth (and power), but its obscure and disproportionate concentration in a few hands. It is not clear at what stage the concentration of power becomes manipulative and excessive. Lord Acton observed that power tends to corrupt and absolute power corrupts absolutely. Certainly the Jewish people do not wield absolute power in today’s world, but they do have a great deal of influence and soft power: money buys results. Some people may even wield enough manipulative influence and power to make self-fulfilling predictions – especially if they believe that they are “God’s chosen people”.

In 2012 Edward Luce, an Oxford-trained British journalist and Financial Times correspondent, published an interesting book entitled Time To Start Thinking – America and the Spectre of Decline, Little Brown, London, in which he paints a disturbing picture of the state of American society. According to Luce, the American elites fail to come to grips with the real problems facing the country. He points an accusing finger at the Republican Party and “Tea Party” leaders of the conservative right rather than at the socialist-liberal left “... because there is no such thing as a liberal ‘movement’; angry liberals are not as unified as angry conservatives ... and they are not nearly as numerous”. (Luce, op.cit. p.268)

Luce misses the point. Most of his contacts and sources of information and opinion are Jewish Americans: Wall Street financiers, Washington lobbyists, leftist academics, lawyers, teachers or journalists. Luce is not a reliable guide to the intricate policy-making structures of American society. To understand the functioning of the complex American political-economic society, proper regard must be had to the demographic determinants of party and voter allegiances, and particularly to the predominant role of the well-financed lobbyists on all levels of government, where money buys results. There can be no doubt in whose hands the power strings of money are held. In years to come the American political scene is likely to be dominated by the contest between the political fronts of the major religious camps – the Jewish network, the “Bible Belt” campaigners and the insurgent Islamic jihadists – and the efforts of these groups to win over the support of the emerging Latino constituency.

The person who was asked to write the chapter on Judaism in the Encyclopedia of the World’s Religions, R.J. Zwi Werblowsky, concluded his survey with the following words: “Perhaps we do not go far wrong in suggesting that Judaism and Israel can at least be partly understood as a continuing historical process which is the result of and the response to God’s charge to his servant Moses: ‘... for all the earth is mine. And ye shall be unto me a kingdom of priests and a holy nation. These are the words which thou shalt speak unto the children of Israel’. To which the Rabbis added the laconic comment: These are the words – no more and no less.” (See Werblowsky, op.cit. p.39)

The Jewish creation myth, Judaism’s particularistic and exclusively ethnic focus on Jews, and its total reliance on the text of the Old Testament have given rise to many questions. Has God chosen the Jews as the focus of his purpose with mankind? How certain is that claim? Since the Jews wrote the Bible, how can we find independent confirmation of its validity? What part of the Bible, the Talmud and Rabbinical Law is essentially a human creation? Was the Pentateuch verbally dictated to Moses? Is each and every individual Jew “chosen” and different from other citizens of the countries where he or she lives?

Paul Johnson ends his remarkable History of the Jews with the following words: “Historians should beware of seeking providential patterns in events. They are too easily found, for we are credulous creatures, born to believe, and equipped with powerful imaginations which readily produce and rearrange data to suit any transcendental scheme. Yet excessive scepticism can produce as serious a distortion as credulity. The historian should take into account all forms of evidence, including those which are or appear to be metaphysical. If the early Jews were able to survey, with us, the history of their progeny, they would find nothing surprising in it. They always knew that Jewish society was appointed to be a pilot-project for the entire human race. That Jewish dilemmas, dramas and catastrophes should be exemplary, larger than life, would only seem natural to them. That Jews should over the millennia attract such unparalleled, indeed inexplicable, hatred would be regrettable but only to be expected. Above all, that the Jews should still survive, when all those ancient people were transmuted or vanished into the oubliettes of history, was wholly predictable. How could it be otherwise? Providence decreed it and the Jews obeyed. The historian may say: there is no such thing as providence. Possibly not. But human confidence in such an historical dynamic, if it is strong and tenacious enough, is a force in itself, which pushes on the hinge of events and moves them. The Jews believed they were a special people with such unanimity and passion, and over so long a span, that they became one. They did indeed have a role because they wrote it for themselves. Therein, perhaps, lies the key to their story.” (Paul Johnson, op.cit. pp.586-587)

Christianity and the Rise and Decline of Religiosity

Historian Geoffrey Blainey states that few invasions of ideas have matched the global spread of Christianity during the period 1780 to 1914. During this period it became the largest religion in the world. Catholics were initially more successful than Protestants, but the latter gradually gained ground. Two Catholic countries, Spain and Portugal, initially took the lead, but Britain, Germany and the Netherlands, later joined by the United States, helped the Protestants to catch up. Ever since the time of Emperor Constantine, political support has been vital for Christianity’s expansion into new territories. Several dozen different churches, sects and religious orders joined in the quest to convert Africans and Asians. All the missionaries confronted the problem that they were often seen as accomplices of the ruling European powers. Christians who were willing to adapt to the new lands and peoples were more likely to succeed.

By 1900 Christian missionaries had reached almost every part of the inhabited world except remote parts of Africa and New Guinea. But they seldom succeeded in converting more than 5 percent of the local populations. The Philippines remained the most Christianised country in East Asia – a tribute to the Spanish priests, monks and nuns of earlier centuries. Doctors, nurses, linguists and teachers came with the missionaries, sometimes with a printing press to issue the newly translated New Testament in the local language. These efforts were financed largely by the small coin contributions of European and American citizens.

In the industrial age new professions were open to the ambitious, and more and more women were knocking at the door of preaching as a career. But the doors remained closed until well into the 20th century. The first pioneer women preachers of the 19th century had to preach in the open air, because most church buildings were not open to them.

The crusade against slavery in the USA was initially a white male crusade, but gradually women also took leading roles in the American Anti-slavery Society. The abolition of slavery in the USA in 1865 owed much to the American Civil War and to the campaign led by Christian men and women. The rising campaign to reduce the consumption of alcohol relied heavily on female orators, writers and petitioners who came mainly from Baptist, Methodist, Presbyterian, Congregational, Disciples of Christ, Salvation Army and other denominations. The Women’s Christian Temperance Union was a major forerunner of modern political feminism because it demanded the right to vote. The women proved powerful campaigners.

Since the start of the 20th century, Christianity has been seriously challenged by two powerful ideologies: communism and nationalism. Communism was an offshoot of socialism which, in turn, was originally derived from Christian principles, particularly that of assisting the less privileged or disadvantaged members of society through charitable action. Socialists elevated charitable action to collective action through the instruments of the state. The communists added to the collective action of the state the mechanism of the totalitarian party dictatorship. Nationalism is derived from extremist patriotic sentiments. Both fascist Italy and Nazi Germany elevated patriotic sentiments into a powerful nationalistic ideology in which the alleged interests of the nation override all other considerations.

Following the theories of Karl Marx, communism insisted that religion was the “opium of the masses”: it made people feel content, materially, with what they had. Christianity fixed their hopes on an afterlife, whereas communism would create a paradise on earth in which poverty was unknown. Communism became a secular religion luring millions of people away from Christianity. The communists set up their own version of the Inquisition, with execution chambers and prison camps in Siberia. The Russian Orthodox Church was suppressed first by Lenin and later by Stalin. In 1918 all theological seminaries were shut, and afterwards all churches were closed or used as storehouses. A wave of Soviet propaganda denounced Christianity. During the Second World War, Stalin gave the Church a brief reprieve. After the war, the Soviet vendetta against Christianity was resumed and extended to the newly occupied countries of Eastern Europe. In the whole history of Christianity few setbacks were as serious as the decline of the Orthodox Church in Russia in the first half of the 20th century. The Orthodox Church was the heir of the longest strand in Christian history: the eastern Orthodox brand of Christianity was the custodian of the birthplaces of Christianity. Its sacred language – Greek – was the language of the authors of the New Testament. The influence of Rome came into effect centuries later, during the time of Constantine. After 1918, the Russian Church, the strongest of the Orthodox Churches, was suppressed by Russia’s atheists.

Many German Christians were alarmed by the spread of atheism and the suppression of Christianity in Russia. So when Hitler’s National Socialist Party came to power in 1933, many Christians saw in Hitler a bulwark against the communist threat. But Hitler saw Christianity only as a temporary ally. In his opinion one was either a Christian or a German. He elevated his brand of nationalism – Nazism – into a religion. He ceased to convene the parliament after the Reichstag was, fortuitously for him, burnt down in 1933. He broke his pact with the Catholic Church and divided the Lutherans by setting up his own brand of “German Christians”. A host of Lutheran pastors rebelled against Hitler, and hundreds went to prison, from where many never returned. A notable Lutheran opponent was Reverend Martin Niemöller, who spent seven years in concentration camps, ending in Dachau.

In a mere five years Nazism replaced Christianity as the dominant creed. The cry “Heil Hitler” was the rallying call. The churches were allowed to continue in a subdued role. In 1939 Hitler invaded Poland, and in 1940, after a brief Blitzkrieg, he subdued the Western European countries, including France. Only the stubborn air defences of Britain and Churchill’s inspiring leadership saved the UK from invasion. In June 1941 Hitler invaded the Soviet Union and, like Napoleon in 1812, was defeated by the long distances and cold winters of the Russian steppe.

During World War II the Nazis perpetrated the most ghastly genocide of Jews and Gypsies imaginable, killing people on an industrial scale. It embraced all Jews, old and young, for the crime of being Jews. Hitler massacred not only Jews from Germany but also from Poland, Austria, France, the Netherlands and other European countries. The massacre of around 6 million Jews seriously reduced the talent pool of the Western world in music, art, literature, science, physics, law and finance. Some people tend to point accusing fingers at Christians for not doing more to stop Hitler. But that was easier said than done. Millions of Christians from Western countries fought and perished in the war to defeat Nazi Germany and its ally, Japan.

The Second World War stands out as a period of unprecedented destruction: human lives were destroyed at a rate of tens of millions, to a total of around 50 million. Much of this destruction was facilitated by modern science in the hands of atheists. Both World Wars of the 20th century were devastating in the extreme because science and technology had been enlisted to support war efforts as never before. Two anti-Christian ideologies – Communism and Nazism – both placing a low premium on human lives, were in command of the destructive use of science and technology. Ironically, the deadliest part of World War II came when the two secular creeds confronted one another.

Since the Age of Enlightenment all religions have been seriously challenged by the advancement of science as a method of inquiry. Scientists rely on empirical evidence and factual knowledge that can be transferred intersubjectively to determine its objective validity. Beliefs and faiths related to supernatural forces were viewed as mere superstitions. Although these doubts were applicable to all religions, the debate between scientists and believers took place mainly in the Western world. The parts of the world where freedom of speech, conscience and assembly were constitutionally guaranteed were much more amenable to debates about religion. In the Muslim world atheism is not tolerated and is treated as a punishable crime. Even deviant religious beliefs are not tolerated. The Western world, with its high level of religious toleration in modern times, was also the main arena of scientific progress.

In the Western world it is widely assumed that many scientists nurse secret religious doubts, although probably just as many believe in God. Charles Darwin and Gregor Mendel both probed the tantalising question of the origins and evolution of species – old and new. Most atheists and agnostics are less vocal than Dawkins about their lack of religious belief.

As stated by Geoffrey Blainey, “Several learned observers concluded that midway between Christianity and atheism lay a wide strip of vacant ground. Neither side could capture it. Neither side could demolish its rival, intellectually. Christians, relying more on faith, intuition, imagination and a sense of wonder and mystery, could usually prevail in debates on their home ground – religion. Scientists, with their insistence on evidence and measurement, and their search for general theories and for certainties and predictability, usually prevailed on their home ground ... As for the deep question – is there a God? – Christian intellectuals could not prove the existence of God, and scientists could not disprove it.” (See Geoffrey Blainey, Christianity – A Short History, op.cit. p.448)

What was often challenged on the side of the doubters, was the belief that Jesus was the Son of God, that he was conceived in the Virgin birth, that he had risen from the dead and that one day he would return to this earth to judge the living and the dead.

The permanent nature of this deadlock was succinctly expressed by the French Jesuit Teilhard de Chardin in his posthumously published book The Phenomenon of Man (1959). He said that science and religion are two sides of the same phenomenon, a quest for perfect knowledge, and that each side was vital. Yet each side often insisted that there was only one truth and that it could be seen perfectly clearly in the mirror it had selected.

Scientists and believers often stand at opposite ends of the spectrum of future expectations. Many scientists and secularists are optimists about human nature, about human progress and about the material benefits applied science can bring. People like Stephen Hawking expect that in the fullness of time science will enable humans to fulfil their enormous potential – if only we succeed in adequately employing our reasoning powers. At the other end of the spectrum stand religious believers such as Christians, who assume that evil, like goodness, is part of human nature. Christians, in particular, combine pessimism and optimism in their belief system and consider it essential to introduce an element of humility into our expectations. The development of scientific rationality is not, by definition, a linear road to perfectibility. The ultimate truth cannot be established by science alone. “Ought” questions can only be resolved in the realm of values.

In the Christian world, for close to 2000 years, people believed that those who gravely and frequently offended the precepts of the Bible would, after death, suffer eternal torment in hell. This terrible fate was depicted in the coloured glass of cathedrals and churches, in paintings, stone carvings, novels and poems. By 1900, however, millions of Protestants had ceased to believe in perpetual punishment and in the existence of hell.

Heaven was not so easily discarded. It was a more consoling belief as a place where deceased loved ones would reunite. The persisting belief in heaven was further enhanced by the expectation that sins could be forgiven when genuinely repented. So sinners could also find a place in heaven. For Calvinists God’s judgement was supreme and should not be challenged.

For people of the Western world the 19th century was looking more promising: famine and disease were better controlled, housing and hygiene improved and the standard of living was higher than ever before. People on both sides of the Atlantic were better educated, worked fewer hours, were healthier, more secure and better protected from foreign aggression. Fewer women died in childbirth, fewer children died in infancy and life expectancy increased. God’s wrath and blessings became more abstract and less visible.

The practice of church worship varied considerably in the West. Generally people of middling wealth were more likely than tradesmen or the poor to attend church, and women more likely than men. The Irish and the Portuguese were more regular than the French or the English, the Poles more than the Swedes. Churchgoing was also less frequent in the city than in the countryside, where it was also a social occasion. North America has always been a land of churchgoers. A good organist, a well-trained choir or good soloists enhanced the standard of the music.

In the Western world, the 1960s ushered in an era of rebellion against and rejection of traditions and taboos in religion, politics, sex, music, clothes and much else. Not since the French Revolution had the Christian world seen such a revolt against long-held precepts and values. The spirit of the time encouraged John Lennon of Beatles fame to declare “Christianity will go”, “It will vanish and shrink” and “We’re more popular than Jesus now”. Lennon was correct in one respect: in the West, and in Europe in particular, the Christian church was in decline. Christianity is still in decline in the most prosperous, most literate and most materialist nations. Is the decline in Europe, the traditional Christian heartland, a portent of its long-term future?

Reliable survey data on religiosity – particularly in comparative and longitudinal perspective – is difficult to obtain. Recent statistics are summarised by Niall Ferguson in Civilization – The West and the Rest, pp.266-277, in the following terms: Europeans pray less and work less than Americans. According to the 2005-8 World Values Survey, 4 percent of Norwegians and Swedes and 8 percent of Germans and French attend church services at least once a week, compared with 36 percent of Americans, 44 percent of Indians, 48 percent of Brazilians and 78 percent of sub-Saharan Africans. The figures are significantly higher for a number of predominantly Catholic countries like Italy (32 percent) and Spain (16 percent). The only countries where religious observance is lower than in Protestant Europe are Russia and Japan. God is considered to be important in Latin America, sub-Saharan Africa and highest of all in Muslim countries of the Middle East. Only in China is God important to fewer people (less than 5 percent) than in Europe.

The case of Britain is especially interesting in view of the determination with which Britons sought to spread their own religious faith in the 19th century. Around 17 percent of Britons claim that they attend religious services at least once a week – higher than in continental Europe, but less than half the American figure. Fewer than 25 percent of Britons say God is very important in their lives – less than half the American figure. The surveys do not distinguish between religions, so they almost certainly understate the decline of British Christianity. More Muslims attend a mosque than Anglicans go to church. The Evangelical and Pentecostal churches are better attended than the Anglican Church. Prior to 1960 most marriages in England and Wales were solemnised in a church; after that a downward slide began, reaching around 40 percent by the late 1990s. The Church of Scotland shows a similar trend.

These trends seem certain to continue. Practising Christians are ageing. According to a 2000 “Soul of Britain” survey, younger Britons are markedly less likely to believe in God or heaven. Less than 8 percent identified themselves as “atheists”; 12 percent indicated they did not know what to believe; 32 percent considered all religions as equally valid; more than 66 percent said they recognised no clearly defined moral guidelines; and, bizarrely, 45 percent of those surveyed said that the decline in religion had made the country a worse place. So much for opinion surveys!

Why have Westerners lost their Christian faith? Some seek the answer in the secular philosophies of the Sixties, the Beatles, the contraceptive pill, the mini-skirt, pop culture and the like. Many Europeans attribute the change to the realisation that religious faith is just an anachronism, a vestige of medieval superstition and roll their eyes at the religious zeal of the American Bible Belt. They do not consider their own lack of faith as an anomaly.

“So who killed Christianity in Europe?”, asks Niall Ferguson. Max Weber, the famous German social scientist, predicted that the spirit of capitalism was bound to destroy its Protestant ethic parent, as materialism corrupted the original asceticism of the godly. Leo Tolstoy also saw a fundamental contradiction between Christ’s teachings and those habitual conditions of life which we consider as civilisation, culture, art and science. If so, asks Ferguson, “what part of economic development was specifically hostile to religious belief? Was it the changing role of women, the decline of the nuclear family, and the demographic decline of the West? Was it scientific knowledge which caused the ‘demystification of the world’? Was it Darwin’s theory of evolution which overthrew the biblical story of divine creation? Was it improving life expectancy which made the hereafter a much less alarmingly proximate destination? Was it the welfare state, a secular shepherd keeping watch over us from cradle to the grave? Or could it be that European Christianity was killed by the chronic self-obsession of modern culture? Was the murderer of Europe’s Protestant work ethic none other than Sigmund Freud?” (Ferguson, op.cit. p.270)

Freud, the Moravian-born Jewish founding father of psychoanalysis, set out to refute Max Weber. For Freud, religion could not be the driving force behind the achievements of Western civilisation because it was essentially an “illusion”, a “universal neurosis” devised to prevent people from giving way to their basic instincts – in particular, their sexual desires and violent, destructive impulses.

Freud’s theories about the death of Protestantism did nothing to explain America’s continued Christian faith. Americans have become richer and their knowledge of science has increased. They have been more exposed to psychoanalysis and pornography than Europeans. Yet millions of worshippers flock to American churches every Sunday. The United States has always maintained a strict separation between church and state, allowing open competition between multiple Protestant sects. The competition between sects in a free religious market seems to encourage innovations that make the experience of worship and church membership more fulfilling. But are these American sects flourishing because they have developed a kind of consumer Christianity? It is easy to drive to and entertaining to watch. It makes few demands on believers. It is easy to switch from one church to the next.

Ferguson argues that the Americans, by turning religion into just another leisure pursuit, have drifted a long way from Max Weber’s version of the Protestant ethic, in which deferred gratification was the corollary of capital accumulation. They have created capitalism without saving. This decline of thrift turned out to be a recipe for financial crisis – as has been experienced since 2008. People lived beyond their means and borrowed more than they could realistically afford to repay.

This phenomenon was not uniquely American. Variations of the same theme were played out in other English-speaking countries and ultimately exported to Europe: “the fractal geometry of the age of leverage”. The irony is that as the debt burdens of Westerners increased, the savings of Easterners steadily grew. Asians work many more hours than their Western counterparts and save more. The rise of the spirit of capitalism in China and elsewhere in South-East Asia has gone hand in hand with the rise of the Protestant work ethic: working hard and saving more.

When G.K. Chesterton wrote his Short History of England in 1917, he said Christendom meant a specific culture or civilisation. When Christianity declined, “superstition would drown all your rationalism and scepticism”. Today, the West is indeed awash with post-modern cults, none of which according to Ferguson, “... offers anything remotely as economically invigorating or socially cohesive as the old Protestant ethic. Worse, this spiritual vacuum leaves West European societies vulnerable to the sinister ambitions of a minority of people who do have religious faith – as well as the political ambition to expand the power and influence of that faith in their adopted countries. That the struggle between radical Islam and Western civilizations can be caricatured as ‘Jihad vs McWorld’ speaks volumes. In reality, the core values of Western civilization are directly threatened by the brand of Islam espoused by terrorists, derived as it is from the teachings of nineteenth-century Wahhabist Jamal al-Din and the Muslim Brotherhood leaders al-Banna and Sayyid Qutb. The separation of church and state, the scientific method, the rule of law and the very idea of a free society – including relatively recent Western principles like the equality of the sexes and the legality of homosexual acts – all these things are openly repudiated by the Islamists.” (Ferguson, op.cit. pp.289-290)

What is striking about the modern reading of history, is the speed of the Roman Empire’s collapse. Could our own version of Western civilisation collapse with equal suddenness? China and other big Asian countries are narrowing the economic gap between the “West and the Rest”. Some people throw in the spectre of a climate change disaster caused by man-made carbon emissions. It is difficult to weigh the evidence.

Many historians, philosophers and scientists have speculated about the rise and fall of civilisations in cyclical or gradualistic terms. Polybius, following Aristotle, wrote about the following cycle: monarchy – kingship – tyranny – aristocracy – oligarchy – democracy – ochlocracy (mob rule). The American historian Carroll Quigley spoke of the cycle of civilisation as seven ages: mixture, gestation, expansion, conflict, universal empire, decay and invasion.

Ferguson argues that civilisations are highly complex systems made up of a large number of interacting components that are asymmetrically organised. They operate between order and disorder, apparently stable for some time while in reality constantly adapting, but a perturbation can set off a transition from equilibrium to crisis – and fall. The sun set on the British Empire with remarkable suddenness. The Soviet Union collapsed within a matter of months in 1989-1991. It fell off a cliff.

Samuel Huntington predicted that the twenty-first century would be marked by a “clash of civilizations” in which the West would be confronted by a “Sinic” East, a Muslim Greater Middle East and perhaps also the Orthodox civilisation of the former Russian Empire. The fault lines between civilisations would be the battle lines of the future. Numerous objections were raised to this prediction, which was made in 1996, but it nevertheless seems a better description of the post-Cold War world than any competing theory.

It is argued that Huntington’s model failed as a prophecy. But much depends on the time-frame and the terminology used to describe the conflicts that have occurred since 1996. The Iraq and Afghanistan confrontations, preceded by the 9/11 disaster, certainly fit Huntington’s model. The conflicts in the Sudan, Nigeria and the Ivory Coast involved religious and ethnic confrontations, but ethnic conflicts usually also involve religious cleavages. The fact that local conflicts have not spilled over into a global collision of civilisations may merely be a matter of time-bound perspective and terminology. The question is what lies at the root of these conflicts. At the beginning of the second decade of the 21st century the major flashpoints were Tunisia, Libya, Egypt, Syria and Afghanistan – all of which involved intra-Muslim cleavages which were largely avoided by the Western powers. This aloofness was partly inspired by strategic caution and partly by the financial constraints on the Western powers’ capacity to get involved. Exceptions were the Iraq and Afghan wars, when the West was provoked by jihadist action and by domestic political pressure within the United States.

Ferguson provides an appropriate end to this analysis with his assessment of the recent historical challenges facing Western civilisation: “Maybe the real threat is posed not by the rise of China, Islam or CO2 emissions, but by our own loss of faith in the civilization we inherited from our ancestors ... by our own pusillanimity – and by the historical ignorance that feeds it.”

The Ascendancy of Islam

The historical record reveals that by the year 1000 Islam had become a prominent religion in large parts of the world. As the spearheads of Islam continued to probe in all directions, the faith soon reached the Strait of Gibraltar in the west. Far to the east it reached the mouth of the Indus on the Indian Ocean, the areas north of the Ganges and the banks of the Brahmaputra above the Bay of Bengal. After conquering the lands of the Persians, Muslim armies penetrated Central Asia all the way to Samarkand and the areas west of the Great Wall of China. But the discovery of sea routes across the Atlantic and Indian Oceans by the Portuguese and Spanish navigators opened a new field for Christian missionaries. Western Europe also became dominant in science and technology, which gave it increased vitality to capture most of the Muslim lands.

After the Second World War, Islam revived. The occupied Muslim lands became independent, and Muslim regions benefited from the discovery of large oil deposits. Oil-rich Muslim nations became financiers of the extension of Islamic influence. Millions of Muslims emigrated to Western lands, where they flourished and were allowed to practise their religion. Islam maintained an intensity of belief and a pace of growth not matched by its main rival in recent centuries. Although Christianity remained the religion of more than 30 percent of the world’s population, Islam, with around 12 percent in the year 2000, was catching up: Christians are estimated to outnumber Muslims by a ratio of only five to four today.

Throughout modern history, religion has been a major foundation for identity and cohesion as a result of its profound hold upon people’s emotions and imaginations. Islam has provided the common bond for both a militant cultural identity and a sense of sacred mission for millions of people. In Islamic countries, religion has taken on an added importance because it regards the secular and the sacred realms as inseparable. Co-existence of different religions or secular communities within Islamic states is particularly difficult, if not impossible. Religious parties see it as their sacred duty to suppress and crush what they see as anti-religious, anti-Islamic movements.

All Islamic states are closed societies. With the exception of Dubai, they do not readily allow or attract immigrants, but they do generate millions of emigrants to Western countries. Only four Islamic countries have in recent years experimented with constitutional democracy: Turkey, Indonesia, Lebanon and now also Egypt. In both Turkey and Indonesia the preservation of a fair degree of secular civil rights depends heavily on the intervention of the military.

A typical characteristic of the Arab world is the strong hold of plutocratic cliques at the heart of Islamic regimes, coupled with the remarkable resilience of family dynasties such as those in Saudi Arabia, the Gulf States and Morocco. There have been positive stirrings of social change in the wake of the “Arab Spring” of 2011 and 2012, with breakthroughs in Tunisia, Libya, Egypt and Yemen. However, the democratisation process has moved slowly and ambiguously. Satellite television plays an important part in spreading information about the world, but a democratic lifestyle can only be built on a deep-rooted process of modernisation in education, training, regulations and cultural habits. Muslim societies across the Arab world today enjoy unprecedented access to information and divergent opinions through al-Jazeera and al-Arabiya. But much needs to be done to penetrate closed societies such as Iran, Afghanistan, Pakistan, Bangladesh and Saudi Arabia.

Compared to the West and the rapidly developing East, Muslim countries are generally underdeveloped. This state of affairs has much to do with the downtrodden status of women and the narrow focus of the education system. It is claimed that about 50 percent of Arab women cannot read or write. They suffer from unequal citizenship and legal entitlements.

It seems that today, throughout the Islamic world, two opposite trends are competing for ascendancy: Islamic theocracy at one end of the spectrum and secular liberal democracy at the other. The Islamic theocracy movement is currently the more prominent. The momentum of secular liberal democracy is sporadic and faces many obstacles. Islamic theocracy has several obvious advantages: its messages are cast in religious rather than secular political terms; both its critiques and its aspirations are expressed in terms that are familiar and easily accepted at street level; it has access in the mosques to a communications network that bears the authentic stamp of Islam; secular democrats are required by their own ideologies to tolerate the propaganda of their opponents, whereas the religious parties have no such obligation – and, in fact, go to great lengths to persecute secular or democratic views; and Islamic theocrats diagnose the ills of the Islamic world as due to infidels and their local imitators, declaring it a sacred Islamic duty to crush the anti-religious, anti-Islamic secular movements.

The bulk of the Islamic populations find themselves somewhere between the opposite positions on the political spectrum. Hence Islam’s main political arms differ greatly in both tactics and aims: from jihadist militancy against infidels to pragmatic co-existential participation in the political process as is currently happening in Egypt, Libya and Tunisia.

The political debate in the Arab world is overshadowed by the issue of Israel. It looms larger than anything else in Arab minds and distorts the internal Arab debate about politics and government. Iran has turned the Palestinian conflict with Israel into a tool against America’s Arab allies, arousing anti-American passions on the Arab street. Pro-American regimes lack democratic legitimacy and are presented as lackeys of a resented superpower. Many Arabs reject the idea of peaceful co-existence with Israel. This conflict tends to override internal quarrels between secular and religious, Sunni and Shia, or left and right. Their hatred of Israel is an intoxicating way to ignore their own failings and to blame someone else. It enables the plutocratic regimes to maintain states of emergency at home and postpone reform. There will be no new dawn without solving the Palestinian problem.

Islam’s networks, like the Jewish network, are global in their reach. The most prominent are Salafism, Wahhabism and the Muslim Brotherhood. These pan-Islamic movements arose in reaction to what was perceived as the threat of Western colonialism. All three movements focus on the common cultural, linguistic, historical and religious links between the Arab countries, which are held up as the backbone of Islam. Pan-Arabism is seen as a prerequisite for Muslim unity. The Muslim Brotherhood appears to be the strongest in terms of numerical support and organisational coherence. It expects certain virtues of body, mind and behaviour to be upheld by members. It works through other movements and fronts or proxies, and takes advantage of any opportunities available. The control centre of the Brotherhood is in Egypt and much of its activity is financed from Saudi Arabia. Its primary aim is to promote Islam and the introduction of Sharia law, which it believes will happen once people have convinced themselves of its virtues.

In recent years a range of additional jihadist secret networks have appeared: Al-Qaeda, Hizb ut-Tahrir (HT), Jemaah Islamiah, Hamas, Boko Haram, Al-Shabaab and Islamic State (IS, or ISIL, or ISIS). They all share common ideological origins but differ over politics and over operational strategies and tactics. Many take advantage of the West’s civil rights guarantees of freedom of conscience, assembly and speech. In this way they are able to spread hate-filled messages and to create fifth-column activists working to exploit and undermine the very systems under which they live in the West. Western governments and societies – imbued with naive ideological tolerance, moral relativism and normative nonchalance – do not seem to understand the ideological threat posed by radical Islam. It is essential that they find ways to protect themselves not only from terrorism but also from the indirect incitement and subversion of militant organisations. Close to 20 million Muslims live in Western countries, which means the host countries will have to deal with a growing minority of disaffected Muslims. Theological struggles, like ideological contests, can last many generations.

Jihadist movements are constantly flaring up in many parts of the world: Pakistan, Afghanistan, Bangladesh, Indonesia, the Philippines, Myanmar, the former Soviet republics, Syria, Lebanon, the Palestinian territories, the Gulf States and Africa north of the equator, over a vast terrain in and around the Sahara that stretches eastward across Somalia to the Horn of Africa. Al-Qaeda-inspired militants are the driving force in many cases, taking advantage of the mismanagement of corrupt governments. These extremist Islamic groups recruit ill-educated, jobless and angry Muslim youngsters to wage a campaign of violence and murder. They operate throughout the arc of instability stretching from Somalia in the east through Chad to Mali in the west. Extra money is collected from sponsors in Saudi Arabia and other sources in the oil-rich Gulf.

The global jihad movement capitalises on local grievances and ignorance. It radicalises jobless young Muslims, giving their discontent a dangerous edge which poorly trained and ill-equipped local security services cannot contain. As in Kenya, Somalia and the Sudan, the tensions created by the jihadists easily spill over into tensions between Muslims and Christians, and the conflict in one country spills over to its neighbours. Unless Muslims themselves turn hostile to jihad, outside intervention can do little more than douse sporadic flare-ups of violence. Only when local communities become more prosperous, better educated and better governed will the jihadist menace be quelled.

Bibliography

Bara, Z. (2005) “Fighting the War of Ideas”, Foreign Affairs, Vol. 84, No. 6, pp.68-78
Blainey, G. (2011) Christianity – A Short History, Penguin Group, Victoria
Caldwell, C. (2008) Reflections on the Revolution in Europe: Immigration, Islam and the West, Doubleday
David, P. (2007) “Special Report on Iran”, The Economist, July 21st, 2007, pp.3-16
David, P. (2009) “Special Report on the Arab World”, The Economist, July 25th, 2009, pp.3-16
Davies, N. (1996) Europe – A History, Oxford University Press, Oxford
Ferguson, N. (2008) The Ascent of Money – A Financial History of the World, Allen Lane, London
Ferguson, N. (2011) Civilization – The West and the Rest, Allen Lane, London
Gibb, H.A.R. (1997) “Islam”, Encyclopaedia of the World’s Religions, Barnes & Noble, pp.166-199
Greenstock, J. (2004) “What Must be Done in Iraq”, The Economist, May 8th, 2004, pp.23-26
Grimond, J. (2003) “A Survey of Iran”, The Economist, January 18th, 2003, pp.3-16
Hattstein, M. (2006) World Religions, Könemann, Cologne
Huntington, S.P. (1996) The Clash of Civilizations – and the Remaking of World Order, Simon & Schuster, New York
Johnson, P. (1987) A History of the Jews, Phoenix Press, London
Kepel, G. (2004) The War for Muslim Minds, Belknap Press
Landau, D. (2012) “Judaism and the Jews”, The Economist, Special Report, July 28th, 2012, pp.1-12
Lewis, B. (2009) “The Arab World in the Twenty-first Century”, Foreign Affairs, Vol. 88, No. 2, pp.77-88
Lichfield, G. (2008) “Special Report on Israel”, The Economist, April 5th, 2008, pp.3-16
Nasr, V. (2009) Meccanomics: The March of the New Muslim Middle Class, One World
Rieffel, L. (2004) “Indonesia’s Quiet Revolution”, Foreign Affairs, Vol. 83, No. 5, pp.98-110
Rodenbeck, M. (2002) “A Survey of the Gulf”, The Economist, March 23rd, 2002, pp.3-28
Rodenbeck, M. (2006) “A Survey of Saudi Arabia”, The Economist, January 7th, 2006, pp.3-12
Roy, O. (2004) Globalised Islam: The Search for a New Ummah, Columbia University Press
Shook, J.R. (2010) The God Debates, Wiley-Blackwell, Chichester
Werblowsky, R.J.Z. (1997) “Judaism, the Religion of Israel”, in Zaehner, R.C. (ed) Encyclopaedia of the World’s Religions, Barnes & Noble, New York
Wright, R. (2009) “Islam’s Soft Revolution”, Time Magazine, March 30th, 2009, pp.28-32
Zaehner, R.C. (1997) Encyclopaedia of the World’s Religions, Barnes & Noble, New York
The Economist (2003) “Dealing with Iraq”, February 15th, 2003, pp.23-25

4. Migration Patterns in the Colonial Era (1400-1940) (February 2015)

By the beginning of the era of recorded history, the world was already populated by millions of people with distinct appearances, physical characteristics and cultural traits, who had occupied their territories for many centuries. Although the evolutionary path of each cluster of communities is shrouded in mystery and a topic of heated debate, their progress was associated with the development of tools and technology as well as social and intellectual skills. These enabled humans to penetrate much of the habitable world: Africa, the Middle East, South and South-East Asia and Europe.

The early human diaspora to different regions of the world brought in its wake the development of divergent physical types, presumably due to environmental factors, isolation and breeding patterns. Physical anthropologists distinguish four categories of Homo sapiens: the Caucasoid (or Indo-European), Negroid (African), Mongoloid (Asiatic, Amerindian and Polynesian) and Australoid (Australian Aborigines). Within these major categories, mankind is divided into numerous clusters based on perceptions of race, religion, ethnicity, class or a combination of these. In most instances, these categorisations rest on subjectively ascribed considerations, but are, nevertheless, real in their consequences for the persons involved.

Demographic information is often deficient because few countries were able to keep comprehensive records of population losses or gains covering several centuries. Census data often contains cumulative errors. Many refugees and migrant workers enter countries “illegally” or “unofficially” and are unrecorded or “undocumented”. Emigrants may have a variety of motives for leaving, thus making it difficult to distinguish between voluntary and involuntary migrations. Some people leave temporarily, but never return.

The Emigration of Europeans

Europe supplied most of the world’s emigrants during the four centuries up to the Second World War. During this period about 70 million Europeans moved to the Americas, Oceania and Africa. The main directions of the migration flows are shown in the bar charts published by Austin Ranney, The Governing of Men (Holt, Rinehart and Winston, New York, 1966, pp.141-142), reproduced as Figures 1, 2 and 3. See also Table 1.

These waves of migration transferred tenets of European culture across the face of the earth and thereby transformed vast regions as new skills, habits, attitudes and institutions reshaped the environments to which the European emigrants moved. This “Westernisation” of large parts of the world was accompanied by the birth of several new nations, most prominently the USA, Canada, Brazil, Argentina, Chile, Australia, New Zealand and South Africa. In these countries European settlers established modern societies which rank amongst the most affluent in the world. Each new society reflected the cultural traits of the predominant settler communities. To understand the impact of the migration of millions of people from Europe to the Americas, Australia, New Zealand and South Africa, it is instructive to imagine what the outcomes would have been without the arrival of the migrants from Europe. In the wake of the waves of immigrants came a host of influences: values, belief systems, traditions, ideals, knowledge, skills, practices, institutions and other ways of doing things. The migrants moved for a variety of reasons. Some were driven by religious considerations (either persecution or aspiration), some were “indentured labourers” or soldiers, some were forcibly transported as slaves or (equally perniciously) as exiled “convicts”. But the largest proportion was attracted by the expectation of a better life. The colonial era was successively dominated by the Portuguese, Spaniards, Dutch, French and ultimately, from the late 18th century, by the British.

The Portuguese Influence

Although Portugal was the smallest and poorest of Europe’s imperial powers, it acted as a trailblazer for other colonial powers. Prince Henry “the Navigator” (1394-1460) set up a school of navigation at Sagres and developed strong, manoeuvrable caravels and reliable compasses which assisted early explorers to navigate the stormy Atlantic Ocean and beyond. Starting with voyages to Madeira and the Azores, Portuguese sailors worked their way along the West African coast until Bartholomew Diaz rounded the Cape and reached Mossel Bay on the east coast in 1488. Vasco da Gama reached India in 1498 and thus opened the trading route between the Far East and Europe. Thereafter Portuguese sailors continued to explore the coasts of Africa and East Asia, establishing forts and trading stations as they went. By 1571 a string of outposts connected Lisbon to Nagasaki. Threatening disputes between Portugal and Spain were settled by the Treaty of Tordesillas in 1494, which divided the world outside Europe into an exclusive duopoly between the Portuguese and the Spanish along a north-south meridian about 1500 km west of the Cape Verde Islands. For much of the 16th century the Portuguese fleet was the dominant force in the Indian Ocean. Using fortresses on the islands of Mozambique and Mombasa, Madagascar and Mauritius, Portugal succeeded in maintaining its hegemony in the Indian Ocean. Goa served as the headquarters of Portuguese rule in India. The Portuguese trading network extended to Malaysia, Java, China and Japan. Not only did they dominate the trade between Asia and Europe, but also much of the trade between different regions of Asia. In time the Portuguese opened the spice trade with the “Spice Islands” in the Moluccas and dominated the Persian Gulf from 1515 for the next hundred years.

The Portuguese established colonies in Mozambique, Angola and in Brazil where large stretches of land were donated in the form of hereditary captaincies to grantees able to support settlement and to administer and carry the costs of colonisation. Some recipients were successful as sugar cane planters who required intensive labour supplies which would ultimately be met by the slave trade.

As in Portugal itself, the Portuguese were slow to introduce liberal concepts related to elected democratic government. There was more emphasis on a “Pan-Lusitanian” community between Portugal and its colonies, united by the spiritual links peculiar to the Portuguese culture, human “brotherhood”, equality before the law: “one state, one faith, one civilisation”. Colonies were regarded as “ultramar” components of the motherland. In practice, a hierarchy of status existed with “indigena”, the natives at the bottom, “assimilados” (“mulattos”) higher up and whites on top. To be an “assimilado” a person had to be over eighteen, speak Portuguese, be of good character, have performed military service and be earning enough to support his family. The Portuguese were focused on a gradual fusion of peoples and cultures under the Portuguese umbrella. Acculturation was seen as a process of “social Darwinism” – a three-stage assimilation process: the destruction of traditional societies, followed by the inculcation of Portuguese culture and finally, the integration of the “detribalised” into Portuguese society. (See Paul Nugent, Africa Since Independence, Second Edition, Palgrave MacMillan, N.Y. pp.12-16)

The Portuguese colonial empire was subjected to a number of significant internal pressures: the small size of the Portuguese population, the unpopularity of settlement in the colonies and the lack of significant economic pull factors within the colonies. The economies of Angola and Mozambique remained bound up with the slave trade for many generations. The inherently corrupting nature of the slave trade, which dominated the colonial administration for so long, inhibited the emergence of an honest, efficient and civic-minded administrative culture. To induce homeland Portuguese to move to the African colonies, the Portuguese government resorted to releasing “degradados” (convicts) from prison in exchange for accepting exile in Africa. Thus Angola, in particular, gained a reputation as a Portuguese penal colony. But the European populations in Portugal’s African settlements remained small. By the time Portuguese colonial rule ended in 1974, the white population numbered only around 400,000, mulattos around 100,000 and blacks more than four-and-a-half million.

The jewel in the Portuguese crown was Brazil. Today, with an area of 8.5 million square kilometres, Brazil has the fifth largest land area in the world, a population of close to 200 million, abundant water, forests and arable land, and a long growing season which often allows two harvests per annum. It is a country with much opportunity and potential. It has also discovered a vast field of offshore oil deposits beneath the Atlantic seabed which has been described as its “passport to the future”.

Around 80 percent of Brazil’s population is urbanised, with São Paulo and Rio de Janeiro as its largest cities. Its ethnic composition is around 53 percent white, 40 percent mixed white and black, 6 percent black and 2 percent indigenous tribal. Portuguese is the official language. After some 300 years of direct Portuguese rule, Brazil became independent in 1822, and the Republic of Brazil was declared in 1889. A new capital, Brasilia, was inaugurated in 1960. Of the millions of Europeans who crossed the Atlantic to settle in Brazil, around 30 percent were Portuguese, 34 percent Italian, 14 percent Spanish, 4.5 percent German and 18 percent “others”. It is claimed that many early settlers lived for the present, looking for quick enrichment. Today, Brazilians, with their abundance of natural resources, are beginning to take a longer perspective and may yet become the powerhouse of the Southern Hemisphere.

The Spanish Influence

In 1492 explorer Christopher Columbus sailed west from Spain and after 61 days at sea made landfall in the Bahamas, thinking that he had reached the eastern seaboard of Asia. Columbus had in fact reached the Americas and paved the way for millions of future immigrants.

Columbus made three further voyages across the Atlantic, during which he explored the Caribbean and the northern coastline of South America. Many explorers subsequently sought in vain to find a westerly passage to the East. It was only realised by the middle of the 16th century that the discovery of the Americas was a major prize in itself. Many merchant sailors followed in Columbus’s wake and took gold and silver treasures from the Aztec and Inca peoples they encountered – misnaming them “Indians”.

On the first Columbus voyage in 1492, one of his ships ran aground on an island he called Hispaniola (“little Spain”). He left a few members of his crew on the island as the first colonial settlement in the New World. By the time Columbus returned a year later, all of the settlers had been killed by the local tribe. Hispaniola became the starting point for further exploration of the Caribbean Sea: from there the Spanish conquered the islands of Cuba and Puerto Rico and made landfall on the North American mainland. Florida was reached in 1513, and in the 1520s Spanish expeditions made their way up the eastern seaboard of North America as far as Labrador in present-day Canada.

The Spanish focused their attention on the Aztec kingdom of Mexico, where they believed gold could be found in abundance. In 1519 the Spanish conquistador Hernán Cortés sailed to the New World with a small fleet of ships carrying 600 soldiers armed with crossbows and firearms, in addition to several hundred Indian servants and African slaves. He also carried 16 horses, the first to be seen on American soil. Montezuma invited Cortés and his men into his capital; Cortés then took the emperor into custody and subsequently destroyed the city, killing thousands of Aztecs. Cortés took over the Aztec empire and laid the foundation for what became Mexico.

The Spaniards also brought diseases which in time killed thousands among the native peoples of Mexico. Smallpox carried by Spanish traders also spread into Inca territories, so that in 1532 Francisco Pizarro’s tiny force easily captured the Inca emperor Atahualpa, revered as a descendant of the Sun God. The following year they captured Cuzco. After the smallpox epidemic came measles, followed by typhus, influenza, whooping cough, scarlet fever, chickenpox and even malaria – all new to the inhabitants and therefore deadly.

Of the estimated eight million Indians in Mexico and the areas south of the Great Lakes when Cortés arrived, fewer than one-third were still alive fifty years later. In the empire of the Incas, far to the south, the death toll also numbered millions. Even Indians taken back to Europe as objects of display were prone to catch the new diseases: when the Frenchman Jacques Cartier returned from Canada in 1534 with ten American Indians, nine died of European diseases. These diseases had disastrous effects on the native Amerindians, whose cultural and economic life largely disintegrated.

In the course of the 16th century, around 250,000 Spaniards, mostly men, settled in the New World. Many took wives from among the native populations and so gave rise to mixed-race offspring called “mestizos”. When African slaves began to be imported into South America in large numbers from the early 1500s, female slaves were also taken as concubines by the ruling Spaniards and Portuguese. Children of Afro-Hispanic parentage were known as mulattos. In time, colonial Spanish society came to be organised according to a legally sanctioned grading of skin colour: “pureblood” Spaniards were at the top of the social pyramid, native Amerindians and black-skinned Africans at the bottom, with all the varieties of mestizos and mulattos occupying the middle levels.

The Dutch Influence

Jan Huygen van Linschoten, who had sailed to India in Portuguese service, was the first Dutchman to bring back detailed knowledge of the sea route to the East. In 1595-1597, Cornelis and Frederik de Houtman completed the first successful Dutch return voyage to the East Indies and opened the way for further Dutch involvement. Soon the need arose for a halfway station where sailors, desperately affected by scurvy, could find fresh meat and water. By the 1650s, the Dutch were the world’s leading trading nation and the Vereenigde Oost-Indische Compagnie (VOC) the world’s largest trading enterprise, with a strong trading post at Jakarta (Batavia). In 1652, the VOC established a refreshment station at the Cape.

At the time various spices – including black pepper, cloves, nutmeg, cinnamon and mace – were essential for flavouring and preserving food. For centuries these commodities had come overland from Asia to Europe along the arduous and dangerous Spice Road. With the discovery of the sea route to the East Indies via the Cape of Good Hope, attractive new trading opportunities opened up as Dutch merchants gradually succeeded in wresting control of the lucrative Asian spice trade from Portugal and Spain early in the 17th century. Despite its small land area and population, the United Provinces of the Netherlands rose in commercial prominence from the beginning of the 17th century until the start of the French Revolution in 1789. The 17th century is often described as the Golden Age of the Dutch Republic. Its merchants were the most successful businessmen in Europe and the VOC was the world’s largest trading corporation. It operated under a charter from the States-General (the Dutch government) and was given sovereign rights in and east of the Cape of Good Hope. By mid-century the Dutch Republic was the dominant maritime power in south-east Asia, with a merchant fleet numbering some 6,000 ships totalling around 600,000 tons and manned by around 48,000 sailors.

The purpose of establishing a refreshment post at the Cape was to obtain fresh food for shipping fleets and to treat scurvy-stricken seamen – not to found a costly, expansive new colony. A fort was built, a small number of “free burghers” (VOC employees released from their contracts) were given small pieces of farm land, and slaves were imported to work on VOC property and on “free burgher” farm land. In time a growing number of slaves were brought from the East to work as masons, carpenters, tailors, cooks and in other trades. The Cape slaves came from diverse linguistic, religious and social backgrounds. A few came from African territories, but more came from Madagascar and still more from Indonesia, India and Ceylon – including a small number of Muslims. From 1711 onwards there were more slaves than free burghers in the colony. The majority of slaves were male.

Initially the slaves were employed exclusively by the VOC, but in time, as more settlers were given farm land, the wealthy wheat and wine growers in nearby districts also acquired slaves to work on their farms and in their households. Stock farmers in the more remote rural areas employed only a small number of slaves together with Khoikhoi herders. By the end of Dutch colonial rule at the Cape, around 1800, there were 25,000 slaves in the colony, compared to around 22,000 European settlers.

The Dutch connection brought in its wake a wide-ranging impact on the social, economic, cultural, intellectual and political life of what is today called South Africa – an impact that has lasted more than 350 years. The most important influences brought by the first European immigrants included Roman-Dutch law, the Reformed Christian religion, early capitalism and a compound of perceptions and precepts of social organisation which combined European egalitarianism with the Oriental hierarchical concepts and practices the VOC had developed in running its colonies in the Far East. In time, these perceptions and practices found expression in the community and race relations prevalent at the Cape. (See Hermann Giliomee and Bernard Mbenga, New History of South Africa, 2007, and also Karel Schoeman, Die Suidhoek van Afrika – Geskrifte oor Suid-Afrika uit die Nederlandse Tyd 1652-1806, Protea Boekhuis, Pretoria, 2002).

The British Influence

As described by Niall Ferguson in his remarkable historical survey called Empire – How Britain Made the Modern World (Penguin Books, 2004), the British Empire began as a primarily economic phenomenon, its growth powered by commerce and consumerism: “The demand for sugar drew merchants to the Caribbean. The demand for spices, tea and textiles drew them to Asia. But this was from the outset globalization with gunboats. For the British were not the first empire-builders, but pirates who scavenged from the earlier empires of Portugal, Spain, Holland and France. They were imperial imitators”. (Ferguson, op.cit., p.xxv)

During the period of Spanish conquests in the New World, the English gradually also entered the Caribbean Sea to seek their own “El Dorado”. They began to exploit their skills as sailors to pirate gold from Spanish ships and settlements. The English Crown legalised the buccaneering in return for a share of the proceeds. Buccaneers such as Henry Morgan, Francis Drake and Walter Raleigh became famous as “Brethren of the Coast” in partnership with the British Crown. In the process the British acquired a string of islands in the Caribbean Sea such as Jamaica, Trinidad and Barbados. The French also acquired islands such as Martinique and Guadeloupe, which are still part of France today. (See Blainey, G., A Short History of the World, Penguin Books, 2000, pp.300-332).

Ferguson describes the expansion of British influence as “Anglobalization”. According to Ferguson, this process was characterised by its reliance on free enterprise, private ownership, competitive markets, comparatively limited government intervention, a legal system heavily laden with “common law”, representative parliamentary government, constitutional democracy and a pragmatic philosophical orientation. At the height of its power, the British Empire encompassed forty-three colonies in five continents, held sway over around one quarter of the world’s land surface and roughly the same proportion of the world’s population. Some 444 million people lived under some form of British rule. For many generations the British Empire relied heavily on the export of its people, capital and culture – particularly its language which has become the lingua franca of today’s world. Britain was also the world’s banker, investing large sums of money around the world. By 1914 the gross nominal value of Britain’s stock of capital invested abroad was £3.8 billion – almost half of all foreign-owned assets. (See Ferguson, op.cit., pp.240-244)

Wherever the British extended their sphere of influence, they disseminated certain distinctive features of their own society: the English language, English forms of land tenure, British banking, Common Law principles, Christianity, team sports, the limited state, representative assemblies and the ideal of civic liberty.

British colonisation was a vast movement of peoples, unlike anything before or since. Some left the British Isles in pursuit of religious freedom, some in pursuit of political liberty, some in pursuit of profit. Others had no choice, but went as convicted criminals. Between the early 1600s and the 1950s, more than 20 million people left the British Isles to begin new lives across the seas. No other country came close to exporting so many of its inhabitants. An important role was played by voluntary, non-governmental organisations such as evangelical religious sects and missionary societies. All contributed to paving the way for the expansion of British influence. The British came close to establishing the first “effective world government”, achieved with a relatively small bureaucracy roping in indigenous elites. But the use of military force was a key element of British imperial expansion. The central role of the British navy was evident around the world: first in its pirate role and later also as the transporter of soldiers to the far ends of the earth.

The scale of 17th and 18th century migration from the British Isles was unmatched by any other European country. From England alone, total net emigration between 1601 and 1701 exceeded 700,000. These large movements of population transformed cultures and complexions of whole continents. The scale of British migration can be brought into perspective by considering that the total world population in 1700 stood at less than 1 billion.

In Elizabethan England, a Vagrancy Act passed in 1597 provided that “Rogues, Vagabonds and Sturdy Beggars” were liable “to be conveyed to parts beyond the seas” – meaning the British colonies in North America. Prisoners condemned to death by English courts could have their sentences commuted to deportation. Some of the deportees were “common criminals”, but political dissidents were also disposed of in this fashion; deportation was frequently used as a punishment for Irish dissidents. Prisoners were sent to Virginia or Maryland to work on tobacco plantations until the growing number of slaves exported from Africa replaced them.

In 1718, Britain passed a Transportation Act which established seven-year banishment to North America as a possible punishment for lesser crimes and also provided that capital punishment could be commuted to banishment. Thus systematic exile became part of England’s justice system. It was thought to be advantageous for everybody involved: it was considered more humane than execution or flogging, it “offered” the possibility of “moral rehabilitation” and freedom afterwards, it rid the population of dangerous individuals, it deterred others tempted to commit crimes, and it provided workers where there was a great want of servants.

Transportation to North America continued for nearly 60 years and only ceased when the American colonies revolted in 1776. By that time over 40,000 criminals had been shipped to the New World. This English practice “of emptying their jails into our settlements” was roundly rejected by the colonial elite and fed the grievances that culminated in the Declaration of Independence in 1776 and the formation of the United States of America under the Constitution of 1787.

When America refused to accept further shipments of convicts, England’s prisons began to overflow. For several decades convicts were accommodated in decommissioned ships called “hulks”, but after James Cook’s discoveries the problem was solved by sending convicts to New South Wales, Australia. The last convict ship left England for Australia almost a century later, in 1868. By then 161,021 men and 24,900 women had been sent as convicts to Australia. Over the next two centuries millions of free settlers emigrated from the British Isles to Australia. (See Russell King ed., Origins – An Atlas of Human Migration, Marshall Editions, 2007, pp.95-105).

As the largest recipient of European emigrants, the USA is also the most dramatic example of the economic and cultural changes these emigrants wrought. Of the 70 million people who emigrated from Europe over the past three centuries, nearly 50 million went to the United States. About 40 million of these arrived in just one century from 1840 to 1940.

The Impact of the Slave Trade

The first reference to slave-catching activities in the mountains of the central Sahara is found in the writings of Herodotus, dating back to the mid-first millennium BC. It appears that slaving and slave-trading were already in progress throughout the sub-Saharan savannah. Enslavement involved the wide dispersion of captives. Throughout the whole northern half of Africa the slave trade received considerable impetus from the rise and spread of Islam. It is claimed that the prophet Muhammad lived his life in a slave-raiding, slave-owning and polygamous society in which it was customary for the men of defeated groups to be put to the sword and for women and children to be taken as slaves by the victors. (See R. Oliver, The African Experience – From Olduvai Gorge to the 21st Century, London, 1999, pp.132-147).

Fresh sources of supply were found among the black peoples to the south of the Sahara. Specialised cavalry operations were developed to capture slaves. Thousands of women and children were taken as slaves after their men were killed. It was an utterly traumatic process in which individuals were snatched from their homes and kindred amid scenes of horror and violence and carried into the unknown, where further hardship awaited all but the most fortunate. The Muslim slave-traders concentrated mainly upon the export trade across the Red Sea, which became the main source of African slaves reaching Arabia and the lands surrounding the Persian Gulf. Along the Indian Ocean coast of Africa, a series of maritime city states developed where wealthy Swahili citizens owned slaves captured or bought from peoples in the interior. During late medieval and early modern times many black slaves were also transported to Madagascar.

“The oldest directions of the slave trade were those which crossed the Mediterranean, the Isthmus of Suez and the Red Sea from classical times onwards. The Muslim era saw a great increase in the trade from north-eastern Africa into south-western Asia. In later medieval times the African entrepôts of this trade proliferated southwards down the Indian coast, while its points of delivery extended eastwards to western India, Bengal and South East Asia. But the most dramatic increase in the intercontinental trade came with the opening of the Atlantic coast of Africa by European seafarers in the middle of the fifteenth century. The Portuguese were the first to arrive, and they maintained a near monopoly for a century and a half before they were joined by the British and the French, the Dutch and the Danes.” (See R. Oliver, op.cit., p.140)

By the end of the 17th century, stimulated by the plantation agriculture in Brazil and the West Indies, Atlantic shipments had increased to about 30,000 a year and by the end of the 18th century they reached nearly 80,000. During the same period the trans-Saharan trade may have risen from about 5,500 to about 7,000 slaves a year and the Red Sea and Indian Ocean trade may have risen to about 4,000 slaves a year. A total of around 12 million Africans were shipped to the New World. (See R. Oliver, op.cit., p.145)

The Atlantic coast of Africa was virgin territory for slave traders. Before the Portuguese, no one else from the outside world had established more than a passing contact. But at most of their ports of call, the Portuguese found a commercial infrastructure already in existence which was capable of supplying them with viable cargoes of slaves – and of distributing the European goods, mainly textiles, metals and hardware, which they brought in exchange. In Senegambia and on the Gold Coast, they traded with the same Dyula merchants who supplied the caravans of the Western Sahara. In both Benin and the Niger delta there was an established system for the marketing of slaves brought down the rivers from the interior. In the Kongo kingdom on both sides of the Congo River, there was a large distribution network which brought captives down to the coast.

New research has shown that at the height of the Atlantic slave trade, most slaves originated as war captives. The causes of warfare appear to have been essentially local as a result of inter-ethnic conflict. The beaches between Badagry and Whydah in Dahomey became known as the ‘Slave Coast’. Fully one-third of the around twelve million slaves who crossed the Atlantic between the 15th and the 19th centuries came from Africa south of the equator, between the Cameroon estuary and the Kunene. Overwhelmingly they came from captives taken on the frontiers of the Kongo kingdom. Later the Mbundu and Yaka tribes turned the tables by taking slaves from the Kongo which they exported down the Kwanza valley to Luanda where Portuguese traders were already settled at the nucleus of what was to become the colony of Angola.

According to Roland Oliver, the Portuguese came closer than any other Europeans of the slave-trading period to direct involvement in the process of enslavement. From Luanda they built a series of forts up the Kwanza valley, whose garrisons became the military overlords of the local Mbundu chiefs, from whom they levied an annual tribute of so many slaves. From their coastal bases the Portuguese also regularly sent African trading agents (‘pombeiros’) to buy slaves at the interior markets. The Atlantic slave trade differed sharply from the rest of the slave trade within Africa in that two-thirds of those transported were males. Plantation owners of the New World were prepared to pay more for men than for women and children.

Portugal extended its slave trade as far as the coasts of China. Chinese prisoners, and even children kidnapped in China, were brought to Portugal and sold as slaves; many were shipped onward to the Indies. They were prized above Moorish and black slaves, and the Portuguese preferred Chinese slaves as domestic or household workers. Goa, Manila and Malacca were the major ports from which slaves were shipped. In 1624 the King of Portugal issued a decree forbidding the capture of Chinese slaves; the ban had to be reissued in 1724.

For over a century Portugal enjoyed a virtual monopoly of the African seaborne trade. Thousands of slaves were imported annually into Portugal itself, where Black African slaves came to constitute close to 10 percent of the population. By 1550, Brazil was the world’s largest exporter of sugar and the largest importer of slaves.

Other European nations soon joined in: the British, Swedes, Danes, Spanish, French and Dutch. These trading nations built over 30 slave forts on the Gold Coast (now Ghana) alone. The “triangular trade”, as it was known, involved slave-ships leaving European ports for West Africa with rum, guns, textiles and other goods to exchange for slaves, who were then transported across the Atlantic to be sold to plantation owners, the ships then returning with sugar and coffee.

It is important to note, however, that the pernicious trade could not have existed without the support of African chiefs and traders. Slaves were brought to the coastal trading forts by African agents or collaborators. Many of the slaves sold to the European traders were men and women captured in battles between tribes – like the Asante and the Akan in the Gold Coast.

By the mid-18th century, Britain was the biggest slaving nation, and ports like Bristol, Liverpool and London thrived as a result. In Britain, many important people were involved in the slave trade: the royal family, the Church of England and politicians like William Gladstone – himself the son of a plantation owner. Since slavery was not practised at home, the British public was largely ignorant of its nature and scale – not unlike ordinary Germans vis-à-vis the Holocaust. It became the task of the abolitionists to expose the shameful reality of the trade to an ignorant public.

During the period 1701 to 1810, millions of Africans were sent to the Americas: estimates of the numbers involved vary between 7 million and 20 million. The trade was abolished across much of continental Europe in the early 1800s, slavery itself in the British Empire in 1833 and the trade in the Portuguese dominions in the 1860s. The Arabs continued the trade until 1873, when the last remaining slave market in Zanzibar was closed. (See The Economist, “Breaking the Chains”, February 24th, 2007, pp.55-57).

It is an irony of history that the same navy that transported millions of slaves to the Americas was later deployed to abolish the slave trade. It was also instrumental in expanding the narcotics trade. Between 1662 and 1807 nearly 3.5 million Africans came to the New World as slaves transported in British ships – over three times the number of white migrants in the same period. By 1700, Liverpool was sending 33 shipments a year on the triangular trip from England to West Africa to the Caribbean. John Newton, who wrote the hymn Amazing Grace, had been the captain of a slave ship. In 1740, James Thomson wrote the words of Rule, Britannia! with its stirring line “Britons never, never shall be slaves”.

By 1770, Britain’s Atlantic empire seemed to have found a natural equilibrium. The triangular trade between Britain, West Africa and the Caribbean kept the plantations supplied with slave labour. The American colonies kept them supplied with victuals. Sugar and tobacco flowed back to Britain, a substantial proportion for re-export to the Continent. The profits from these New World commodities oiled the wheels of the Empire’s move to a new frontier – the Asian commerce.

Anti-slavery sentiments amongst colonists in the USA were first openly expressed as early as the 1680s. The Quakers of Pennsylvania were speaking out against it, arguing that it violated the biblical injunction of Matthew 7:12 “… do unto others as you would have others do unto you”. But it was only in the 1740s and 1750s that the ‘Great Awakening’ in America spread such scruples into wider Protestant circles. By the 1780s the campaign against slavery gained enough momentum to sway legislators. Slavery was abolished in Pennsylvania in 1780 – an example followed by a number of other northern states.

In Britain the slave trade was abolished in 1807. Henceforth, convicted slave-traders faced transportation to Britain’s penal colony, Australia. Once the trade was abolished, slavery itself could only wither, until in 1833 it was made illegal throughout British territory. The slave owners of the Caribbean were compensated with the proceeds of a special government loan. The ban did not put an end to the trans-Atlantic slave trade or to slavery in the Americas: the trade continued on a smaller scale to the southern United States, and on a far larger scale to Brazil. All told, around 2 million more Africans crossed the Atlantic after the British ban, most of them to Latin America. However, the British put in a great deal of effort to disrupt this continuing traffic.

A British West African Squadron of 30 warships was sent to patrol the African coast from Freetown, with bounties offered to naval officers for every slave they intercepted and liberated. In 1840 the Royal Navy intercepted no fewer than 425 slave ships off the West African coast. The British Parliament had made the slave trade illegal throughout the empire in 1807, shortly after the re-occupation of the Cape. In 1816 a slave registry was introduced; minimum standards for food, clothing and hours of work, together with maximum punishments, were laid down in 1823; and the compulsory recording and limitation of punishments followed in 1826. Slaves were set free throughout the British empire under a law of December 1833. This Act allowed for a period of four years’ apprenticeship for domestic slaves before they were free to leave their masters’ service. The Act had been drawn up with West Indian conditions chiefly in mind, and Cape owners felt themselves not only under-compensated for the value of their slaves, but unable to draw the compensation money, which was payable only in London. Emancipation took effect in 1838-40.

The Impact of Western Colonisation

Niall Ferguson subsequently also expanded on the footprints left by the Western colonial powers in Civilization – The West and the Rest (Allen Lane, London, 2011). He argued that no previous civilisation had ever achieved such dominance as the West over the Rest. He stated that in 1500 the future imperial powers of Europe accounted for about 10 percent of the world’s land surface and at most 16 percent of its population. By 1913, eleven Western empires (Austria, Belgium, France, Germany, Italy, the Netherlands, Portugal, Spain, Russia, the UK and the USA) controlled nearly three- fifths of all territory and population and more than three-quarters of global economic output. He further claimed that the world’s economic debates were dominated by three Western schools of thought: followers of Adam Smith, John Maynard Keynes and Karl Marx. Ferguson also included the

non-economic world: the scientific method of inquiry, universities, healthcare, agriculture, clothing styles, work-weeks, religious beliefs (Christianity and atheism), humanism, individual rights, the rule of law. Ferguson maintained that what distinguished the West from the rest were six novel complexes of institutions and associated ideas and behaviours: competition, science, property rights, medicine, consumer society and work ethic. (See Niall Ferguson, Civilization – The West and the Rest, op.cit., pp.11-18)

Table 1

Major World Migrations Since 1500 *

(figures in millions)

 1. Europeans to the USA (1600 – 1990)  50.0
 2. North African/East African slaves to Arabia (1500 – 1900)  4.3
 3. Spanish to South & Central America (1530 – 1914)  2.3
 4. Portuguese to Brazil (1530 – 1914)  1.5
 5. West African slaves to South America (1550 – 1860)  4.6
    to the Caribbean (1580 – 1860)  4.0
    to North America (1650 – 1820)  1.0
 6. British & Irish to North America (1620 – 1914)  13.5
    to Australasia (1620 – 1914)  3.0
    to South Africa (1620 – 1914)  1.0
 7. Chinese to South-East Asia (1820 – 1914)  22.0
    to North America (1880 – 1914)  1.0
 8. Indians to East & South Africa (1850 – 1914)  1.0
 9. Germans to North America (1850 – 1914)  5.0
10. Poles to North America (1850 – 1914)  3.6
11. Austro-Hungarians to North America (1850 – 1914)  3.2
12. Scandinavians to North America (1850 – 1914)  2.7
13. Italians to North America (1860 – 1914)  5.0
    to South America (1860 – 1914)  3.7
14. Russians to North America (1880 – 1914)  2.2
    to Western Europe (1880 – 1914)  2.2
    to Siberia (1880 – 1914)  6.0
    to Central Asia  4.0
15. Japanese to East Asia, South-East Asia & America (1900 – 1914)  8.0

16. Mexicans to North America (1950 +)  9.0
17. Africans to Western Europe (1950 +)  2.0
18. Latin Americans/West Indians to North America (1950 +)  4.7
19. Migrant Workers to South Africa (1950 +)  5.0
20. Egyptians to The Gulf (1980 +)  3.0
21. Indians/Pakistanis to The Gulf (1980 +)  2.5
22. Yugoslavs to Western Europe (1940 +)  3.0
23. Vietnamese/Cambodian Refugees (1975 +)  2.0
24. Afghan Refugees (1979 +)  6.0
25. Indian/Pakistani Refugees (1947 +)  15.0
26. Palestinian Refugees (1947 +)  2.0
27. Jews to German Extermination Camps (1940 – 1944)  5.0
28. Jewish Refugees to Israel & elsewhere (1930 +)  2.0

* Based on Philips Atlas of the World, Reed International Books, London, 1995, p.26


Figure 1 *

* Extract from Austin Ranney, The Governing of Men, Holt, Rinehart & Winston, Inc., New York, 1966, p.141


Figure 2 *

* Extract from Austin Ranney, The Governing of Men, Holt, Rinehart & Winston, Inc., New York, 1966, p.142


Figure 3


5. Sub-national Group Formation in International Perspective (September 2014)

Homo sapiens not only comes in many shapes and sizes, but also as a member of many diverse group formations – some self-selected and others ascribed by others. In reality, only a handful of countries can claim to be relatively homogenous. Most states, as territorial political entities, face the challenge of meeting the expectations and maintaining the loyalties of diverse population groups within their boundaries. These sub-national solidarity patterns may be based upon shared religion, language, ethnic identity, perceptions of race, class or region – all of which command rivalling loyalties. These differentiators or cleavages act as badges of sub-national cultural identity. They affect the way people think and act about political priorities – particularly the way they cast their votes. These factors play an important part in political life in New York, London, Brussels, Johannesburg, Lagos, Baghdad or Sydney. Though not entirely discrete, these differentiators bind individuals into group formations and split group formations into separate, often rival, entities.

In today’s world, the problems associated with harmonising intergroup relations as part of the process of modernisation have become a major challenge to policy-makers in many societies. In the quest to find practical answers we need to look beyond the boundaries of single societies.

The Impact of the Nation-State Building Process

The modern state is the most prominent form of political association. Since 1946 the number of sovereign states has soared from 76 to 176. A state can be characterised as a sovereign entity with a prescribed population, fixed geographical boundaries and formal institutions of government which are acknowledged to hold the legitimate monopoly of ultimate coercion and sanctions. But states differ in their historical origin, in the nature of their relationships with other powers, in the characteristics of their constitutions and in the characteristics of race, ethnicity, language or religion which may be explicitly or implicitly asserted as central to the state’s functioning. The origins of today’s nation-states can be classified into various categories.

A first category is made up of lineal descendants of colonial administrative divisions where power devolved to the settler communities. Examples are Latin American states, Southern African states, Australia and New Zealand. The effective political community gradually enlarged itself by absorbing related or adjoining outsiders.

A second category refers to several island republics around the world which were formerly under British, Dutch or French rule, where both metropolitan settlers and indigenous populations were overwhelmed by slave or indentured labour brought from Africa, China or India. These include Haiti, Trinidad, Jamaica, Singapore, Mauritius and Fiji.

A third category is traditional kingdoms which experienced a period of colonial rule. Examples are Nepal, Laos, Morocco, Madagascar, Myanmar, Cambodia, the UAE, Tunisia, Burundi, Basutoland, Swaziland and Rwanda. In most cases the traditional monarchy had been associated with the overlordship of a specific cultural group.


A fourth group comprises states born of partitioning during colonial times. These territories owe their boundaries to imperial manipulation and often lack any historical sanction. Examples of this type of state include most sub-Saharan African states, Indonesia, the Philippines, Syria, the UAE, Lebanon, India, Bangladesh and Pakistan.

A fifth category is composed of traditional states which never fell under prolonged colonial occupation. Examples are Thailand, Afghanistan, Iran, Yemen and Ethiopia. In some cases brief occupations led to border alterations (Iran and Ethiopia). The absence of foreign rule in these states has delayed the impact of social change upon customary institutions.

The final category involves residual cases which do not fit into any of the above categories. Examples are components of the former Ottoman Empire such as Iraq and Jordan which were artificially fabricated to reward potentates who rendered wartime services. Petty monarchs were permitted to seize territory eg. the Saudis in Saudi-Arabia and Senussi leaders in Libya. In the case of Israel, ancient history offered a sanction for the geographical zone designated as their homeland. In Liberia and Sierra Leone, returned slaves were arbitrarily located in selected areas.

The biggest challenge states face as territorial political entities is to meet the expectations and to maintain the loyalties of the diverse populations within their boundaries. Because most states do not have entirely homogenous population structures, they have to contend with sub-national solidarity patterns based upon shared religion, language, ethnic identity, perceptions of race, class or region, which command rivalling loyalties. Each of these cleavages undermines the cohesion and stability a society needs to advance to higher levels of socio-economic development.

The Impact of Race

Africa is said to be the cradle of mankind where forerunners of homo sapiens emerged from pre-human forms five or six million years ago. Homo habilis preceded homo erectus that spread beyond the African continent. By 10,000 years ago all the modern forms of man were in existence and had spread over most of the globe presenting themselves in a wide variety of physical types. Physical anthropologists distinguish four main “races” of homo sapiens: the Caucasoid (or Indo-European), the Negroid, the Mongoloid (Asiatic, including the American Indians and Polynesians) and the Australoid (Australian Aborigines). In the long history of mankind, modern examples of homo sapiens are a comparatively recent phenomenon. The marked differences between the four main racial categories indicate a fairly rapid divergence in physical types in different parts of the world, presumably due to environmental factors and to isolation.

Today, “perceptions of race” play an important role in producing a keen sense of differentiation among people. The instant recognition of variation in skin pigmentation, hair colour or texture and facial characteristics is a key element used as a basis of social differentiation. Although “race” is today largely discredited as a scientific concept in relation to social differentiation among humans, it is widely used in relation to animals such as cattle, dogs and cats to differentiate among breeds. The dynamics of cultural pluralism largely rest upon subjective human sentiment – far removed from the detachment of the scientific laboratory.

Generally speaking it can be said that a sense of pronounced racial identity tends to emerge particularly in multiracial settings. The common sense of being “European” or “African”, “Chinese” or “Indian” only arose in the wake of the creation of multiracial communities since the beginning of the

imperial age. Europeans transplanted to overseas locations tended to perceive the indigenous populations as “natives” or “aboriginals”. A set of stereotypes, often pejorative, usually developed, implying a cultural sameness to the indigenous people. The subjugated indigenous people similarly tended to develop a sense of race-based animosity towards the intrusive outsiders. The Zulus thought of white settlers as “driftwood from the sea” and called them “Abelungu”, in contrast to black people, whom they called “Abantu” (meaning “people”). The brutal confrontation between Africans and Europeans through slavery and colonial conquest created a sense of hostility among native peoples towards the foreign intruders. Racial separation has often been buttressed by legal and social barriers and was reflected in divergent settlement and land use practices, such as the segregation patterns in the USA, Australia and South Africa.

The Impact of Ethnicity and Language

Though not synonymous, ethnicity and language are often linked to describe sub-national or even national group formations. The members of such groups are usually bound by awareness of a common culture, common ancestry, shared historical experience and a common language – even when accents may differ. Members of ethnic groups often express a sense of kinship or give expression to mystic bonds of blood relationship arising out of a common historical experience. There are numerous examples such as the English, Scottish or Irish settlers in the USA, Canada, Australia, New Zealand and South Africa. Proverbially, “speaking the same language” means being on the same wavelength on many issues – as in “singing from the same hymn sheet”.

Ethnic or linguistic identity has diverse origins. Ethnicity can be described as a form of cultural self-awareness among linguistically related peoples. India, with its 1.2 billion population, is a veritable laboratory of such diversity. Linguistic differentiation largely coincides with caste and religious boundaries. Indian linguistic and ethnic clusters have deep roots in Indian history. Provincial frontiers were often drawn along linguistic lines, so that the bulk of schoolchildren receive their education in their own regional language. Hindi is spoken by around 50 percent of the Indian population, but there is little consensus over what Hindi is. Purists demand a return to the Sanskrit sources, while others prefer a standardisation of the Hindi of the marketplace with varying degrees of Urdu admixture. Village Hindi is a diverse series of dialects, often not mutually intelligible and far removed from the Hindi of the towns. Beyond this, there are major sub-categories of Hindi such as Rajasthani, Bihari and Punjabi. It is claimed that marriage or kinship across language boundaries is relatively rare. With linguistic clusters a fact of life, linguistic loyalties have become firmly rooted.

South Africa’s 50 million population is also an example of an ethnic-linguistic mosaic which reflects historically based settlement patterns. There are 11 official languages with “broken English” as the lingua franca. In seven of the nine provinces, the commonly spoken local language is deeply rooted in history, eg. Zulu in KwaZulu-Natal, Xhosa in the Eastern Cape, Sotho in the Free State and Tswana in North West. Gauteng and Western Cape are ethnic “melting pots” where English is to a growing degree accepted as “lingua franca”.

The Impact of Religion

By “religion” is meant any belief system based on the idea that there is an omniscient, supreme (supra-human) deity or intelligence or force equipped with the capability to act as the designer, creator and mover of the entire universe, including everything in it – all natural and moral phenomena. Belief

systems sometimes hold tenets which contradict one another. Sometimes divisions within and between religious groups lead to violent conflict and bloodshed.

Most religions are characterised by both dogmatic and ritualistic aspects which find expression in organisational structures such as churches, shrines and priesthoods. The major religions active today made their appearance during the past 3000 years. The most politically relevant religious affiliations today include Islam, Hinduism, Judaism, Christianity and Buddhism, or one of the particular sects or rites of these.

Since ancient times, religion has played a prominent role in the formation and development of communities and societies:
- offering an account of the origins and nature of reality and humanity’s relationship with it;
- offering a basis for communal identity, social affiliation, cultural cohesion and even territorial attachment;
- offering a foundation for moral values such as thinking and feeling about what is right, just, fair, preferable, true and universally compelling;
- offering a sense of sacred mission exerting a profound hold upon peoples’ emotions while providing a fertile source of social and political cleavage driven by assumptions of a divine or supernatural imperative.

Religion’s origins are traced by social anthropologists to mankind’s tendency to speculate about things unknown to them. Before the scientific method of inquiry emerged in the middle of the second millennium of the Christian era, people had to rely on other sources of knowledge: their imaginations or illusions, their observations and experiences and the utterances or teachings of persuasive individuals among them. The primordial form of religion was “animism” – the attribution of life and significance to the inanimate – in order to explain mysterious facts around them such as life and death.

Over centuries, the world’s major religions have crossed national boundaries and their followers add up to many millions of people – in the case of Christianity and Islam, more than a billion each. Other religions such as Zoroastrianism, Shintoism, Taoism, Judaism and Sikhism are largely local and are inseparably related to specific tradition-bound communities. The Pew Research Centre issued a report in 2010 estimating that around 5.9 billion adults and children – approximately 84 percent of the world population – have some kind of religious affiliation. Even among the more than 1 billion persons who are unaffiliated to any kind of religion, many profess some belief in a higher power. According to the Pew Research Centre, the distribution of religious affiliation in 2010, as a share of the total world population, was as follows: Christian 31.5 percent, Muslim 23.2 percent, Hindu 15 percent, Buddhist 7.1 percent, Traditional 5.9 percent, Jewish 0.2 percent, other 0.8 percent. The median age was around 30 years for Christians, around 25 for Hindus and around 22 for Muslims. Around one quarter of the world’s believers live as religious minorities.
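The Pew percentages quoted above lend themselves to a quick arithmetic check: the listed shares sum to 83.7 percent, essentially the roughly 84 percent of the world population said to hold some religious affiliation. The Python sketch below is illustrative only; the world-population figure of about 7 billion for 2010 is an assumption, not a number from the text.

```python
# Quick arithmetic check on the Pew Research Centre 2010 figures quoted above.
# The listed shares are percentages of the total world population; their sum
# should come close to the ~84 percent said to hold a religious affiliation.
shares = {
    "Christian": 31.5,
    "Muslim": 23.2,
    "Hindu": 15.0,
    "Buddhist": 7.1,
    "Traditional": 5.9,
    "Jewish": 0.2,
    "Other": 0.8,
}

world_population_billions = 7.0  # assumed approximate 2010 figure

total_affiliated_pct = sum(shares.values())
print(f"Sum of listed shares: {total_affiliated_pct:.1f}%")  # 83.7%

# Convert each share into an approximate headcount in billions.
for name, pct in shares.items():
    print(f"{name}: {pct / 100 * world_population_billions:.2f} billion")
```

On these assumptions, Christianity and Islam each come to well over a billion adherents, consistent with the figure cited in the text.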

The profound hold which religion is capable of exerting upon mankind’s emotions and imagination renders religious cleavages particularly intractable. Historical experience has shown that religion can produce both militant identity and a sense of sacred mission. Where religion regards the secular and sacred realms as inseparable, coexistence of different religious communities within the same state is particularly difficult. In some instances religious membership is made highly visible by style of clothing (eg. headscarves, hats, turbans, etc.), rites or festivals, dietary requirements, building styles (mosques and cathedrals), concepts of law (eg. sharia law versus statutory or common law). The Protestant-Catholic cleavage is a classic example of division within Christianity. Similarly the

conflict between Sunni and Shiite Muslims is of prime importance in countries such as Iraq, Yemen, Iran, Sudan, Turkey and Syria. This cleavage still underlies the violent conflict pattern in the world of Islam.

The Impact of Class

Social stratification has been the source of cleavage in many parts of the world. In the Western World it was brought into sharp focus by the Socialist Movement which took root in the wake of the Industrial Revolution of the 18th Century. It was given an ideological foundation in the writings of Marx and Engels which inspired the rise of Communist parties in many parts of the world. Marx and Engels were deeply sensitised by the social distress of the working class. After the publication of The Communist Manifesto in 1848, they became the most compelling prophets of the dispossessed. Their arguments became holy writ for class-struggle action in the destruction of the capitalist class. Marx claimed that the essence of the entire historical process was driven by material forces such as the relationships between capital and labour within the production process. These relationships were marked throughout history by conflict and contradiction between patrician and plebeian, freeman and slave and the perpetual class struggle between bourgeoisie and proletariat. Capitalism is ultimately confronted by a class of exploited workers who are alienated because they are paid less than the value of their work. This class struggle leads to a proletarian revolt which results in the dictatorship of the proletariat which sets up a classless society where the state will wither away.

The revolutionary theories of Marx and Engels reverberated around the world for many generations wherever reactionary regimes stood in the way of underclass aspirations. In several cases this led to the establishment of totalitarian communist systems, because the dictatorship of the proletariat that Marx and Engels propagated as a transitory phase evolved into totalitarian communist party dictatorships in Russia, Eastern Europe, China, Cuba and North Korea. The design of a totalitarian dictatorship was haphazardly left to the devices of Lenin, Stalin and Mao Zedong. As subsequent history has shown, they excelled in masterminding instruments of oppressive power and bureaucratic control. The credibility of these totalitarian “classless” societies started to wane with the implosion of the Soviet state in the 1990s and the introduction of free market practices by Deng Xiaoping in China in the 1980s. The remaining class-based communist regimes only perpetuated the misery of their peoples.

The Impact of Caste

India is a unique laboratory of sub-national diversity: linguistic, religious and caste. These interacting cleavages have all played an important role in defining areas of conflict and have deep roots in Indian history. In some areas caste and language have proved to have similar boundaries and have been mutually reinforcing. Religion divided the population along different lines and tended to overshadow linguistic differentiation.

Around 2000 BC the Aryans, an Indo-European-speaking people who lived in the region between the Caspian and the Black Seas, were driven from their homeland by some natural disaster or invasions. Some tribes moved west across Anatolia and some east across Persia (now Iran, a cognate of “Aryan”) and eventually advanced still further east across the mountains into what is today called India. Today their history is pieced together from the Aryans’ religious “Books of Knowledge” (Vedas), particularly the Rig Veda (Verses of Knowledge), consisting of more than a thousand Sanskrit poems. The Aryan migrants wielded bronze axes as well as long bows and arrows, and their literature describes how they conquered the Dasas (dark-skinned slaves). The term “Aryan”,

while primarily a linguistic family designation, also had the secondary meaning of “highborn” and “noble”. The foremost Aryan tribe was called Bharata, which was probably the name of its first raja (king). Each Aryan tribe was ruled by an autocratic male raja, just as each family was controlled by its father – the origin of the patriarchal household. They occupied and settled the catchment area and valley of the Indus River and its tributaries.

The Rig Veda explains that the four great “castes” (varna) of Aryan society emerged from different parts of the original cosmic man’s anatomy: the brahmans issuing forth first, from the mouth; the kshatriyas second, from the arms; the vaishyas third, from the thighs; and the shudras last, from the feet. All castes fell below the brahmans, who alone were associated with the cosmic “head”.

As they expanded eastwards toward the Gangetic plain, the Rig Vedic Indians (descendants of the Aryans) developed a range of communities and interdependent villages. Each varna (class or caste) – the term originally meant “covering”, associated with skin and its varying colours – had its distinguishing colour: white for brahmans, red for kshatriyas, brown for vaishyas and black for shudras. Acute colour consciousness thus developed early during India’s Aryan age and has since remained a significant factor in reinforcing the hierarchical social attitudes that are so deeply embedded in Indian civilisation.

As more of the pre-Aryan tribal peoples were absorbed within the spreading boundaries of Aryan society, a still lower class was added. It was one whose habits and occupations were so strange and “unclean” that even shudras did not wish to “touch” them. Hence the emergence of those beyond the pale of the four-varna system. They were the “untouchables”, also called “fifths” (panchamas), or outcasts. The pattern of social hierarchy that actually emerged varied greatly from region to region as a result of Aryan and pre-Aryan interaction, eg. in South India and Bengal.

Hinduism’s interaction with the caste system rests on the doctrine that all living things have souls, which are differentiated only through karma, or the effects of previous deeds, which condition successive re-births in different types of body. This is the doctrine of samsara, which has given a very distinctive character to much Hindu thought and philosophy. For the religiously minded Indian, the main spiritual quest for at least 2500 years has been to rise above the cycle of transmigration and to achieve union or close contact with the ultimate Being. Various schools of philosophy identified different means of achieving the supreme goal and gave diverging interpretations of the experiences of the mystics. The whole life of a Hindu is punctuated by ritual acts, ceremonies, sacraments and social customs. These refer to marriage, cremation, animal sacrifices, recitations of verses, sacred images, marriage relationships and ascetic widowhood.

Many trends have appeared in the last hundred years. Many ancient prejudices have disappeared in respect of “untouchables”, divorce and widow re-marriage, taboos and ideas of ritual impurity. Since feeding the hungry and caring for the sick are part of the Hindu social conscience, old Hinduism and caste rigidities are gradually giving way to new interpretations through a process of modernisation.

The Impact of Regionalism

Attachment to region is a universal phenomenon. People tend to develop a sentimental attachment to the area where they were born, were raised or where they live. Local areas, towns and cities, provinces or other sub-national regions tend to generate a degree of loyalty separate from identification with a national territory. Scotland or Wales in the UK; the German, French and Italian

cantons of Switzerland; the Basque and Catalan regions of Spain; the provinces of South Africa; Wallonia and Flanders in Belgium; the Southern or New England states in the USA; Quebec in Canada; Sabah, North Borneo and Sarawak in Malaysia; the trans-Andean Amazonia region in Peru; the Táchira, Caracas and Cumana regions of Venezuela; in Brazil, the gaucho heritage of Rio Grande do Sul, miles away from the carioca culture of Rio de Janeiro – these are all examples of regionalism manifesting itself in cultural and economic ways. Latin America is an example where regionalism became deeply embedded in the larger state system in the early era of caudillos, with imperfect or non-existent communications and central administrations with weak jurisdiction-wide capabilities. The Flemish think of the Walloons as a community of dole bludgers. The Spaniards of Madrid look at the Basques as a collection of irresponsible troublemakers. Many countries have to contend with regional loyalties that rival that of nationhood.

The Activation of Group Conflict

As Aristotle observed more than two millennia ago, mankind is a social animal: one becomes fully human only in association with others. Hence, societies are in essence clusters of groups which provide the breeding ground for patterns of co-operation but also of conflict. The most important arenas for co-operation and conflict have to do with language recognition, the allocation of positions of power and authority, the allocation of privileges and opportunities, education priorities, the allocation of resources and the distribution of tax burdens. These contours of co-operation and conflict are reflected in the patterns of electoral competition. Election outcomes translate the competition between group alliances into ruling majorities or the hegemony of one group over another.

Sub-national solidarity patterns based on race, language and religion especially are capable of generating an intensity of identification which can override all other issues. In some cases they absorb other cleavages and translate them into communal hostility. Racial consciousness, facilitated by its extreme visibility, tends to create its own stereotypes of cultural differentiation. Language, which is the chief medium of social communication, creates networks of intensive social interaction – and also barriers where lack of linguistic fluency creates communication difficulties within a given territory. Religion, by resting on a divine or supernatural sanction, removes differentiation from the everyday level of human debate and rationality. When religion enters the fray, conflict tends to become infused with a supernatural mandate from heaven and is pursued as a holy duty.

When the sources of differentiation intertwine, the potential for violent conflict expands. When the clusters of differentiation become highly visible, such as racial features or ritual dress codes or costumes, the conflict potential takes on added complexity. It identifies individuals as part of in-groups (“one of us”) or out-groups (“one of them”). Such individuals are then easily enveloped in an elaborate syndrome of stereotypes. The saliency of distinctness reinforces both identification with one’s own group and the visibility of its uniqueness with regard to other groups.

Comparative studies have shown that less diverse societies engage in more redistribution and greater provision of public goods such as education and infrastructure. In highly homogenous Denmark, government spending runs to nearly 60 percent of GDP. More diverse America allocates only 39 percent of GDP to government. It is claimed that White Americans, who account for a disproportionate share of high incomes, are less keen on redistribution. Blacks account for a disproportionate share of the poor. In polyglot Singapore only 14 percent of GDP is allocated to government spending.


Transfer payments of government revenue between regions can become a serious cause of contention. Scotland benefits from higher per capita government spending than England in the public finances of the UK. But such redistribution can lead to major party-political realignments. In Belgium prosperous Flanders tends to look down upon the welfare-dependent Walloons. In Germany the “Wessis” are still complaining about the high cost of integrating the “Ossis” since the fall of the Berlin Wall.

The 2014 referendum on Scottish independence is bound to have a major impact on voting patterns in the English parts of the United Kingdom. The Scottish Parliament can make laws in all areas, including education, health and welfare policy, not specifically reserved to Westminster by the Scotland Act of 1998. The Welsh, by contrast, can only exercise specifically devolved powers in their Assembly. In Northern Ireland devolution is tied to power-sharing between nationalists and unionists. In 2012 a new Scotland Act granted Edinburgh greater income tax and borrowing powers, due to be transferred in 2015 and 2016. During the referendum campaign, which ended in a “no” vote on September 18th, 2014, the leaders of the Conservative, Labour and Liberal Democrat parties promised the Scots ever greater autonomy to encourage them to stay within the United Kingdom. In view of the narrow victory of the “no” vote, the UK party leaders face the awkward situation of having to offer comparable concessions to the “English” part of the UK. UK policy makers are also faced with the growing influence of UKIP – the United Kingdom Independence Party. Scotland, Northern Ireland and Wales have their own assemblies which control much domestic policy, while England, with 84 percent of the UK population, is still run from Westminster, where English MPs have to share power with those from Scotland, Wales and Northern Ireland. This anomaly is bound to have an important impact on future electoral politics.

Africa, with its more than a thousand ethnic groups and languages crudely lumped together by colonial mapmakers, has been particularly prone to secessionist campaigns. Many countries are artificial units where tribal loyalties overshadow national sentiments. In Nigeria this led to a civil war (1967-70) when the south-eastern region known as Biafra sought to break away – resulting in up to 2 million deaths. Similarly, the efforts of the Congo’s south-eastern province of Katanga to split off led to years of warfare. In both cases the secessionist campaigns failed.

In other cases, long and bloody civil wars eventually led to internationally recognised separation. Eritrea’s secession from Ethiopia was accepted in 1993. South Sudan was recognised as independent from the Arab north in 2011. In both cases peace accords were approved by referendums.

There are several instances in Africa where separatist sentiments are still actively endangering the maintenance of national territorial integrity. In Mali, the Tuareg people together with assorted jihadists have been kept under control by French troops. In north-eastern Nigeria, the Islamist fanatics of Boko Haram seek to establish their own Islamic caliphate, more or less in line with the Islamic State jihadists in northern Syria and Iraq. In the Western Sahara the Polisario movement, egged on by Algeria, seeks independence from Morocco for the phosphate-rich coastal strip, which Morocco occupied when Spain left in 1975. The African Union (AU) backs Polisario, but France and other big powers support Morocco. In 2001 the Somalilanders, whose territory was separately administered by Britain before becoming part of Somalia in 1960, voted overwhelmingly in a referendum for independence from the rest of Somalia. (See The Economist, September 27th-October 3rd, 2014, pp.53, 59 & 70)

To cope with the problem of diverging sub-national loyalties, many countries have developed a range of policy alternatives with varying degrees of success: federalism, devolution, special representation, cultural neutralism (sometimes called multi-culturalism), integrative ideology (e.g. socialism), assimilation, encapsulation and expatriation (secession or partitioning). These alternatives are aimed at reconciling sub-national diversity with territorial survival. Unfortunately these measures do not guarantee peaceful co-existence. Violence, in a variety of forms, has been present as a political instrument in every major region of the world: assassination, rioting, pillage, guerrilla warfare and endemic civil warfare. When the cleavages active in a society tend to overlap, the conflict potential rises to levels where the preservation of established national co-existence patterns becomes impossible. As the old saying goes: “a house divided against itself cannot stand”. History has bequeathed to us several artificially demarcated state boundaries that enclose population groups characterised by irreconcilable cleavages.

Concluding Remarks

The process of modernisation has been closely associated with the introduction of technology and scientific patterns of thought in society. It also involved the acceptance of modern forms of social and political organisation.

In the Western World the modernisation process was given strong impetus by three ground-breaking trends in the opening of the European mind: the Reformation (16th century), the Scientific Revolution (17th century) and the Enlightenment (18th century). Much of the classical learning was rediscovered in the Renaissance, but the Reformation was accompanied by the revolutionary role of the printing press which enabled writing in the vernacular rather than Latin. Religious leaders such as Luther and Calvin encouraged believers to read the scriptures for themselves. The pioneering work of Copernicus, Galileo, Boyle and Newton in the field of “natural philosophy” (as science was then known) spilled over into the fields of social science and metaphysics under the influence of many outstanding scholars such as Kant, Schiller, Hume, Locke, Smith, Montesquieu, Voltaire and many others. It involved the acceptance of a wide range of humanitarian and universalistic ideals associated with the Enlightenment. Gradually the unlimited sovereignty of religion was replaced by the rational interpretation of the world of experience.

Modernisation does not necessarily mean a unilinear pattern of change according to a “Western” development model. But it does mean a transformation to improved living conditions, less disease and ignorance and the engagement of the individual members of society in an inclusive process of problem-solving. The most generally accepted perception of the process of development is an economic one, i.e. finding and following a process where a growing proportion of society can share in opportunities to achieve a higher standard of living. In such societies economic growth is supported by modernising sociological and psychological changes involving the transformation from “traditional” to “modern” ways of doing things.

Modernisation involves a set of important changes in the organisation of society and the role played by the individual members. There is a movement away from strict identification with primary groups; from social norms assigning status on the basis of inherited characteristics to achievement-based norms recognising how well a person performs important functions in society. It requires a political order able to cope with the diverse demands and to set the appropriate standards in terms of which the reasonable demands and interests of the component groups can be reconciled and the quality of their lives be improved.




6. The Rise and Decline of the Anglo-Saxon Model (July 2011)

To describe the “Anglo-Saxon Model” accurately is an impossible task. It is essentially a label used by outsiders to describe the predominant socio-economic-political lifestyle of English speaking countries – often used in a pejorative sense by critical commentators. As a genus it can be described as a form of societal organization that originated in the British Isles and spread to other parts of the world with the emergence of the British Empire. It is characterized by its reliance on free enterprise, private ownership, competitive markets, comparatively limited government intervention, a legal system heavily laden with Common Law, representative government and constitutional parliamentary democracy. It does not exist in pure form anywhere, but the species of the genus have evolved in a variety of forms in English-speaking countries.

The English-speaking countries have always been a broad church encompassing the United Kingdom and most of its former colonies: the United States, Canada, Australia, New Zealand and Ireland. The most prominent and trend-setting exponent of this model is the United States of America, but the original fountainhead is the United Kingdom. The original template also left some footprints in South Africa, India, Singapore, Malaysia, Sri Lanka, Pakistan, Nigeria, Kenya, Egypt and even in Zimbabwe, Swaziland and Botswana.

Impact of Migration Patterns

Auguste Comte, a 19th century French philosopher and one of the pioneers of modern sociology, said “demography is destiny”. It is certainly, at the very least, highly important as a motor of socio-cultural change. To understand the impact of the migration of millions of people from Europe to the New World or to Australia, New Zealand and also South Africa, it is instructive to imagine what the outcomes would have been without the arrival of the migrants. In the wake of the waves of immigrants came a host of influences: values, belief systems, traditions, ideals, knowledge, skills, practices, institutions and other ways of doing things. The migrants moved for a variety of reasons. Some were driven by religious considerations (either persecution or aspiration), some were forcefully transported away as slaves or, equally perniciously, as exiled “convicts”, but the largest proportion were attracted by the expectation of a better life.

The scale of 17th and 18th century migration from the British Isles was unmatched by any other European country. From England alone, total net emigration between 1601 and 1701 exceeded 700,000. These large movements of population transformed cultures and complexions of whole continents. The fingerprints of the flow of culture and institutions cannot be easily expunged. The scale of British migration can be brought into perspective by considering that the total world population in 1700 stood at less than 1 billion. (See Niall Ferguson, Empire – How Britain Made the Modern World, Penguin Books, 2003, Chapter 2).

In Elizabethan England, a Vagrancy Act passed in 1597 stated that “Rogues, Vagabonds and Sturdy Beggars” were liable “to be conveyed to parts beyond the seas”. Those parts referred to the British colonies in North America. Prisoners condemned to death by English courts could potentially have their sentences commuted to deportation. Some of the deportees were “common criminals”, but political dissidents were also disposed of in this fashion – frequently used as punishment for Irish dissidents. Prisoners were sent to Virginia or Maryland to work on tobacco plantations until the growing number of slaves exported from Africa replaced them.


In 1718, Britain passed a Transportation Act, which established a seven-year banishment to North America as a possible punishment for lesser crimes, and also stated that capital punishment could be commuted to banishment. Exile thus became a systemic part of England’s justice system. It was thought to be advantageous for everybody involved: it was considered more humane than executing or flogging; it “offered” the possibility of “moral rehabilitation” and freedom afterwards; it rid the population of dangerous individuals; it deterred others tempted to commit crimes; and it provided workers where there was a great want of servants.

Transportation to North America continued for nearly 60 years, and only ceased when the American colonies revolted in 1776. By that time over 40,000 criminals had been shipped to the New World. This English practice “of emptying their jails into our settlements” was roundly rejected by the colonial elite and added to the grievances that resulted in the Declaration of Independence in 1776 and the ultimate formation of the United States of America in 1787. Once America refused to accept further shipments of convicts, England’s prisons began to overflow. The convicts were first accommodated in decommissioned ships (“hulks”) for several decades. The ultimate solution was New South Wales (Australia) after James Cook’s discoveries in the 1770s. The “First Fleet”, carrying 736 prisoners (188 of them women), set off from England in May 1787 and took 252 days to reach its destination at Botany Bay, Sydney, in January 1788. The second fleet arrived in 1790 and became known as the “Death Fleet” because the convicts were so badly treated that more than a quarter died during the voyage.

The “Thief Colony” survived and by the 1830s around 3,000 convicts arrived in Australia each year to be assigned as cheap labour to “free” Australians (free settlers) and “emancipists” (those who had served their sentences). Repeat offenders were sent for severe punishment to Macquarie Harbour in Tasmania and to Norfolk Island in the Pacific. The last convict ship left Britain for Australia in 1868. The records show that by that time 161,021 men and 24,900 women had been sent as convicts to Australia. (See Russell King ed., Origins – An Atlas of Human Migration, Marshall Editions, 2007, pp.95-105). A cursory glance at the pattern of immigration to the major immigrant-receiving countries which were former British colonies illustrates the scope of the British cultural influence. South American countries, in contrast, illustrate the Latin influence of particularly Spain, Portugal and Italy.

Ethnic Composition of Immigration up to 1940: Selected Countries

USA: British 11.1%; Irish 11.6%; Canadian 8.0%; German 15.6%; Scandinavian 6.2%; Austro-Hungarian 10.5%; Russian 8.5%; Italian 12.0%; Other 16.5%
Canada: British 37.0%; USA 37.0%; Other 26.0%
Australia: British 80.5%; New Zealand 4.5%; Other European 8.3%; Asiatic 3.5%; Other 3.2%
Argentina: Spanish 32.2%; Italian 47.4%; French 4.0%; German 2.0%; British 1.0%; Other 13.0%
Brazil: Portuguese 29.0%; Spanish 14.0%; Italian 34.0%; German 4.5%; Other 18.5%

(Figures quoted by Austin Ranney, The Governing of Men, Holt, Rinehart and Winston, N.Y., 1966, p.145)


The USA ranks as the major destination by far, not only for migrants coming from the British Isles, but also from other countries on the European continent – particularly in the period up to World War II. With few exceptions, all Americans are descendants of immigrants. Hence, the USA is often referred to as the most obvious example of a “melting pot” nation. But it is of great importance to bear in mind that the receptacle in which all the various immigrant groups were “melted” was an “Anglo-Saxon” receptacle. The “pot” or container into which the particles were thrown or blended was labelled “Anglo-Saxon”, not “Latin”, “African”, “German” or “Asian”.

Samuel Huntington, in a well-documented publication, Who Are We? The Challenges to America’s National Identity (2004), makes the point that America would not have been the country it has been (and in some measure still is today) if it had been settled in the seventeenth and eighteenth centuries not by British Protestants but by French, Spanish or Portuguese Catholics. It would have been closer to Quebec, Mexico or Brazil.

Among the key elements of the Anglo-Protestant founding culture are “the English language; Christianity; religious commitment; English concepts of the rule of law, the responsibility of rulers, and the rights of individuals; and dissenting Protestant values of individualism, the work ethic, and the belief that humans have the ability and duty to try to create a heaven on earth, a ‘city on a hill’”. Huntington acknowledges that that culture has evolved and been amended by the contributions of subsequent immigrants and generations, but its essentials remain. This culture is also the primary source of the political principles of the American creed, which Jefferson set forth in the Declaration of Independence and which has been articulated by American leaders from the Founders to the present day.

From their national beginnings both Canada and Australia have imposed various restrictions on immigration intended to accomplish two main objectives: (1) to maintain the largely British ethnic composition of their populations, and especially to bar Asians; and (2) to attract mainly agricultural rather than industrial workers or members of the learned professions. In pursuit of the first objective Canada set up a system between the two world wars applying to immigrants from all nations except Great Britain and the United States, which distinguished between “preferred”, “non-preferred” and “other” countries. Immigrants from the “preferred” nations (Western Europe and Scandinavia) were admitted on terms similar to those applying to Great Britain. Immigrants from “non-preferred” nations (Central and Eastern Europe) were admitted only if they proposed to be agricultural workers or domestic servants. Immigrants from the “other” nations were admitted only by special permits, which were rarely given. Australia, for decades, pursued the first objective by firm adherence to its “white Australia” policy, which barred Asian immigration. Both nations sought the second objective by encouraging immigration especially from Great Britain (and in the case of Canada, also from the USA), through such measures as maintaining agents abroad and paying part of the passage of desirable immigrants.

Since WWII, however, immigration policies of both nations changed. More immigration has subsequently been allowed from Central and Eastern Europe as well as from Asia. Immigration policies are now based on a point scoring system taking into account the age, qualifications and experience of potential immigrants as well as shortages in the labour market.

The important point to make, however, is that the people who settle a country first leave the biggest imprint. The language of the new nation, its laws, its institutions, its political ideas, its literature, its customs, its precepts – all are primarily derived from the mother country. Hence the British Anglo-Protestant culture defined the new nations more than any other. Its Anglo core predisposed the country to a greater emphasis on property rights and individualism; its Protestant core predisposed it to hard work. The melting pot may subsequently have had more ingredients poured into it. But the pot itself is of a recognisable Anglo-Protestant design.

The UK as Original Template

Wherever the British governed a country, there were certain distinctive features of their own society that they tended to disseminate. Niall Ferguson suggests the following as the more important items on the list: the English language, English forms of land tenure, Scottish and English banking, the Common Law, Protestantism, team sports, the limited or “night-watchman” state, representative assemblies and the idea of liberty. (See Niall Ferguson, Empire – How Britain Made the Modern World, Penguin Books, 2003, Introduction)

The Westminster System

The British system of government dates from the Middle Ages and was gradually transformed from monarchical absolutism, first to limited democracy and eventually to a fully-fledged constitutional democracy based on popular participation. This system became widely known as the “Westminster System”, which comprises the following: a hereditary monarch with ceremonial powers as head of state; a parliamentary system of executive power where a prime minister and his cabinet are responsible to a popularly elected parliament within a competitive party system; an independent judiciary, appointed by the head of state as advised by the cabinet; an electoral system based on popular franchise in single-member constituencies; public responsibility and accountability by way of free elections at constitutionally based regular intervals; freedom of speech and of political association and activity by individual citizens; and decision-making by majority vote.

With varying degrees of success the Westminster system has been exported to several other countries. This system pre-eminently took root in former British colonies – with the exception of the USA which adopted a republican form with an elected head of state. In many cases the system was adapted in certain important respects to meet with the requirements of local conditions. The parliamentary system of executive power has been taken over by all countries that still maintain a constitutional monarchy. But several countries have introduced electoral systems based on some form of proportional representation (e.g. Australia).

The unitary system of government has not shown itself to be a suitable answer to the problem of diversity. In countries with extensive land areas, such as Australia and Canada, the idea of a centralized unitary state was replaced with a decentralized federal system along the lines of the USA. In the case of India, the problem of diversity was dealt with by partitioning the pre-independence Indian colony into the independent states of India and Pakistan. Ireland was also allowed, after a period of violent conflict, to become an independent country, with Northern Ireland remaining part of the United Kingdom.

The Westminster system as such is based on many constitutional conventions which evolved over several centuries. Even in its classic form it is currently undergoing a gradual transformation to accommodate regional sentiments in Wales and growing national sentiments in Scotland.


Idea of Liberty

Not all British imperialists were liberals: some were very far from it. But what is striking about the history of the Empire is that whenever the British were behaving despotically, there was almost always a liberal critique of that behaviour from within British society. Ferguson claims that “... so powerful and consistent was this tendency to judge Britain’s imperial conduct by the yardstick of liberty that it gave the British Empire something of a self-liquidating character” (ibid.).

“Anglobalization”

This process is well described in Niall Ferguson’s book Empire – How Britain Made the Modern World. It began with the competitive scramble for global markets as British pirates scavenged from the earlier empires of Portugal, Spain, Holland and France. The British were imperial imitators.

British colonization was a vast movement of peoples, unlike anything before or since. Some left the British Isles in pursuit of religious freedom, some in pursuit of political liberty, some in pursuit of profit. Others had no choice, but went as “indentured labourers” or as convicted criminals. An important role was played by voluntary, non-governmental organizations such as evangelical religious sects and missionary societies. All contributed in paving the way for the expansion of British influence. The British came close to establishing the first “effective world government”. This was achieved with a relatively small bureaucracy roping in indigenous elites. The use of military force was a key element of British imperial expansion. The central role of the British navy was evident around the world – first in its privateering role and later also as the transporter of soldiers to the far ends of the world.

Niall Ferguson argues that the British imperial legacy is not just “racism, racial discrimination, xenophobia and related intolerance” as is sometimes claimed, but that there is also a strong credit side:
- the triumph of capitalism as the optimal system of economic organization;
- the Anglicization of North America and Australasia;
- the internationalization of the English language;
- the enduring influence of the Protestant version of Christianity; and above all,
- the survival of parliamentary institutions.

He then goes on to quote the words of Winston Churchill: “What enterprise that an enlightened community may attempt is more noble and more profitable than the reclamation from barbarism of fertile regions and large populations? To give peace to warring tribes, to administer justice where all was violence, to strike the chains off the slave, to draw the richness from the soil, to plant the earliest seeds of commerce and learning, to increase in whole peoples their capacities for pleasure and diminish their chances of pain – what more beautiful ideal or more valuable reward can inspire human effort?” (ibid.)

The USA as Mankind’s Best Hope

The USA can boast the world’s oldest continuous democracy. For close to a century it has provided a template for the free world and, in many ways, for others as well.


Over the past century it has rescued Europe from total military destruction during the First and Second World Wars and from socio-economic implosion during the Cold War period lasting from 1945 to 1989. Constant opposition from the USA on all important fronts led to the ultimate collapse of the Soviet communist empire.

Early Building Blocks

After Columbus made landfall in the Bahamas in 1492, South and Central America became the possessions of Spain and, to a lesser extent, of Portugal. The northern reaches of the New World became an area of activity for the British, Dutch and French.

John Cabot made landfall on the North American continent in 1497 and claimed it for the English Crown. But it was only in the 1580s that England began to make plans to colonize North America’s eastern seaboard and its “fair and fruitful regions”. After an exploratory expedition by Walter Raleigh in 1584, the first colonial settlement took place in 1585 at “Virginia”, so named in honour of Elizabeth I, the “Virgin Queen”. The first colonists almost starved and had to be ferried home. The second settlement in 1587 (also on the island of Roanoke) mysteriously disappeared. The third settlement attempt was made in 1607 in Chesapeake Bay under the auspices of the Virginia Company, a pair of English joint stock companies chartered by the new king, James I. In the course of the next 15 years over 10,000 men and women made the voyage to Jamestown, but by 1622 only 2,000 people were still living in the community – largely as a result of disease and hunger, but also of their own ineptitude and inexperience.

The saving grace for the early settlers in Virginia seems to have been the introduction of a lucrative cash crop – tobacco. The Virginia Company freely gave land to the colonists to grow tobacco. The dried weed was shipped back to England and the profits started flowing – to the colony and to the company. So Virginia became the land of prosperous plantation owners providing a growing market for the slave trade.

At the time when Virginia began to flourish, a different kind of settlement took root in what was later called “New England”. In 1620 a group of religious dissenters, known to history as the “Pilgrim Fathers”, sailed from Plymouth in southwest England to Cape Cod (Massachusetts) and founded a religious colony called “Plymouth” – bound together by their disdain for the hierarchical and ritualistic nature of the Church of England. Their relish for hard work and friendly interaction with indigenous tribes enabled them to survive. In the 1630s they were joined by a great migration of Puritans – a distinct sect within the Protestant fold. Within a decade over 20,000 Puritans travelled from England to New England, where they founded the city of Boston. In time, several other religious splinter groups such as the Quakers and Baptists also migrated to new colonies such as Rhode Island and Maryland.

In 1614 the Dutch founded a trading post on the Hudson River, and in 1625 they established the settlement of New Amsterdam at the river’s mouth. But under Charles II, Britain took control of the Dutch possession in 1664 and named it New York, thus uniting under one flag and one language the area stretching from New England in the north to Virginia in the south, which became an important part of the British sphere of influence in the New World.

The millions of immigrants who would travel to the New World during the next three centuries might well have been attracted by the values of the early religious migrants: an instinctive respect for fellow human beings, an idealistic belief in advancement through honest toil, and a refusal to be denied the liberty of conscience. (Russell King (ed), Origins – An Atlas of Human Migration, Marshall Editions, London, 2007, pp.95-99)

Political Credo

The American political credo is essentially based on the political philosophy of the English political philosopher John Locke. Its essence is embodied in the American Declaration of Independence written in 1776: “We hold these truths to be self-evident … that all men are created equal … that they are endowed by their Creator with certain inalienable Rights, that among these are Life, Liberty and the Pursuit of Happiness … that to secure these rights, Governments are instituted among men … deriving their just Powers from the consent of the governed … That, whenever any form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government …”.

This declaration ultimately found concrete expression in the American Constitution that was approved by all the federating units in 1787. The essential characteristics of this constitution are as follows:
- the protection of civil rights;
- popular sovereignty and republicanism;
- structural constraints on government power;
- federalism; and
- constitutionalism.

Thomas Jefferson was the chief drafter of the Declaration of Independence. Founding Fathers such as Washington, Madison, Hamilton and Jay drafted the Constitution and assisted in the propagation of its approval by all thirteen original constituents of the United States of America.

The American Constitution became the template for written democratic constitutions around the world: post-World War II European constitutions, post-Communist East European constitutions, African constitutions and East Asian constitutions.

Uniqueness

Alexis de Tocqueville, a well-known French scholar who visited the USA in the 1830s, wrote that “everything about the Americans is extraordinary”. He was also impressed by the soil that supports them. America has natural harbours on two great oceans, an abundance of every possible raw material, a huge range of farmed crops, and is the fourth largest country in the world, of which two-thirds is habitable. De Tocqueville also found great distinctiveness in America’s “laws and mores”.

America has the highest per capita income in the world, but also the highest rate of imprisonment in the world. It has more elective offices than any other country and also one of the lowest voter turnouts. America has never had a major socialist party, nor a significant fascist movement. It has one of the lowest tax rates among rich countries, the least generous public services, the highest military spending, the most lawyers per capita, the highest proportion of young people at university, the most persistent work ethic and a strong belief in personal responsibility and self-reliance.

Abraham Lincoln once said America is “the last, best hope of earth”. Today, many Americans still support the ideal of America as the best hope of all mankind.

Impact of Constitutional Arrangements

All Anglo-Saxon countries are constitutional democracies. They have sound legal systems providing a secure foundation for economic interaction based on generally accepted business practices; independent judiciaries to guarantee the rule of law; a transparent system of rules; effective business networks and support systems; high levels of educational attainment; an abundance of technocratic skills; extensive managerial skills; reasonably stable industrial and labour relations during the past two decades; respect for law and order – generally speaking; and an enabling environment for value-adding business enterprise.

The United Kingdom, Ireland and New Zealand have unitary constitutional arrangements which result in a similar degree of centralization. The USA, Canada and Australia are federations, which facilitate decentralized decision-making adapted to local conditions. In all Anglo-Saxon countries there are vibrant traditions to maintain media freedom. As a consequence, public office holders are held to account by the court of public opinion and the activities of numerous organized interest groups. Elections are held at constitutionally determined regular intervals. Electoral systems are subject to clear-cut rules to ensure transparency, fairness and the legitimacy of election outcomes.

Political Party Rivalry

Political parties are associations formed to influence the content and conduct of public policy in favour of some set of ideological principles and/or interests, either through the direct exercise of power or by participation in elections. At one end of the spectrum, some are built on pure ideology (e.g. communism, socialism, liberalism or nationalism). At the other end, parties are collections of people with specific interests such as power, influence, income, deference, etc. Most parties are “hybrids” combining aspects of both categories and ranging from cadre parties with strong missions to catch-all parties acting as brokers for a variety of interest groups. There are also various degrees of affiliation and motivation amongst party supporters.

In the UK, parties date back to the 18th century with the emergence of the “Whigs” and the “Tories”. The “Whigs”, initially under Walpole, consisted of groups of parliamentarians in favour of the extension of the parliamentary sphere of influence, and they became the forerunners of what is today called the “Liberal Democrats” in the UK. The “Tories”, on the other hand, advocated the maintenance of the prerogatives of the monarchs, and they became the forerunners of the “Conservative Party”. The “Labour Party” emerged as the political arm of the trade union movement, and to this day the British Labour Party and the Trades Union Congress share their headquarters. In Australia it is customary for all Labour Party parliamentary representatives to be members of trade unions and for the trade unions to dominate the formation of the policies and the financing of the Labour Party.

The “left-right” ideological spectrum in political parties dates back to seating arrangements in the semi-circular French legislative chamber after the French Revolution. The most radical, egalitarian elements sat on the left side, while the more conservative, aristocratic elements, sat on the right side.


The more moderate middle-of-the-road groups occupied the centre seats. In the English-speaking world, the left-right spectrum is associated with the degree of state intervention propagated by political parties. To the left are parties that support expansive state intervention in the economy: more regulation and control, increased welfare services, a progressive tax system, higher taxes on the “top end”, deficit spending and pro-union labour relations. To the right are parties in favour of deregulation, fewer welfare entitlements, lower taxes, less bureaucracy, more individual choice, and support for free enterprise.

Left-wing parties such as the Democratic Party in the USA, the Labour Parties in the UK, Australia and New Zealand, normally push the system towards the left of centre. Right-wing parties, such as the Conservative Party in the UK, the Liberal Party in Australia, the Conservative Party in Canada and the Republican Party in the USA, normally push towards right of centre positions.

In the USA, the major political parties endeavour to draw at least some electoral support from every major ethnic, occupational, religious, economic and educational group in an effort to act as “brokers” for the widest possible support base. Nevertheless, the Republican and Democratic Parties have both had a fairly rigid core of supporters over many decades. The Republicans can traditionally count on the support of the following groups: WASPs (White Anglo-Saxon Protestants), farmers, small-town dwellers, professionals and corporate business leaders. The Democratic Party is traditionally supported by trade union members, the working class generally, big-city dwellers, minority groups, African Americans, Jews and leftist intellectuals. Historically, the Southern states used to be essentially a Democratic Party power base, but in recent decades they have shifted towards the political divisions predominant elsewhere in the USA.

In the UK, Australia and New Zealand, the Labour and “conservative” parties have somewhat more clearly defined ideologies, although they also try to project “broker” party images. Labour parties tend to derive support (in terms of voting and money) from working-class and trade union members, lower-status and lower-income persons, teachers and academics. The “conservative”-leaning parties normally draw most of their support from higher-income groups, business circles, rural constituencies, persons with a Protestant religious affiliation and professionals.

The social composition of the leadership and support patterns of political parties in the Anglo-Saxon world does not imply a rigid, confrontational, inflexible contest between the major political parties. The major political parties generally tend to move to centre positions in order to attract the broadest possible support base. Much depends on the political arithmetic applicable at any particular time and place. Political parties have to take into account changing patterns in voting behaviour, in party identification, in issue orientation as well as in candidate orientation. The correlates of voting preference in most democratic countries, such as social status, region, religion, gender and age, do change over time. In none of the mentioned countries are all the classes, occupations, religious affiliations or other socio-economic characteristics evenly distributed over the whole country. Each party has its enclaves of solid support and its political deserts. In all of the Anglo-Saxon democracies, the unique cross-sectional demands on political parties are reflected in the composition of their leadership. On their party committees and their slates of candidates are the names of Catholics and Protestants, Jews and Gentiles, lawyers, businessmen, farmers, trade unionists as well as teachers.

Ideological Rivalries

An ideology can generally be described as a mode of thought or a set of ideas in terms of which a programme of collective action or a state of collective arrangements or affairs is justified. Sometimes it is something people would like to see continued or would like to see come into being. Mostly it is a rationalisation of a perceived reality, visualised like a picture in the mind, however distorted or idealised.

The two most active ideological debates in the English-speaking world are the liberal versus conservative dichotomy and the capitalist versus socialist dichotomy. Both dichotomies have deep historical and philosophical roots which require careful consideration.

Liberalism versus Conservatism

The word “liberalism” became known in England in the early 19th century as a description for policies with a strong attachment to liberty or freedom. Freedom, however, is an equivocal word: there are different kinds of freedom, and hence more than one kind of liberalism. The English concept of liberalism expounded by John Locke and John Stuart Mill is a kind of “Whig” liberalism. It rests on an idea of liberty understood in terms of freedom from state interference in the actions of the individual. Thus Lockean liberalism leans to laissez faire, toleration, natural rights and limited sovereignty.

The continental concept of liberalism as inspired by Jean Jacques Rousseau leans more to the ancient Roman notion of libertas as participation in government – not as being left alone by government. In contrast to Lockean liberalism, this kind of liberalism is more interested in the people as a community or nation (res publica). It is not individualist but “étatiste”. These two competing strains of liberalism, the “Whig” type inspired by Locke and the “étatiste” type inspired by Rousseau, were often in conflict in political movements in England as well as on the Continent.

Eventually, Lockean “Whig” liberalism was challenged by a revisionist school called “social liberalism”. Under the influence of the continental “étatiste” brand of liberalism, its adherents wanted to enlarge the role of the state as an instrument of social improvement. This meant overthrowing, or at least revising, the traditional Lockean “Whig” definition of freedom as freedom from state interference. T.H. Green, the Oxford philosopher, argued that freedom should be understood “positively”, so that state measures to promote social welfare and education could be seen as measures to enlarge, not diminish, freedom.

On the English parliamentary scene the Liberal Party was unable to survive the contradictions between the new and old theories of freedom. The movement of “social liberalism” carried many of its adherents into the fuller socialism of the Labour Party, while champions of the old Lockean philosophy of freedom found that they had more in common with the Conservative Party.

In the USA there has never been a substantial Liberal Party in Congress, and the word “liberal” has been used mainly as what Maurice Cranston calls “… one vague alternative to such vague appellations as radical, progressive, or even ‘left-wing’”. (See Maurice Cranston, A Glossary of Political Terms, The Bodley Head, London, 1966, p.68)

The terminological confusion in the USA stems from the expropriation of the time-honoured word by various political movements. Supporters of Theodore Roosevelt’s “progressive” movement began to use the word “liberalism” as a substitute for “progressivism” after Roosevelt lost on a Progressive third-party ticket. Then in the early 1930s, Herbert Hoover (Republican Party) and Franklin Roosevelt (Democratic Party) fought it out as to who was the “true liberal”! Roosevelt won the election and adopted the term to ward off accusations of being left-wing. Hence he declared that liberalism was “plain English for a changed concept of the duty and responsibility of government toward economic life”. Since the New Deal, liberalism in the United States has been identified with expansion of government’s role in the economy. (See Daniel Yergin and Joseph Stanislaw, The Commanding Heights, Simon & Schuster, New York, 1999, p.15)

Conservatism as a political word came into currency in London during the 1830s, but what it describes is a much older, conscious manner of thinking. It is perhaps the most common of human attitudes, and it was turned into a political doctrine in response to the challenge of radical changes to the status quo brought about by deliberate and conscious manipulation.

Edmund Burke’s Reflections on the Revolution in France, published in 1790, is generally accepted as the first articulate expression of conservative philosophy. Burke saw the present as a continuation of the long and decisive heritage of the past. Men bore a heavy responsibility to conserve much of that heritage as part of a divine order for generations to come. He insisted that social and political life are the outcome of a complex set of powerful forces that cannot be wished away with grandiose schemes. The conservative, in the Burkean tradition, is opposed to any root-and-branch meddling with society, whether from “left” or from “right”. The Conservative Party in Britain, the Republican Party in the USA and the Liberal Party in Australia tend to take a generally conservative attitude conveying a proud attachment to tradition, frequently containing a religious component.

Modern political conservatives support the original kind of economic liberalism based on a limited role for the state, the maximisation of individual liberty, economic freedom, reliance on the market and decentralized decision making. It emphasizes the importance of property rights and sees government’s role as the facilitation and adjudication of civil society. It means less government, not more.

Capitalism versus Socialism

The other active ideological dichotomy debated in the English-speaking world over the past three centuries concerns capitalism versus socialism. These debates have permeated intellectual life and form the underpinnings of political affiliations – left of centre and right of centre.

Adam Smith (1723-1790) is generally recognised as the first economist to articulate the central “principles” on which a liberal economic society is to be based, in his famous publication “An Inquiry into the Nature and Causes of the Wealth of Nations”. He offered certain ground rules for economic progress: regulation by competition and the market, not by the state; and an economic society in which each man, thrown on his own resources, laboured effectively for the enrichment of society. His ideas were later further developed and refined by David Ricardo (1772-1823) and Thomas Robert Malthus (1766-1834). Together with Adam Smith they were the founding pioneers of economics as the subject is known in English-speaking countries. They became known as the “Respectable Professors of the Dismal Science”. Ricardo focussed on the factors determining prices, rents, wages and profits. Malthus is particularly famous for his Essay on Population which essentially maintained that the

number of people who can live in the world is limited by the number that can be fed – a fact of life that is not properly appreciated even in the 21st century.

Capitalism developed historically as part of the “individualist” movement in 18th century Britain and was later transplanted to North-Western Europe and North America. It is characterised by a number of basic traits: individual ownership, a market economy, competition and profit. It is also historically connected to individualist democracy – a “one man one vote” system of majority rule seeking what Jeremy Bentham called “the greatest happiness of the greatest number”.

Socialism as a mode of thought stands for a form of society in which economic activities are deliberately planned and controlled, on behalf of the community as a whole, by the state. It opposes free enterprise, under which individual firms compete on their own initiative to supply goods and services. Instead, it aims at (1) public ownership of the means of production; (2) a welfare state to care for the needy; and (3) creating a society of abundance and equality, through collective action based on the “general will” – also rooted in the majority decision.

The origins of socialist thought, triggered by the appalling conditions created by the Industrial Revolution, can be traced to the humanitarianism of Christian ethics – compassion for the downtrodden, the poor, the suffering and the exploited. In addition, the ideas of Jean Jacques Rousseau and Francois Babeuf in pre-revolutionary France paved the way for the emergence of the idea that the “organic society” binds together the common good of all individuals. Karl Marx synthesized the ideas of Rousseau, Babeuf and others with his own ideas about dialectical materialism, the class struggle and the revolution of the proletariat. Marx, a German Jew born in Trier, spent most of his later life in London, where he expanded his theories of “scientific socialism” and wrote Das Kapital, The Communist Manifesto having been written earlier with his friend and financier Engels. With Engels’ support he tried unsuccessfully to organize the socialists into an international movement. After his death his ideas inspired Lenin and Trotsky, who ultimately succeeded in 1917 in establishing a totalitarian form of Soviet Communism that lasted until 1991.

In England the main exponent of Utopian Socialism was Robert Owen (1771-1858). He was a wealthy industrialist who supported Britain’s social, political and economic system, but his compassion was deeply stirred by the human suffering brought about by the Industrial Revolution. He was deeply concerned about the wretched condition of his employees and became associated with Jeremy Bentham, a renowned social reformer of the day. Bentham in turn had close contact with other famous names of the day, James Mill and his brilliant son, John Stuart Mill.

Owen was opposed to “dole” programmes, under which the destitute were simply given money by government or charities. At his New Lanark mill, Owen introduced new management techniques, raised wages, encouraged trade unions, opposed the exploitation of women and child labour, encouraged education and set up a company store selling goods at reduced prices. Within five years New Lanark’s profits improved, illustrating that the welfare of his workers could be reconciled with the profitability of the business. Owen may be considered the true founder of British socialism. He retired at 58 and travelled widely, including to the USA, to propagate his ideas. He later expanded his reforms to include communal living by investing his entire fortune in a project at New Harmony (Indiana, USA). It ultimately ended in failure.

The socialist movement in Britain was particularly enhanced during the last decades of the 19th century in response to the poverty and slums spawned by industrialization and economic crises. Much

of this reaction was articulated by the Fabian Society, Beatrice and Sidney Webb and George Bernard Shaw. These influential intellectuals sought to replace the “scramble for private gain” with the incremental instalment of “collectivism”.

In the 1930s British socialists found much common ground with the interventionist reforms of Franklin Roosevelt’s “New Deal”. Some members of the socialist intelligentsia in Britain found much rapport with Soviet communism. Fierce battles were fought to avoid a communist takeover of the British union movement.

The Second World War itself vastly enlarged the economic realm of government. The government essentially took control of the economy and ran it for the duration of the war. The population rallied together and shared the experience of the “stress of total war”, turning the national economy into a common cause rather than an arena of class conflict. Even the royal family had ration books.

These historical currents led to a rejection of Adam Smith, laissez-faire, and the traditional 19th century liberalism as an economic philosophy. The concepts of “self-interest” and “profit” became morally distasteful. Clement Attlee maintained that private profit as motive for economic progress was “a pathetic faith resting on no foundation of experience”. The Labour Party leaders promised to turn government in the post-war era into the protector and partner of the people and take on responsibility for the well-being of the citizens to a far greater extent than had been the case before the war.

The blueprint for the Labour Party strategy was the Beveridge Report prepared by a government-appointed commission during World War II. William Beveridge, a former head of the London School of Economics, set out to slay the “five giants”: want, disease, ignorance, squalor and idleness. The report’s influence was global and far-reaching, changing the way not only Britain, but the entire industrialised world, viewed the obligations of the state vis-à-vis social welfare. (See Daniel Yergin and Joseph Stanislaw, The Commanding Heights – The Battle Between Government and the Market Place that is Remaking the Modern World, New York: Simon & Schuster, 1999, p.24)

Britain’s Post-World War II Left Turn

After World War II the UK took a sharp turn to the left. The Conservative Party under Winston Churchill was voted out in 1945 and replaced by the Labour Party under Clement Attlee, which immediately started a comprehensive interventionist strategy.

In its election campaign the Labour Party undertook to nationalize certain listed industries and services. In each case, it tried to explain why nationalization was necessary. For gas and light, water, telephone and telegraph, and other utilities, the criterion of nationalization was the existence of a natural monopoly. The coal, iron and steel industries were considered to be so sick and inefficient that they could not be put on their feet except through nationalization. The nationalization of all inland transport by rail, road and air was proposed on the ground that wasteful competition would best be avoided by a co-ordinated scheme owned and managed by public authorities. The Bank of England was also proposed for nationalization on the ground that its purpose was so obviously public. After its electoral triumph in 1945 the Labour Party methodically carried out its programme.

The Labour Party also introduced a National Health Service so that health and medical facilities could be made available to every person without regard to their ability to pay. Although no one was compelled to join the NHS, more than 90 percent of the population and of the doctors were brought

under the scheme. The Labour Government also set up a comprehensive cradle-to-grave scheme of social security. The system provided protection against sickness, unemployment and old age – supplemented by maternity grants, widows’ pensions and family allowances. These programmes were subsequently augmented with the extension of educational opportunities at school and university levels.

In the 1960s and 1970s it became apparent that Britain was suffering from some kind of wasting disease. Britain’s share of world trade declined continually. In 1955 Britain still generated 20 percent of developed-country exports; by 1977 its share had declined to only 10 percent. In 1939 Britain’s standard of living was second in the world only to the USA’s. By 1980 it was trailing all the European Community countries, as was the productivity of British workers.

The major problem proved to be its strike-prone labour force. Trade unions could not be sued for breach of contract, and wildcat strikers even continued to receive social security benefits. Industrial plants became obsolete as a result of chronic under-investment. Britain struggled chronically with the weakness of its currency and a deficit in its balance of payments. In a sense the British were fighting over the eggs while neglecting to feed the hen.

To make the outlook worse, it transpired that Britain, of all the advanced Western states, had the smallest proportion of its young people in secondary and higher education. It also had the smallest percentage studying science and technology. Britain experienced serious problems with its nationalized industries. In contrast to private business, where the system of profit-and-loss accounts as well as the bankruptcy law exercises a crude but effective discipline, public enterprises had no comparable control system. If there are losses, no one goes bankrupt. If the enterprise or industry is in the red, management can either increase prices, because it has a monopoly, or receive cheap credits or subsidies from government. By 1979 the unproductive inefficiency of Britain’s nationalized industries and the disruptive labour relations had reached such a level of unacceptability that the Conservative Party under Margaret Thatcher came into power.

Thatcher’s Leadership

By 1976 the entire UK was virtually on the dole, forced to borrow money from the International Monetary Fund to protect the Pound and to stay afloat. As a condition for the loan, the IMF required a sizeable cut in public expenditures. Labour Prime Minister Callaghan told the annual Labour Party Conference: “We used to think that you could just spend your way out of a recession to increase employment by cutting taxes and boosting government spending … that option no longer exists … and insofar as it ever did, it worked by injecting inflation into the economy”. (Yergin & Stanislaw, ibid, p.104)

By the beginning of 1979 the country was in crisis. Public-sector employees struck, hospital workers stopped working, medical care had to be severely rationed, garbage was piling up in the streets, striking grave-diggers refused to bury the dead, truck-drivers went on strike, trains ran irregularly and coal miners went on strike. Finally, the Labour Government fell on a vote of no confidence.

What essentially needed to be changed was the political culture of the country. Thatcher, assisted by Keith Joseph, campaigned across the country against all the manifestations of statism and the Keynesian-collectivist mould. The Institute of Economic Affairs (IEA) led by economists Ralph Harris and Arthur Seldon provided much of the research data. The IEA also provided a platform for two

ground-breaking economists on the international scene: Friedrich von Hayek from the free-market “Austrian School” and Milton Friedman from the “Chicago Monetarist School”. Hayek called for a shift back from Keynesian macro-economics and the world of the multiplier, to micro-economics and the world of the firm where wealth was actually created.

In their election campaigns Margaret Thatcher and Keith Joseph sought “conviction politics”, not “consensus politics”. They set out to challenge the entire consensus upon which the mixed economy rested. They maintained the economy was “over-governed, over-spent, over-taxed, over-borrowed and over-manned”. Joseph maintained: “The private sector is the indispensable base on which all else is built … we need the wealth-creating, job-creating entrepreneur and the wealth-creating, job-creating manager and that … capitalism was the least bad way yet invented”. Even The Economist, at that point very much part of the predominant mixed-economy consensus, lambasted Joseph as a “Mad Monk”!

When Margaret Thatcher, the daughter of a grocery store owner, Alfred Roberts, became Prime Minister in 1979, she told the world: “I owe almost everything to my father – particularly integrity. He taught me that you first sort out what you believe in. You then apply it. You don’t compromise on things that matter”. She said her father imparted to her homilies and examples about hard work, self-reliance, thrift, duty and standing by your convictions even when in a minority. She went to Oxford University, graduated in chemistry, and later studied for the bar and became a lawyer specializing in patents and tax.

As Leader of the Opposition from 1975 onward, she became known as a strong free-market supporter. She carried a copy of Hayek’s The Constitution of Liberty in her briefcase. During the 1980s much of Margaret Thatcher’s programme became known as Thatcherism – a combination of privatisation, patriotism, hostility to trade unions and, above all, a belief in people taking responsibility for themselves instead of expecting the state to take responsibility for them. She laid the foundations for the recovery of the British economic system and supervised momentous political and social changes. She maintained that government was doing too much. She set out to replace the “nanny” state with its “cradle to grave coddling” with the rewards of an “enterprise culture”.

She had to contend with an inflation rate of 20 percent and interest rates of 16 percent, with enormous pay increases promised to public-sector workers, state-owned companies draining money out of the Treasury, monopolistic nationalized industries and monopoly trade unions. Huge and controversial cuts were made in government spending and the programmed public-sector pay hikes had to be rolled back. As Prime Minister she was as unpopular as any prime minister since the start of opinion polling.

In the general election of 1983, she won a huge landslide, with a 144-seat majority. She was now in a position to pursue a full-blown programme of Thatcherism: a rejection of Keynesianism, a constraining of the welfare state and government spending, a commitment to the reduction of direct government intervention, a sell-off of government-owned businesses, a drive to reduce high tax rates and a commitment to reduce the government’s deficit.

A big hurdle still remained: the overwhelming power of the unions. In 1984 she took on the unions. The miners’ union was, at the time, receiving funds from Libya’s Gaddafi and from the Soviet Union. After a year it capitulated and decades of labour protectionism came to an end.

The next step was the privatization of the nationalized industries: British Telecom, British Gas, Heathrow Airport, North Sea Oil, British Petroleum, British Steel. In the process the Exchequer Account was filled with billions to reduce government debt. Thatcher believed in privatization to achieve her ambition of a capital-owning democracy – “… a state in which people own houses, shares, and have a stake in society and in which they have wealth to pass on to future generations”. Thatcherites disbelieved in government knowledge, as “… governments enjoy no unique hotline to the future”. State-owned companies proved in practice to be highly inefficient, inflexible, politically pressured to maintain and expand employment far beyond what is needed, unable to resist the wage pressure from public-sector unions and thus becoming major generators of inflation. They piled up huge losses which they solved by “recourse to the bottomless public purse”. What was missing was the discipline of the market’s bottom line.

Margaret Thatcher’s third electoral victory in 1987 was also the beginning of the end of an era. Thatcher saw a new bureaucratic monster rising up in the European Community’s Brussels offices and opposed joining a single European currency. She was accused of domineering leadership and of over-nationalistic opposition to the European Community. Her leadership of the Conservative Party was contested from within, and after she withdrew her candidature, John Major succeeded her as Prime Minister.

Margaret Thatcher’s critics described her as self-righteous, rigid and uncaring. But her legacy proved powerful and lasting. Her policies found a strong echo in “Reaganomics”, the pro-business policies of President Ronald Reagan. Most of her policies were emulated throughout the world – particularly by left-wing governments at the time in Australia, New Zealand, Canada and subsequently also in the “Third Way” policies of Clinton, Blair and Brown.

The Enduring American Dream

During the 1980s and 1990s, the USA’s performance on the essential economic yardsticks continued to be strong: growth in output, productivity and job creation. The Economist calculated in 2004 that the average person in the Euro area was still about 30 percent poorer (in terms of GDP per person measured at purchasing-power parity) than the average American. Europeans are still stuck with lower living standards than Americans. But Olivier Blanchard of the IMF claims that Europeans have used some of their increase in productivity to expand their leisure rather than their income. Americans, by contrast, continue to toil longer hours for more income – so who is really better off? The average American worker clocks up 40 percent more hours during his lifetime than the average person in Germany, France or Italy.

Values and Belief Systems

A remarkable 80 percent of Americans claim to be “traditionalists”, holding “old-fashioned values” relating to family, marriage, patriotism and religion. At the other end of the spectrum are the “secular rationalists”, who are less concerned about religion and patriotism and who are predominantly single, tolerant, hedonistic, secular and multicultural. According to Gertrude Himmelfarb in One Nation, Two Cultures, these value judgments are better predictors of political affiliation than wealth or income. In the 2000 election 63 percent of those who went to church once a week voted for George Bush, and 61 percent of those who never went voted for Al Gore. Traditionalists live largely in the “red” states encompassing the mid-west and the south. Secularists live on the densely populated Pacific coast, the eastern seaboard and in the north-eastern and upper mid-western “blue” states.

Traditionalists are heavily concentrated in smaller towns and rural areas. Secularists dominate big cities. Multiculturalism is deeply entrenched in “blue” states.

The states with the highest levels of immigration of Latinos and Asians include New York, New Jersey, New Mexico and California. These are considered to be the new “melting pots” and they tend to vote predominantly for the Democratic Party. White immigrants tend to be more conservative.

Roughly 40 percent of Americans describe themselves as politically moderate and are less supportive of the death penalty. There is also growing acceptance of gays and of inter-racial dating. But America is still the most religious rich country in the world.

Over 80 percent of Americans say they believe in God and 39 percent describe themselves as born-again Christians. Over 58 percent think that unless you believe in God you cannot be a moral person. About 30 percent of Americans are said to be evangelical Protestants, but an equally large proportion of the Christians are Catholics.

Civic Culture

De Tocqueville noted in the 1830s that “… Americans of all ages are forever forming associations”. He argued that these civic associations connected people to each other, made them better informed, safer and better able to govern themselves, and helped create a just and stable society. Fraternities and sororities on campuses, the League of Women Voters, scouts and girl guides, farmers’ associations, industrial unions, volunteers and a variety of clubs (social, recreational, professional) all provided reinforcement for civic participation, reinforcing the ideal of good citizenship and engagement with public life.

Churches are also recognized as archetypical civic clubs because congregations meet regularly face to face. All available information points towards church attendance rising. There are also several examples of the trend towards “mega-churches”, i.e. weekly congregations of 10,000 and more. Old hymns, pulpits and even church-like buildings are being replaced by churches that look like schools, with information booths, food courts, PowerPoint presentations, small-group discussions and social activism.

Peter Drucker, the veteran management writer, observed that the most significant organizational phenomenon in the first half of the 20th century was the company. In the second half, it was the large pastoral church. Today it is the mega-church that stands out as the provider of community bonding.

Growth and Dynamism

Few countries can remotely equal the growth patterns and dynamism of the USA. The quality and quantity of intellectual life has been the highest in the world for many decades. A quarter of American adults have a university education. The country produces one-third of the world’s scientific papers, employs two-thirds of the world’s Nobel-prize winners, has 17 of the top 20 universities (as ranked by Shanghai’s Jiao Tong University).

The country’s size and wealth, combined with its meritocratic traditions and technological prowess, have made it easier for Americans to explore new opportunities – to voluntarily move to where the jobs are. In a typical year 40 million Americans will move house – particularly footloose young graduates showing a frontier spirit.


As a dynamic society, America is remarkably open to trade. Trade barriers are low and the USA has for decades served as the largest market in the world for major exporters like Japan, Taiwan, China and European countries. Without the American market, these countries would not have been able to grow so spectacularly since World War II.

But it is also remarkably open to immigration. Since 1990 about 16 million people have entered the USA legally. Its ability to absorb so many newcomers is astounding. The country is home to between 30 and 40 million people who were born abroad – about one in ten. It is not clear how many of these are legal migrants. This trend arouses anxieties – particularly about the Hispanic wave rolling in from Mexico. On the one hand is the “Anglo-Protestant-African-Catholic-Indian-German-Irish-Jewish-Italian-Slavic-Asian part of society”. On the other hand is the Spanish-speaking sub-group, which is growing unusually fast and is continually fed by the country next door through a porous border. Moreover, the majority of Hispanics are under 30, which implies continued high birth rates.

Multiculturalism

The Census Bureau forecasts that by 2050 the Hispanic population will have increased by 200 percent, the population as a whole by 50 percent and the whites by 30 percent. Hispanics will then constitute 25 percent of the total population and be concentrated in New York, Los Angeles, San Francisco, Chicago and Miami.

Ethnic Composition of the USA Population (%)

                        2004 estimate       2030 projection
                        (300 million)       (364 million)
Hispanic                     14                  20
Black                        12                  14
Asian                         4                   6
Other                         3                   2
White non-Hispanic           67                  58

Several studies have found, however, that Hispanics, like earlier waves of immigrants, are gradually assimilating American culture and language and moving into the mainstream. There is also evidence of growing inter-marriage between Hispanics and other settled communities. Mixed marriages in America, one in twenty-three in 1990, have since increased to one in fifteen. Nearly half of the 3.7 million inter-racial marriages in the USA today have one Hispanic partner. The danger of America becoming a “cleft” society seems to be receding.

In view of the large inflow of non-Protestant and non-British immigrants, it is to be expected that America’s founding Anglo-Protestant culture will undergo profound changes in coming years. The definition of “Americanism” has already been broadened. Many Catholics dislike being told that Protestantism is the source of the country’s dynamism. Likewise, the rapidly expanding assortment of non-Anglos dislikes any hint that they are less than fully American. But the “melting pot” itself is still of a recognisable Anglo-Protestant design.


New Stratification Patterns

The idea that anything is possible if you work hard enough is an enduringly popular part of the American Dream. But can the ladder of success be climbed by all?

During the past three decades the rich have been doing better than the less well off. Since 1979 median family incomes have risen by 18 percent, but the incomes of the top 1 percent (Wall Street financiers?) have gone up by 200 percent. As a share of total national income, the bottom fifth’s earnings have declined relative to the top fifth’s. The top 0.1 percent of Americans also earn two to three times as much as their peers in Britain and France. However, analysis of the position of the poorer echelons in absolute rather than relative terms indicates that a smaller share of the total population is living in poverty than before. As John Parker wrote in The Economist of July 16th, 2005, “… The rising tide has lifted dinghies as well as yachts.”

Americans seem to mind more about equality of opportunity than equality of results. Most Americans feel their chances of moving up a notch have improved over the past 30 years and say that their standard of living is higher than that of their parents. But social mobility is being eroded by fundamental changes in the economy, above all the increased rewards for intellectual skills. As a result, the income gap between college graduates and those without degrees has widened steadily over the past 30 years. Lifetime employment is out and job-hopping is in. Today almost all chief executives hold an advanced degree such as an MBA, and persons with a university degree are more likely to move up the income brackets than those without.

These changes point to a growing stratification based on education: an education-based meritocracy. But the rise in the cost of education has put “Ivy League” universities out of reach of most middle-class and poor families. The median income of families with children at Harvard is $150,000 p.a. Students from the richest quarter of the population are increasing their share of places at America’s elite universities.

But even outside elite schools, students from poor backgrounds are losing ground. The underlying causes are not easily neutralised because they involve family behaviour. Isabel Sawhill of the Brookings Institution argues that a person’s chances of a good education, good job and good prospects – i.e. of moving upwards – are partly determined by family behaviour. On this view, the rich really are different, and not just because they have more money: class stratification is more than a matter of income or inherited wealth; it also depends on the structure of the family itself. Children from stable, dedicated families on average have a better chance to succeed than children born out of wedlock or neglected by jobless parents. If the key to upward mobility is finishing your education, having a job and getting and staying married, then children of stable families start with advantages that go beyond money.

Population Trends

While the populations of many countries in Europe and Japan are ageing and on the verge of shrinking, the USA population is growing by almost 1 percent a year – a rate that adds the equivalent of the population of Chicago every year. Immigration over the past 20 years has risen steadily to just over 1 million arrivals per annum. The bulk come from Latin America and Asia, and they tend to have children at a far more prodigious rate than either white or black groups. Sustained by this young and fertile immigrant population, the median age of the USA will remain at 35 for the next 50 years, in contrast to the European median age, which will rise from 41 to 53. In the second half of the 21st century, whites of European origin will be a minority in the USA.

America’s fertility rate is 60 percent higher than Japan’s and 40 percent higher than the European average. It is taking in immigrants at a faster rate than Europe and doing better in assimilating them. America will be the only big developed country where children outnumber pensioners and one of the few developed countries where the working-age population is still growing. America is likely to remain relatively young and dynamic.

In 2008 the American people elected Barack Obama as President. The son of a black African student and a white American mother, he was raised largely by his maternal grandparents in the state of Hawaii, worked as a community organizer on Chicago’s South Side, trained as a lawyer at Harvard University and was elected as a senator for the state of Illinois. He chose as his Vice-President Senator Joe Biden of Delaware, an Irish-Catholic American. The year 2008 also marked the start of the Global Financial Crisis, precipitated by the excesses of the cronies of the financial world of New York. The melting pot has been given a new stir.

Recent Trends

The last decade of the 20th century and the first decade of the 21st century have been characterised by the continued momentum of the free enterprise model. Some countries, such as Russia, Indonesia and Japan, experienced steep downturns. But the world economy powered on – largely in the wake of the continued growth of the American and European markets. The biggest beneficiary was the emerging enterprise economy of China.

The Third Way

The United Kingdom has been governed by “Third Way” policies since 1997 which, in essence, continued along the path already marked out by Thatcherism. Even The Economist pronounced that Mr Blair, after a year in office, was “proving to be a pretty good Tory after all”. New Labour in power proved to be Tory government by other means; the only difference appeared to be one of style. Margaret Thatcher was forced to be confrontational and divisive in order to change the political culture. Tony Blair could afford to be smooth, consensus-seeking and inclusive – in effect a good marketing device for Thatcherism. Old Labourites began accusing Mr Blair of being merely a Tory in disguise.

The essential point to make is that policy-making in the United Kingdom has become less ideologically driven. It has become more pragmatic in the sense of seeking evidence-based policy options that work. Whatever works is good. The left seems to have liberated itself from outdated preconceptions. Tony Blair and Gordon Brown have been instrumental in modernising the political approach of the British Labour Party. They realised that economic growth and wealth creation are the key to national prosperity – without which Labour’s traditional social concerns could not be addressed. They realised that competition is the key and supported public-private partnerships and competition in public service delivery.

Blair also modernised the Labour Party’s approach to electoral campaigning by relying on consultant-inspired initiatives – to some critical observers, an over-reliance on gimmicky political spin. It is perfectly legitimate for political leaders to market their political messages. But a problem arises when a successful PR strategy outpaces the implementation of well-managed, quality service delivery. This leads to a gaping mismatch between promises and achievements. The essence of responsible statecraft is to achieve the best outcomes with the means available.

Failure of Compassionate Conservatism

The George W. Bush era in the USA was characterised by the globalization of American party politics. Since the start of the 21st century the Democrat-versus-Republican contest has been screened on the world monitor, particularly by CNN and the BBC, with relentless intensity. The contest has been displayed like a soap opera in every “international” hotel room from East to West and throughout Europe, Africa and the Arab world. The script of the serial has been dutifully written by the New York Times and its associated publications.

The skirmish began with George W. Bush’s controversially narrow electoral victory in 2000, was then highlighted by the 9/11 disaster in New York and then serialised during the “Neo-conservative” attack on Iraq. Film makers like Michael Moore provided the short-clip trailers on the underlying oil-contract and Halliburton conspiracies. George W. Bush was repeatedly portrayed as a mis-speaking, stupid fumbler and turned into the laughing stock of the world. In retrospect it is clear that George W. Bush was not equipped with the mass-media potential of a Barack Obama. He was destined to fall victim to the demands of the changing times. The mantra of “compassionate conservatism” never had the potential to capture the imagination of the “affluenza”-infested, cyber-connected, new-technology-powered generation. The critical mass has shifted away from anything that appears “old hat” to anything that sounds “new-fashioned”.

The financial crunch that was triggered by the collapse of Bear Stearns’ hedge funds in 2007 is also attributed to the failure of the Bush era. In reality the groundwork of the credit crunch was laid by policy choices to increase home-ownership during the Clinton era, when Fannie Mae and Freddie Mac were pressured by the Democrats in Congress to grant home loans to non-creditworthy borrowers. Monetary policy, which failed to adjust interest rates appropriately to control asset-price bubbles, was in the hands of the Federal Reserve. The abuse of the financial regime was perpetrated by the financial fraternity in Wall Street.

In the UK the Conservative Party under the leadership of Mr Cameron also flirts with the concept of “compassionate conservatism” in order to provide a philosophical underpinning for Cameron’s policies. It is also meant to tune into the popular frame of mind: to be “open-minded” and to show compassion to society’s unfortunates. But Mr Cameron’s camp is keen to show that Conservatives are sceptical about the role of the state. Conservatives do not believe that every problem can or should be solved by state action through a series of “top-down” initiatives. Rather, problems should be dealt with by the “connected society” based on culture, identity and belonging, linked together by decentralised intermediate institutions called the “third sector” – an umbrella term for voluntary, not-for-profit and charitable organisations. These institutions should be called upon to take over the delivery of many state services, especially in extending help to the victims of “state failure”. Conservatives are particularly keen on harnessing the creativity and energies of the “third sector” in delivering programmes to get people off long-term incapacity benefits. In this way they are trying to nibble at the edges of state activity and to confront the immense power of the public-sector unions.


The Aftermath of the Global Financial Crisis of 2008/09

The onset of the 2008/2009 GFC unleashed a climate of apprehension. A variety of analysts, commentators and opportunists tried to find the culprits or to allocate the blame for the economic catastrophe.

A Chinese leader remarked that “the teachers have some problems”. France’s Sarkozy was more populist with his finding “laissez-faire is finished”. The Australian Prime Minister, Kevin Rudd, also jumped on the bandwagon and blamed “… the prevailing neo-liberal economic orthodoxy of the past 30 years”. He roped in a team of advisors (“and others with a common interest in the ideological origins of the current crisis”) to outline a neo-socialist big government strategy to lead the world into a new utopia with a properly constituted and properly directed formula to achieve “… the common good, embracing both individual freedom and fairness, a project designed for the many, not just the few”. (Kevin Rudd, “The Global Financial Crisis”, The Monthly, February 2009, p.29)

The claim that the Anglo-Saxon system has failed (also called the “Washington consensus” or the “neo-liberal economic orthodoxy”) is more a piece of journalistic or political opportunism than a serious intellectual argument. What is implied is that the policies of deregulation and privatisation have actually brought the world economy to the brink of disaster, and this calls into question arguments in favour of market solutions in all fields, including health, education and international trade.

In reality the overall lowering of barriers to trade has delivered progress on a dramatic scale over the past three decades. Hundreds of millions of people have been lifted out of absolute poverty. The attack on deregulation is also ill-advised unless specific areas of regulatory action are discussed. No responsible person would deny the need for regulating the world of finance, which has been prone to panics, crashes and bubbles since the early days of international banking. Governments have always been involved in the regulation of finance: prescribing capital cover, monitoring risky strategies, punishing scams, etc. But in recent years financial innovation outpaced the rule-setters. Derivatives such as credit-default swaps flowed undetected into the international financial circuit. The imbalances caused by China’s decision to hold down its exchange rate sent a wash of capital into the American market. The crisis was caused as much by policy mistakes as by Wall Street’s excesses.

The Economist of October 18th, 2008, gave cogent expression to the current dilemma in the following words: “Heavy regulation would not inoculate the world against future crises. Two of the worst in recent times, in Japan and South Korea, occurred in highly rule-bound systems. What’s needed is not more government but better government. In some areas, that means more rules. Capital requirements need to be revamped so that banks accumulate more reserves during the good times. More often it simply means different rules: central banks need to take asset prices more into account in their decisions. But there are plenty of examples where regulation could be counter-productive ….

Indeed, history suggests that a prejudice against more rules is a good idea. Too often they have unintended consequences, helping to create the next disaster, … If regulators learn from this crisis, they could manage finance better in the future. Capitalism is at bay, but … for all its flaws, it is the best economic system man has invented yet.”


Concluding Remarks

In his provocatively challenging tome, Civilization – The West and the Rest (Allen Lane, London, 2011), Niall Ferguson claims that most of the world is now in practice integrated into a Western economic system in which, as Adam Smith recommended, markets set most prices and determine the flow of trade and the division of labour, while government plays a role closer to the one envisaged by John Maynard Keynes, intervening to smooth business cycles and reduce income inequality. Most people now accept the great scientific truths revealed by Newton and Darwin. These attributes were developed in close interaction with continental European contributions, leading to a culture of autonomous intellectual inquiry, the scientific method of verification, and the rationalisation of research and its diffusion. This mode of operation was allowed to flourish by the support of sound or good institutions: sound financial intermediaries and good government. Institutions are the product of culture: they formalise a set of norms determining what is conducive to good outcomes rather than bad ones.

The economies of the United States and Western Europe now face the real prospect of being overtaken by China within a decade or two. Not only is Western “hard power” struggling in the Muslim world; the Attlee consensus of collectivist provision in the social democracies is also faltering, just as the “Washington consensus” on free-market economic policy disintegrates.

Ferguson argues that the financial crisis that began in 2007 points to a fundamental flaw at the heart of the consumer society, with its emphasis on debt-propelled retail therapy. The Protestant ethic of thrift has all but vanished; deferred gratification used to be described as the corollary of capital accumulation. He quotes Max Weber: “Protestant asceticism works with all its force against the uninhibited enjoyment of possessions; it discourages consumption … And if that restraint on consumption is combined with the freedom to strive for profit, the result produced will inevitably be the creation of capital through the ascetic compulsion to save”.

The United States and much of the Western World have, for the past few decades, lived through an experiment in “capitalism without saving”. Families were enticed not only to consume their entire disposable income but also to draw down the equity in their homes. The decline in thrift turned out to be a recipe for financial crisis. Those who borrowed more than the value of their assets lost everything; those who invested in securities backed by mortgages suffered large losses; banks that borrowed heavily to invest in securities suffered first illiquidity and then insolvency; to avert massive bank failures governments stepped in with bailouts; and a crisis in private debt mutated into a crisis of public debt. Variations on the same theme were played out in all English-speaking countries. It was a crisis caused by over-consumption and excessive financial leverage.


Bibliography

Ferguson, Niall (2003) Empire – How Britain Made the Modern World, Penguin Books, London
Ferguson, Niall (2011) Civilization – The West and the Rest, Allen Lane, London
Galbraith, J.K. (1958) The Affluent Society, Houghton Mifflin Company, Boston
Cranston, M. (1966) A Glossary of Political Terms, The Bodley Head, London
Henderson, David (1999) The Changing Fortunes of Economic Liberalism, Institute of Economic Affairs, London
Huntington, Samuel P. (2004) Who Are We? – The Challenge to America’s National Identity, Simon & Schuster, New York
King, R. (2007) Origins – An Atlas of Human Migration, Marshall Editions, London
Lowi, T. (1976) American Government – Incomplete Conquest, Dryden Press, Hinsdale
Ranney, A. (1975) The Governing of Men, Dryden Press, Hinsdale
Rossiter, C. (1953) Seedtime of the Republic – The Origin of the American Tradition of Liberty, Harcourt, Brace & World, New York
Wolfe, Alan (2008) The Future of Liberalism, Knopf, New York
Yergin, D. & Stanislaw, J. (1999) The Commanding Heights – The Battle Between Government and the Market Place That Is Remaking the Modern World, Simon & Schuster, New York
The Economist (2003) A Survey of America
The Economist (2005) A Survey of America
The Economist (2008) Special Report on America and the World


7. The British Political Economy in the Post-Colonial Era (June 2013)

Britain had been the first Western country to industrialise, a process that began on a large scale around the 1760s. In the two centuries that followed, it was the only major industrial power that did not suffer a convulsive revolution, foreign conquest or civil war, unlike France and Germany. Its development rested on a Common Law tradition, arbitrated by judges, which upheld rights of liberty and property and provided the legal framework within which the British created the first modern industrial society. This continued throughout the nineteenth century as an effective legal setting for vibrant economic development. During the second half of the 20th century, however, Britain’s fortunes declined, only briefly interrupted by a flare-up during the Thatcher years before returning to their downward trend.

The Legacy of the Colonial Era

For more than two hundred years, Britain played a central role in the financing and banking activities of the English-speaking world. By the beginning of the 20th century, its overseas investments exceeded those of France and Germany combined – most of it in the USA, Canada, New Zealand and Australia, but substantial proportions also in Latin America, Asia and Africa. No major economy before or since has held such a large proportion of its assets overseas. In 1914 only around 6 percent of British overseas investments were in Western Europe. Around 45 percent were in the United States and the major English-speaking colonies. As much as 20 percent were in Latin America, 16 percent in Asia and 13 percent in Africa. The bulk of its African investments were in South Africa. The bulk of the Latin American investments were concentrated in Argentina and Brazil.

Britain’s overseas investments proved to be highly lucrative – giving higher returns than those from domestic manufacturing. The earnings from existing overseas assets consistently exceeded the value of new capital outflows. Between 1870 and 1913, total overseas earnings amounted to 5.3 percent of GDP a year.

For many decades Britain was the major promoter of the principle of free trade. By the late 19th century, around 60 percent of British trade was with extra-European partners. This meant that on top of her huge earnings from overseas investment, other foreign earnings handsomely enhanced her balance of payments accounts: “invisible” items like insurance fees, shipping charges, commissions and agency fees. These capital flows enabled Britain to import much more than she exported. Between 1870 and 1914, the terms of trade moved by around 10 percent in Britain’s favour.

Britain also set the standard for the international monetary system by coupling the value of its paper money to a fixed gold standard. By 1908 only a handful of countries were outside the gold standard. The gold standard had become, in effect, the global monetary system – in all but name it was a sterling standard.

The Two World Wars and the Depression

The first major crisis facing the British economy was brought about by the First World War. A total of 702,410 British servicemen were killed, of whom 37,452 were officers. After the war there was a brief recovery, but the evidence of industrial decline was obvious, particularly the fundamental weakness of Britain’s traditional export industries – coal, cotton and textiles, shipbuilding and engineering. They were saddled with old equipment, old animosities and work practices, low productivity and chronically high unemployment. Around 1921 the economy experienced a painful downturn and contracted by around a fifth in a single year. Britain resumed her role as the world’s banker, but paying for the war led to a ten-fold increase in the national debt.

Then came the American-led boom period of the “roaring twenties”. The American President, Calvin Coolidge, and the ascent of “Americanism” influenced much of the English-speaking world to accept limited government intervention and to create expanded scope for private enterprise. The prosperity was widespread, but it excluded certain older industrial activities, such as the textile industry. The expansion expressed itself in spending and credit. Millions acquired insurance, shareholdings and the benefits of a building boom. The focus of the consumer boom was personal transport: the automobile, which brought freedom of medium- and long-distance movement for millions. The middle class was also moving into air travel. The electricity industry further fuelled the prosperity of the Twenties. With relative speed, industrial success transformed luxuries into necessities and spread them down the class hierarchy. The affluence of the twenties also played a large role in the decline of radical politics and their union base.

The speculative boom of the 1920s ended in the wake of the Wall Street Crash of October 1929. Due to the paucity of relevant data, it was not clear initially how bad things were, or how bad they were going to get. In January 1932, around 3 million people in Britain, close to 25 percent of the workforce, were out of work. The impact of the Depression on Britain was nevertheless milder than in the USA and Germany. Britain had left the gold standard in 1931, and sterling became the anchor of the world’s largest system of fixed exchange rates, the sterling area. Chamberlain adopted a system of “imperial preference” (preferential tariffs for colonial products) in 1932, which boosted trade within the Empire.

During World War II Britain spent $30 billion, a quarter of its net wealth, on the war effort. Foreign assets worth $5 billion were sold and foreign debt obligations of $12 billion were accumulated. In March 1941, the US Congress enacted the Lend-Lease Act which permitted the President to “sell, transfer, exchange, lease, lend or otherwise dispose of material to any country whose defence was deemed vital to the defence of America”. This enabled Roosevelt to send to Britain unlimited war supplies without charge. In practice Britain continued to “pay” for most of her arms by surrendering the remains of her export trade assets to the USA. By 1945 exports were less than a third of the 1938 figure. In 1946 Britain spent 19 percent of her GNP on defence and large amounts on international relief programmes. It had to rely on large post-war loans from the USA.

The Rise of Trade Union Power

In 1900 the trade unions created the Labour Party to promote legislation in the direct interest of labour and to oppose measures having an opposite effect. The unions owned the Labour Party. They directly sponsored a hard core of Labour MPs and contributed about three-quarters of the party’s national funds and 95 percent of its election expenses. The party constitution, by a system of union membership affiliations expressed in block votes, made unions the overwhelming dominant element in the formation of party policy.

In 1906, Parliament passed the Trade Disputes Act, which gave unions complete immunity from civil actions for damages arising from acts alleged to have been committed by or on behalf of the trade unions. In effect, it made unions impervious to actions for breach of contract, though the other parties to a contract, the employers, could still be sued by the unions. It made a trade union a privileged body exempted from the ordinary law of the land.


The Trade Union Act of 1913 legalised the direct spending of trade union funds on political objectives (i.e. the Labour Party) and laid down that union members with other party affiliations had to “contract out” of their political dues.

In the 1970s, growing union power was exerted in various ways. The unions introduced new forms of “direct action”, including “mass picketing”, “flying pickets” and “secondary picketing”. They used these devices to destroy a Conservative Party government in 1974. The ensuing Labour Party government pushed through Parliament a mass of legislation extending union privileges: the Trade Union and Labour Relations Acts of 1974 and 1976 and the Employment Protection Acts of 1975 and 1979. These acts extended the immunity of unions and obliged employers to recognise unions, to uphold “closed shops” and to provide facilities for union organisation. The effect of this legislation was to increase the number of “closed shop” industries and to push unionisation above 50 percent of the work force. It also, in effect, removed all constraints on union bargaining power.

In the early months of 1979, under chaotic leadership, the uninhibited unions effectively destroyed their benefactor, the Labour government. Its Conservative Party successor under Margaret Thatcher introduced several abridgements of union privileges in the Employment Acts of 1980 and 1982. Excessive union legal privilege and political power contributed to Britain’s slow growth in several ways: it promoted restrictive practices, inhibited the growth of productivity and so discouraged investment; it greatly increased the pressure of wage inflation, especially from the 1960s onwards; and it had a cumulative tendency to increase the size of the public sector and the government’s share of GDP.

Britain had traditionally been a minimum-government state. The census of 1851 registered fewer than 75,000 civil employees, mostly customs, excise and postal workers, with only 1,628 manning the central departments of civil government – at a time when the corresponding figure for France (1846) was 932,000. In the century that followed, the proportion of the working population employed in the public sector rose from 2.4 percent to 24.3 percent in 1950. In the periods when the Labour Party was in power, the proportion of GNP accounted for by public expenditure rose to 45 percent in 1965, 50 percent in 1967, 55 percent in 1974 and 59 percent in 1975.

After the Conservative Party won the election in 1979, public borrowing and spending were significantly restrained. This financial discipline, combined with the impact of the North Sea offshore oilfields (which made Britain self-sufficient in oil by 1980 and an exporter by 1981), stabilised the economy and raised productivity to competitive levels. By 1983 the British economy was slowly starting to recover. (See Paul Johnson, 1983, A History of the Modern World – from 1917 to the 1980s, London: Weidenfeld and Nicolson, pp.600-604)

Labour’s Leftist Big Government

After World War II the UK took a sharp turn to the left. The Conservative Party under Winston Churchill was voted out in 1945 and replaced by the Labour Party under Clement Attlee, which immediately started a comprehensive interventionist strategy.

After its electoral triumph in 1945 the Labour Party methodically carried out its nationalisation programme and laid the foundations of Britain’s welfare state. Much of the welfare-state edifice was based on the Beveridge Report, written in 1942. It pledged a war on the “giant evils” of squalor, ignorance, want, idleness and disease. In time, a thicket of entangled benefits emerged: receipt of some allowances made people eligible for others. These measures did much to reduce abject poverty, but little to stop the growth and sprawl of unaffordable welfare-state benefits: housing benefits, unemployment benefits, disability benefits, health benefits, child benefits, age benefits, etc.

The Labour Party introduced a National Health Service so that health and medical facilities could be made available to every person without regard to their ability to pay. Although no one was compelled to join the NHS, more than 90 percent of the population and of the doctors were brought under the scheme. The Labour Government also set up a comprehensive cradle-to-grave scheme of social security. The system provided protection against sickness, unemployment and old age – supplemented by maternity grants, widows’ pensions and family allowances. These programmes were subsequently augmented with the extension of educational opportunities at school and university level.

Because Britain was much less affected by war damage than continental Europe, it emerged from the war as the biggest European economy. In the 1960s and 1970s, however, it became apparent that Britain was suffering from some kind of wasting disease. The British share of world trade showed a continual decline: in 1955 Britain still generated 20 percent of developed-country exports; by 1977 its share had declined to only 10 percent. In 1939 Britain’s standard of living was second in the world only to that of the USA. By 1980 it was trailing all the European Community countries, as was the productivity of British workers. By any conceivable comparative standard of measurement, the British economy had performed badly and the gap with continental economies widened.

The major problem proved to be its strike-prone labour force. Trade unions could not be sued for breach of contract and wildcat strikers even continued to receive social security benefits. Industrial plants became obsolete as a result of chronic under-investment. Britain struggled chronically with the weakness of its currency and a deficit in its balance of payments. In a sense the British were fighting over the eggs while neglecting to feed the hen. To make the outlook worse, it transpired that Britain, of all the advanced Western states, had the smallest proportion of its young people in secondary and higher education. It also had the smallest percentages in the fields of science and technology.

Britain experienced serious problems with its nationalised industries. In contrast to private business, where the system of profit-and-loss accounting as well as bankruptcy law exercises a crude but effective discipline, public enterprises had no comparable control system. If there are losses, no one goes bankrupt. If the enterprise or industry is in the red, management can either increase prices (because it has a monopoly) or receive cheap credits or subsidies from government.

By 1976, with a large proportion of its population virtually on the dole, the UK was forced to borrow money from the International Monetary Fund to protect the pound and to stay afloat. As a condition for the loan, the IMF required a sizeable cut in public expenditures. Labour Prime Minister Callaghan told the annual Labour Party Conference: “We used to think that you could just spend your way out of a recession to increase employment by cutting taxes and boosting government spending … that option no longer exists … and insofar as it ever did, it worked by injecting inflation into the economy”. (See Yergin & Stanislaw, The Commanding Heights – The Battle Between Government and the Market Place That Is Remaking the Modern World, Simon & Schuster, New York, 1999, p.104)

By the end of 1979 the country was in serious crisis. Public-sector employees struck, hospital workers stopped working, medical care had to be severely rationed, garbage was piling up in the streets, striking grave-diggers refused to bury the dead, truck-drivers went on strike, trains ran irregularly and coal miners went on strike. Finally, the Labour Government fell on a vote of no confidence.


Thatcherism

When the Conservative Party under Margaret Thatcher came to power in 1979, the unproductive inefficiency of Britain’s nationalised industries and its disruptive labour relations had reached a level of total unacceptability. What essentially needed to be changed was the political culture and socio-economic mindset of the country.

As Leader of the Opposition from 1974 onward, Thatcher became known as a strong free-market supporter. She carried a copy of Hayek’s The Constitution of Liberty in her briefcase. Assisted by Keith Joseph, she campaigned across the country against all the manifestations of statism and the Keynesian-collectivist mould. The Institute of Economic Affairs (IEA), led by economists Ralph Harris and Arthur Seldon, provided much of the research data. The IEA also provided a platform for two ground-breaking economists on the international scene: Friedrich von Hayek of the free-market “Austrian School” and Milton Friedman of the “Chicago Monetarist School”. Hayek called for a shift back from Keynesian macro-economics and the world of the multiplier to micro-economics and the world of the firm, where wealth was actually created.

In their election campaigns Margaret Thatcher and Keith Joseph sought “conviction politics” instead of “consensus politics”. They set out to challenge the entire consensus upon which the mixed economy rested, arguing that the economy was “over-governed, over-spent, over-taxed, over-borrowed and over-manned”; that “the private sector is the indispensable base on which all else is built”; and that the “... wealth-creating, job-creating entrepreneur and the wealth-creating, job-creating manager” are essential. Capitalism was presented as “... the least bad way yet invented”.

When she became Prime Minister in 1979, Margaret Thatcher, the daughter of a grocery-store owner, Alfred Roberts, told the world: “I owe almost everything to my father – particularly integrity. He taught me that you first sort out what you believe in. You then apply it. You don’t compromise on things that matter”. She said her father imparted to her essential principles and examples – about hard work, self-reliance, thrift, duty and standing by your convictions even when in a minority. She had gone to Oxford University and graduated in chemistry, and later studied for the bar and became a lawyer, specialising in patents and tax.

During the 1980s much of Margaret Thatcher’s programme became known as Thatcherism – a combination of privatisation, patriotism, hostility to trade unions and above all a belief in people taking responsibility for themselves instead of expecting the state to take responsibility for them. She tirelessly propagated the traditional qualities of self-reliance, diligence, thrift, trustworthiness and initiative. She laid the foundations for the recovery of the British economic system and supervised momentous political and social changes. She incessantly argued that government was doing too much. She set out to replace the “nanny” state with its “cradle to grave coddling” with the rewards of an “enterprise culture”.

She had to contend with an inflation rate of 20 percent and interest rates of 16 percent, with enormous pay increases earmarked for public-sector workers, state-owned companies draining money out of the Treasury, monopolistic nationalised industries and monopoly trade unions. She introduced huge and controversial cuts in government spending and the programmed public-sector pay hikes had to be rolled back. As Prime Minister she was as unpopular as any prime minister had been since the start of opinion polling.


In the general election of 1983, she won by a huge landslide, with a 144-seat majority. She was now in a position to pursue a full-blown programme of Thatcherism: a rejection of Keynesianism, a constraining of the welfare state and government spending, a commitment to the reduction of direct government intervention, a sell-off of government-owned businesses, a drive to reduce high tax rates and a commitment to reduce the government’s deficit. A big hurdle still remained: the overwhelming power of the unions. In 1984 she took them on. The unions were, at the time, receiving funds from Libya’s Gaddafi and from the Soviet Union. After a year the miners’ union capitulated and decades of labour protectionism came to an end.

The next step was the privatisation of the nationalised industries: British Telecom, British Gas, Heathrow Airport, North Sea Oil, British Petroleum, British Steel. In the process the Exchequer Account was filled with billions to reduce government debt. Thatcher believed in privatisation to achieve her ambition of a capital-owning democracy – “… a state in which people own houses, shares, and have a stake in society and in which they have the wealth to finance charity and to pass on to future generations”. Thatcherites disbelieved in government knowledge, as “… governments enjoy no unique hotline to the future”.

State-owned companies proved in practice to be highly inefficient, inflexible, politically pressured to maintain and expand employment far beyond what is needed, unable to resist the wage pressure from public-sector unions and thus becoming major generators of inflation. They piled up huge losses which they solved by “recourse to the bottomless public purse”. What was missing was the discipline of the market’s bottom line.

When exchange controls were scrapped by the Conservative Government, the free flow of capital opened the City to broader international markets. The “Big Bang” reforms introduced by Margaret Thatcher in the mid-1980s modernised the City of London’s financial practices and lured a host of big American banks to London. A plethora of financial regulators was replaced by a single authority, the Financial Services Authority (FSA), which oversaw all of London’s financial markets. The City’s old “Square Mile” expanded to Canary Wharf in the docklands area of east London. Heathrow became the destination of more than 70 million passengers per year as London became a hub in areas such as fund management and derivatives trading.

Margaret Thatcher’s third electoral victory in 1987 was also the beginning of the end of the Thatcher era. She spoke about a new bureaucratic monster rising up in the European Community’s Brussels offices and opposed joining a single European currency. She was accused of a domineering leadership style and of propagating a nationalistic opposition to the idea of a European Community. Her leadership of the Conservative Party was contested from within and after she withdrew her candidature, John Major succeeded her as Prime Minister.

Margaret Thatcher’s left-wing adversaries described her as self-righteous, divisive, rigid and uncaring. In a 1985 Anglican Church report she was attacked for putting economic efficiency ahead of welfare. She retorted that churchgoing is not about wanting “social reforms and benefits” but about “spiritual redemption”. Thatcher herself was a regular Methodist churchgoer.

In the international arena her contribution was unequalled in the post-war era. Together with Ronald Reagan she encouraged and facilitated the progress of Gorbachev’s Glasnost and Perestroika policies, which ultimately led to the implosion of the Soviet Union. She played an important role in paving the way for the release from prison of Nelson Mandela and the subsequent constitutional change in South Africa. Her strong opposition to Britain joining the euro-zone and to a deeper European Union, though leading to her political demise, proved to be prescient. In years to come, her role as a timeless social and geopolitical visionary is likely to be better appreciated as future historians unpick the factors underlying the decline of the advanced economies of the Western world.

Margaret Thatcher’s legacy proved powerful and lasting. Most of her policies were emulated throughout the world – particularly by left-wing governments after the implosion of the socialist model of the Soviet Union, which started in the late 1980s. The speedily revised policies of Labour governments in Australia, New Zealand and Canada, and subsequently also the “Third Way” policies of Clinton, Blair and Brown, were all based on the principles of Thatcherism and Reaganomics. It is quite remarkable how deftly left-wing socialist labour parties scampered towards the centre-right to steal the clothes of their political opponents. Labour leaders suddenly emerged as would-be free-market proponents. (See Yergin and Stanislaw, op.cit. pp.110-123)

Blair’s Third Way

The last decade of the 20th century and the first decade of the 21st century were characterised by the continued momentum of the free-enterprise model. Some countries, such as Russia, Indonesia and Japan, experienced steep downturns. But the world economy powered on – largely in the wake of the continued growth of the American and European markets. The biggest beneficiary was the emerging “Confucian capitalist” enterprise economy of China.

From 1997 the United Kingdom was governed by “Third Way” policies, which, in essence, were a continuation of the path already marked out by Thatcherism. Even The Economist pronounced Mr. Blair, after a year in office, “a pretty good Tory after all”. New Labour in power proved to be Tory government by other means. The only difference appeared to be one of style: Margaret Thatcher was forced to be confrontational in order to change the political culture, whereas Tony Blair could afford to be smooth, consensus-seeking and inclusive, serving as an effective marketing vehicle for Thatcherism. Old Labourites started accusing Mr. Blair of being merely a Tory in disguise.

The essential point to make is that policy-making in the United Kingdom became less ideologically driven. It became more pragmatic in the sense of seeking evidence-based policy options that work. The left seemed to have liberated itself from outdated preconceptions.

Tony Blair was the main driver of modernising the political approach of the British Labour Party. He realised that economic growth and wealth creation are the key to national prosperity – without which Labour’s traditional social concerns could not be addressed. He realised that competition is the key, and he supported public-private partnerships and competition in public service delivery.

Blair also modernised the Labour Party’s approach to electoral campaigning through reliance on consultant-inspired initiatives – to some critical observers an over-reliance on gimmicky political spin. Although it is perfectly legitimate for political leaders to market their political messages, a problem arises when a successful PR strategy outpaces the implementation of well-managed, quality service delivery. It leads to a gaping mismatch between promises and achievements.


Compassionate Conservatism

While in opposition, the Conservative Party under the leadership of David Cameron adopted the idea of “compassionate conservatism” as the philosophical underpinning of its policies. It was meant to tune into the popular frame of mind: to be “open minded” and to show compassion to society’s unfortunates. But Mr. Cameron’s camp was also keen to show that Conservatives are sceptical about the role of the state. Conservatives do not believe that every problem can or should be solved by state action through a series of “top-down” initiatives. Rather, problems should be dealt with by the “connected society” based on culture, identity and belonging, linked together by decentralised intermediate institutions called the “third sector” – an umbrella term for voluntary, not-for-profit and charitable organisations. These institutions should be called upon to take over the delivery of many state services, especially in extending help to the victims of “state failure”. Conservatives were particularly keen on harnessing the creativity and energies of the “third sector” in delivering programmes to get people off long-term incapacity benefits. In this way they were trying to nibble at the edges of state activity and to confront the immense power of the public-sector unions.

The 2008/2009 Global Financial Crisis

When Tony Blair handed the keys of No.10 Downing Street to his former Chancellor, Gordon Brown, in 2007, the scene was set for the Labour Party’s return to its leftist tilt. The door to this policy retreat from the centre-field was opened by the onset of the Global Financial Crisis in the first quarter of 2008. Mr. Brown called his large stimulation plan “Building Britain’s Future”, but his widespread cash handouts essentially resulted in more “entitlements”. This fiscal laxity inevitably placed a heavy burden on future efforts to repair the gaping hole in public finances. As a former Chancellor, Mr. Brown must have known that the cost of servicing rising debt would rise, and that spending on the unemployed would keep going up, as jobs do not begin to recover until the economy as a whole does. He also ignored the basic rule of public finance that bank bail-outs and off-balance-sheet spending must be accounted for at some point.

Despite Labour’s initial declarations of “prudent” intentions, it went on a spending splurge and continued to increase the number of public-sector employees. The state’s share of GDP rose from 37 percent in 2000 to 48 percent in 2008 and 52 percent in 2009. During Labour’s 13 years in power, two-thirds of all new jobs created were driven by the public sector and their pay increased at almost twice the rate of the private sector. (See The Economist, “The Growth of the State”, January 23rd, 2010, pp.21-23)

Given that the recession was the worst in post-war history, with output down by around 5 percent since the start of 2008, it was generally agreed that urgent monetary and fiscal measures had to be taken. The Bank of England reduced the base rate to a record low of 0.5 percent and started a process of quantitative easing (creating money) by buying £125 billion (equal to 9 percent of GDP) of securities, mainly gilt-edged government bonds.

The UK found itself once more in the grip of a serious financial crisis: over-extended banks, an over-indebted private sector, an overweight public sector and too many citizens depending on government support. It was a country living above its means. The Economist of 27th March 2010 reported that, as a percentage of GDP, the budget deficit was forecast at 14 percent, the public debt at 80 percent and the total debt at around 400 percent. Sterling had lost around 25 percent of its trade-weighted value since the middle of 2007. Growth prospects were limited. The road to recovery clearly required the shrinking of bloated benefit rolls, the reduction of spending on government employees, the curtailment of the regulatory burden on business and a return to a period of sobriety and austerity all round. A change of government became essential.

The Liberal-Democratic/Conservative Coalition

The general election of 6 May 2010 delivered a hung parliament: the Conservative Party 307 seats, the Labour Party 258, the Liberal-Democratic Party 57 and the smaller parties 29. Although it failed to obtain an absolute majority, the Conservative Party made spectacular gains. It retained its power base in England, whereas Wales and Scotland remained Labour territory – indicating the persistence of old “tribal” anti-English and pro-Celtic loyalties. Generally speaking, the Labour Party’s power base remained lower-income, urban voters. The Lib-Dems relied heavily on the support of young voters and adults with flexible party preferences. The election outcome meant the defeat of the Labour Party under Gordon Brown and the ascendancy of the Conservative Party under David Cameron, with the Liberal-Democratic Party under Nick Clegg as “king-maker”.

The Lib-Con coalition government that came to power in May 2010 was faced with an enormous set of challenges: closing the fiscal gap while protecting a still delicate economy; imposing severe clampdowns while securing public support through a distribution of pain perceived to be fair; and striking a balance between current and capital spending. Its first step was to set up a new Office for Budget Responsibility (OBR) in charge of fiscal forecasting. The OBR promptly estimated the deficit for the financial year to March 2011 at £155 billion, or 10.5 percent of GDP.

On June 22nd, 2010, George Osborne, the Conservative Chancellor of the Exchequer, working with Danny Alexander, the Liberal-Democrat at the Treasury in charge of controlling spending, introduced the first Lib-Con budget. It contained a raft of far-reaching and courageous measures, including spending cuts and tax rises intended to turn the country’s deficit into a surplus within five years. The spending cuts included reducing welfare payments, freezing public-sector salaries and reducing spending in all government departments. To strengthen the revenue side, VAT was increased from 17.5 percent to 20 percent and capital gains tax was raised from 18 percent to 28 percent. To encourage economic growth through business enterprise, company tax was lowered to 24 percent.

A Slow Recovery beset by Uncertainties

The coalition government, led by David Cameron, inherited a lamentable legacy: unemployment approaching 8 percent of the workforce, a currency under pressure, slumping business investment, heavily indebted households and a shaky economic outlook.

By the end of 2011, it was clear that the British economy continued to founder, thwarting the coalition’s plans to eliminate Britain’s structural deficit by 2014-15. The growth rate for 2011 would be 0.9 percent and not the 2.3 percent forecast in the June 2010 budget. This state of affairs committed the coalition government to radical fiscal consolidation. Slow growth was thinning tax receipts, and more had to be spent on out-of-work benefits as unemployment was expected to approach 9 percent by the end of 2012. The deficit for 2014-15 was predicted at 4.5 percent of GDP, with the national debt rising to 78 percent of GDP.


The main reasons for the continued weak economic growth prospects were the euro-zone survival struggles and the overhang of the previous government’s debts. The coalition government was expected to lose more support from public-sector workers as a result of its decision to freeze their pay for two years, followed by a 1 percent cap on pay rises for the two subsequent years. The government also faced mass strikes by teachers and others as a result of its decision to make public-sector staff work longer, pay more for their pensions, accept a retirement age raised from 60 to 65 and have their pensions based on career-average salaries rather than on final pay. The Tory Chancellor was taking on a public sector that had grown fat during the free-spending Labour years.

Despite the courageous efforts of Chancellor George Osborne, the British budget deficit for 2011 climbed to 10 percent of GDP. Only the USA, Greece and Ireland had bigger deficit ratios during the same period. The Bank of England launched several rounds of quantitative easing and kept interest rates at a low of 0.5 percent. Inflation crept up to over 4 percent, exceeding the central bank’s target of 2 percent. It was understood that the Bank of England aimed at allowing nominal GDP growth (real growth plus inflation) at a rate that would be consistent with its inflation target.

The coalition government introduced a £250 billion National Infrastructure Plan to 2015 and beyond. The plan provided for increased annual contributions by government, with institutional investors such as pension funds stumping up at least £20 billion annually. The projects were aimed at improving railways, power stations, roads and internet access to boost short-term economic activity and longer-run productivity. It was argued that laying out a pipeline of projects would help cut project costs and delays. The government expected the private sector to finance two-thirds of projects for the period up to 2015. It also made use of a device to raise money off the balance sheet, known as the “private-finance initiative” (PFI), and initiated an investigation into giving local authorities more flexibility to borrow against future tax revenues – provided that such loans be used to finance infrastructure projects as stimuli for job-creating economic growth.

Innovation and infrastructure development were once Britain’s main strengths. The Victorians invented the railway and exported it to the rest of the world. London still flushes its waste into a 150- year old sewerage system. The long lifespan of these schemes shows the benefits of sound infrastructure investment.

The coalition government has been chipping away at welfare benefits: introducing benefit caps so that no household receives more than the average working wage, holding rates of increase below the inflation rate and merging several different benefits and allowances into one payment. These squeezes have caused rows and protests, but a growing proportion of the electorate supported the conviction that a less generous system would encourage people to stand on their own feet. However, politicians have been careful not to meddle with the pensions, benefits and perks of the aged. Britain faced the additional problem that its welfare system was still largely paid for out of general taxation and was resistant to radical change. Several North European countries had, over several decades, introduced contributory social security systems based on the principle that welfare should be an insurance against hardship, not a way of life.

Trimming the British welfare state proved to be a huge political challenge, since close to a majority of voters were dependent on the largesse of the public purse. Leftist commentators habitually accuse the Tories of being privileged, aloof and hostile to public services, and of being subject to bouts of cruelty and nastiness. In such a political climate, essential austerity measures, which require the co-operation and support of all sides of politics, fade out of reach.

By the end of 2011 the employment picture remained forbidding. Private-sector job growth was weak and public-sector jobs were in continuous decline. In excess of 2.6 million people, representing 8.3 percent of Britain’s potential workforce, were unemployed. Many had languished on benefits for years – a sign of systemic unemployment taking hold.

Benefit reform became an essential priority: capping the amount paid to a single family, getting more people off handouts into long-term jobs and keeping them there. Job seekers had to be assisted to acquire trade skills and to learn how to use the internet and personal networks to sniff out local jobs. The payment-by-results system required an overhaul because contractors were missing their targets and abusing the funding pots: placing one unemployed person in a “sustainable” job earned a contractor around £4,000. Welfare support needed to be turned into a last resort, not a feeding trough for exploiters nor a way of life for work-shy elements.

By early 2012 the British economy remained sclerotic. Unemployment stood at 8.4 percent, inflation hovered around 4 percent and the prospects for economic growth remained poor. The governor of the central bank, Mervyn King, predicted that the economy was likely to zigzag in and out of positive territory for at least the whole of 2012 due to continued euro-area distress, poor productivity growth and limited business expansion. The Economist’s economic growth forecast for 2013 stood at 0.75 percent, amid continued deficits and uncertainty.

London as Financial Centre

London has been at the heart of a great overseas trading and investment network for centuries. In the early 1960s, London’s status as a financial centre was in gradual decline, reflecting Britain’s waning importance in the global economy. Then the American government helpfully imposed regulations and tax levies that encouraged investors to hold a lot of dollars offshore. These interventions enabled London to develop a lucrative offshore lending business. Over the years, London built on that opportunity by welcoming foreign market-makers and by offering a regulatory structure that seemed more appealing than those on offer in Paris or Frankfurt. Favourable tax laws encouraged the global elite to spend part of their time in Britain.

By 2002 London, as a financial centre, ranked second only to New York. It scored very high on the key criteria that global financial firms look for: lots of skilled people, ready access to capital, good infrastructure, attractive regulatory and tax environments, perceived low levels of corruption, an accessible location and, naturally, proficiency in the language of global finance in the home of English itself. In 2007 the financial services of London accounted directly for around 20 percent of British GDP and over 30 percent of its tax revenue. It was claimed that London surpassed New York in structured finance and new stock listings. It accounted for 24 percent of the world’s exports of financial services (against 40 percent for all of the USA). It had a two-thirds share of the European Union’s total foreign-exchange and derivatives trading and 42 percent of the EU’s share trading. The London Stock Exchange (LSE) was claimed to be the world’s “most international capital market by a considerable margin”. (See Julie Sell, “Magnets for Money” – a special report on financial centres, The Economist, September 15th, 2007, pp.3-11)


In 2012, The Economist published another Special Report on London entitled “On High”. It argued that the biggest threat to London’s future was its relationship with its hinterland: “Three groups of people are particularly unpopular in Britain at the moment – rich people, bankers and immigrants. Since London depends on them for its prosperity, policies aimed at making life harder for them will hit the capital”. (op.cit. p.15)

Since 2010 the tax system has become somewhat less friendly to well-off people, with the top income-tax rate having gone up to 45 percent. Hostility to the banks was aggravated by their role in causing the Global Financial Crisis and by the many billions subsequently required to prop them up. The regulatory regime was tightened and bankers’ bonuses were limited to 100 percent of basic salary.

In recent times attitudes towards foreigners have hardened, although British-born Londoners are friendlier towards migrants than people in the rest of the country. The government subsequently turned its main focus to the ability of foreign students to do paid work – despite the fact that the foreign-student industry is worth about £15 billion a year. Getting visas has become harder. But on the whole London still has a lot going for it, and Britain depends on London’s prosperity. Wholesale finance is one of the few industries in which Britain has large net export earnings – usually close to 3 percent of GDP per annum. A further 15 percent of GDP is typically contributed by related services, such as law, accounting and consulting.

Evaluation

What took around three centuries to build was dismantled by decolonisation in around three decades. What had been based on Britain’s commercial and financial supremacy in the seventeenth and eighteenth centuries and her industrial strength in the nineteenth, crumbled under the burdens of two world wars. The great creditor became a debtor. Since the 1950s, the great movements of population changed their direction. Emigration from Britain gave way to immigration to Britain. So controversial has this “reverse colonisation” been that successive governments have imposed severe restrictions.

Despite the decline of its colonial empire, Britain has continued to serve as a trend-setting force in today’s world. It has a sound legal system providing a secure foundation for economic interaction based on generally accepted business practices, and a judiciary independent enough to guarantee the rule of law. Levels of educational attainment are high enough to meet most of society’s need for technical and managerial skills. During the past two decades industrial and labour relations were reasonably stable. Respect for law and order, a transparent system of rules, effective business networks and support systems still provide an enabling environment for value-adding business enterprise.

Niall Ferguson expressed his assessment of the legacy of the British Empire as follows: “Without the spread of British rule around the world, it is hard to believe that the structures of liberal capitalism would have been so successfully established around the world. Those empires that adopted alternative models – the Russian and the Chinese – imposed incalculable misery on their subject peoples. Without the influence of British imperial rule, it is hard to believe that the institutions of parliamentary democracy would have been adopted by the majority of states in the world, as they are today … Finally, there is the English language itself, perhaps the most important single export of the last 300 years. Today 350 million people speak English as their first language and around 450 million have it as their second language. That is roughly one in every seven people on the planet.” (See Ferguson, Empire – How Britain Made the Modern World, Penguin Books, London, 2003, pp.365-366)


Today, the City of London remains a source of comparative advantage. It is the world’s leading centre for cross-border bank lending and marine insurance. It accounts for 40 percent of global turnover in foreign exchange and dominates the market for bespoke interest-rate derivatives. It is second only to America as a home for hedge funds, private equity and “full-service” investment funds. Its comparative advantage also includes its time zone: it sits between the close of Tokyo’s market and New York’s opening. The use of English gives London an edge over Frankfurt, Paris or Milan as Europe’s main financial centre. Its legal system is a boon where parties are from different countries and contracts need to be drawn up within the jurisdiction of a stable body of commercial law and experienced judges. Because Britain itself is open to foreign direct investment, London has an edge in arranging cross-border deals. London’s long-term prospects depend on its ability to sustain these attractions.

British economic pride has been severely bruised by the large accomplice role played by its financial institutions in precipitating the Global Financial Crisis of 2008/2009. Its large and influential financial services sector stands accused as co-respondent in the forum of prudential international finance. The financial sector has stumbled and loud calls have been made from many sides for bold re-regulation. The world of finance domiciled in both New York and London exercised inordinate power and influence over the ebb and flow of international finance. Sometimes it acted as a force for good, but also, in recent times, as a force for manipulation and destruction.

The Economist of October 18th, 2008, gave expression to its strong opposition to the re-regulation of the financial sector: “Heavy regulation would not inoculate the world against future crises. Two of the worst in recent times, in Japan and South Korea, occurred in highly rule-bound systems. What’s needed is not more government but better government. In some areas, that means more rules. Capital requirements need to be revamped so that banks accumulate more reserves during the good times. More often it simply means different rules, central banks need to take asset prices more into account in their decisions. But there are plenty of examples where regulation could be counter-productive …. Indeed, history suggests that a prejudice against more rules is a good idea. Too often they have unintended consequences, helping to create the next disaster, … If regulators learn from this crisis, they could manage finance better in the future. Capitalism is at bay, but … for all its flaws, it is the best economic system man has invented yet.”

Another perennial source of tension is Britain’s schism between taxpayers and tax-eaters. Britain’s north has long belonged to Labour and its south (outside London) to the Conservatives. Of the 197 MPs representing the English south beyond London’s boundaries, just 10 are Labour. In contrast, the Tories hold only two seats in the north-east and only one in Scotland. In the north there are more public-sector workers and they tend to vote Labour. In the south there are more rich people and they tend to vote Conservative. This political geography even trumps social class and employment patterns: higher income earners in the north vote Labour and poorer communities in the south vote Conservative. Scots and Welshmen tend to support Labour, while Englishmen tend to be Conservative.

When the 1997-2010 Labour government increased public spending in the north, it strengthened its support base. When the Conservative-led coalition began to cut public-sector jobs, it also strengthened Labour’s position in the region. The same happens when it cuts social or welfare benefits, which are a large part of incomes in the region. At every election, the Liverpool offices of Unite, a trade union with some 250,000 public-sector members, become the engine room of the local Labour campaign. The Midlands and the North with their strong collectivist loyalties are Labour’s “client states”. (See The Economist, briefing entitled “Divided Kingdom”, April 20th, 2013, pp.22-24) The UK’s regional divide is likely to persist because additional public-sector cuts are essential. The south, in contrast, is historically less dependent on heavy industry and has higher levels of private-sector employment in Britain’s successful service and knowledge industries. In the left-wing northern cities, Thatcherism stands for austerity, impoverishment and southern contempt of the north. A Labour government in Whitehall means opening the sluice gates of public-purse largesse – an attractive option for an electorate facing a continuous economic slump and a drop in living standards.

The Economist of April 27th, 2013, p.46, reported that Britain’s economic performance over the past five years ranked second worst in the G20 club of big economies – only Italy did worse. It further stated that the cupboard of marketable government business assets is empty. There is not much left to sell in order to reduce headline borrowing. The remaining outfits such as Royal Mail, the Royal Mint, the Land Registry, the Met Office, Ordnance Survey, Channel 4, the BBC and the state’s share in the Eurostar rail link are generally lacking in commercial value or are valued too highly by the public to sell. If it can find buyers, the government might be forced to flog even these unattractive assets. (See The Economist, “The Last of the Silver”, April 27th, 2013, p.47)

In moving forward, the UK is confronted with important choices in relation to the strengths and weaknesses of its economy. In 2012 the sum of British goods and services traded was around 70 percent of GDP – at £1 trillion. Goods made up about 62 percent of Britain’s exports – mainly machines, pharmaceuticals, cars and oil. But Britain imported more than it exported. These figures reflect the drastic deterioration of Britain’s current account since the late 1800s, when around 35 percent of global exports were British. British steel and coal exports fuelled a global railroad boom and Britain’s total exports were more than double its imports, producing foreign credits rather than debts.

Although most of Britain’s export destinations today are other European countries, non-EU trade is equally significant – for both goods and services. The largest single destination for British exports is America, followed by France, Germany, the Netherlands and China. Britain’s EU trade is being nudged down the list as links with the Americas and Asia increase.

In view of the moribund state of the European Union, the UK has to direct more attention to its own domestic growth potential, which lies outside the narrow perimeter of the City of London: its domestic businesses, large, medium-sized and small. Much needs to be done to loosen employment and planning restrictions, to unleash business development in post-industrial cities, to restrain energy and transport costs and to promote access to cheap funding for SMEs. The UK has to rediscover its own small business sector, which accounts for 60 percent of private-sector jobs. The travails of small firms are not limited to tight credit; they are also burdened with untenable bureaucratic red tape. It can take 73 weeks from finalising a project plan to commencing house construction, because regulatory overload has made the construction pipeline too long. The inordinate levels of youth unemployment could be tackled by improving the content of education, closing the gap between the world of education and the world of work.

These adjustments may prove to be easier said than done. Whether and when a sustained recovery will be under way remains to be seen. Once an empire has lost its creative spirit, it is difficult to regain its growth momentum. The UK’s productive resources have been overwhelmed and displaced by entrenched redistributive forces which are not likely to give way in the near future. These unproductive forces have pushed British society into a culture of public-purse dependency that leaves little scope for the productive creativity that carries the base-load of generating taxable income.

Bibliography

Ferguson, Niall (2003) Empire – How Britain Made the Modern World, Penguin Books, London
Henderson, David (1999) The Changing Fortunes of Economic Liberalism, Institute of Economic Affairs, London
Johnson, P. (1985) A History of the Modern World, Weidenfeld and Nicolson, London
Wolfe, Alan (2008) The Future of Liberalism, Knopf, New York
Yergin, D. & Stanislaw, J. (1999) The Commanding Heights – The Battle Between Government and the Market Place That Is Remaking the Modern World, Simon & Schuster, New York
The Economist, “Magnets for Money”, A Special Report on Financial Centres by Julie Sell, September 15th, 2007, pp.3-11
The Economist, “Briefing on the British Economy”, March 27th, 2010, pp.23-24
The Economist, “On High – A Special Report on London”, June 30th, 2012, pp.3-16
The Economist, “Divided Kingdom”, April 20th, 2013, pp.22-24
The Economist, “Britain’s Economy – Just Better than Nothing” and “The Last of the Silver”, April 27th, 2013, pp.46-47


8. The Distinctive Nature of the German Model (April 2013)

Although the German peoples had lived in north-west Europe since Roman times, the various German principalities were only unified into a nation-state by Bismarck in 1871. Within a few decades Germany became the best-educated nation in the world and the first to achieve universal adult literacy; with its universities among the world’s finest in virtually every discipline, it was a country where intellectual achievement was justly measured and treated with respect. The defeat of Germany in the First World War brought in its wake a huge war reparations burden, the outbreak of hyperinflation, the rise of Hitler’s Nazi Party, the scapegoating of Jews and the Holocaust, the massive death and destruction of the Second World War, the division of Germany into two parts by the victorious Allied Powers after the war, and the final re-unification under Chancellor Kohl in 1990 after the fall of Soviet Russia’s Iron Curtain.

In the period since the Second World War, Germany’s work ethic, export success, fiscal discipline and wage restraint enabled the country to become the engine room and financial anchor of the euro zone. Now, at the end of the first decade of the 21st century, Germany is confronted with two political-economic challenges of momentous importance. The first deals with safeguarding its economic prowess in the face of the far-reaching changes in its energy provision initiated by the coalition government between the Greens and the Social Democrats (SPD) under Gerhard Schroeder around the turn of the century. The other challenge is avoiding being swamped by the weight of its debt-ridden, economically stagnant euro-zone neighbours expecting Germany to share their debt burdens and to stand behind their bankrupt banks and profligate governments.

The Energiewende, as the Germans call their energy policy U-turn, involves a shift from nuclear and fossil fuels to renewables. The dislocations and distortions caused by this enforced transformation are now casting a shadow of uncertainty over the country’s manufacturing prowess. The problems involved in this transformation of energy provision and pricing are now exacerbated by the fall-out of the Global Financial Crisis (GFC), which started in 2008. As the GFC contagion spread from the USA and the UK into the euro zone, it exposed the broken balance sheets of the major European banks. The already heavily indebted European governments, Germany’s included, had to step in with additionally borrowed bail-out funds to rescue the banks.

As the GFC gradually transformed into a prolonged economic contraction, Germany, together with its northern neighbours, managed to return to a modest growth path. This was achieved by maintaining their export markets and repairing their public balance sheets through austerity programmes. Now Germany is being urged to stand behind the euro by accepting the “mutualisation of sovereign debt” and to shore up the banking system through “collective action”. These measures, in effect, rely on German guarantees because the southern members of the euro zone face mountains of debt and sclerotic growth prospects. To add insult to injury, Moody’s – an American ratings agency – has already sounded a warning that the AAA status of industrious Germany could be jeopardised by looming bail-outs.

Early History

According to Norman Davies’s history of Europe, the German tribes were identified in southern Scandinavia as Germani by Posidonius in 90 BC, by which time they were well into the task of settling the lands that have borne their name ever since. In the west they were bordered by the Celts and in the east by the Slavs. They had traded with the Mediterranean world since Bronze Age times and adopted Roman farming methods, including viticulture. Their clans were united by kinship and ruled in conjunction with a democratic assembly of warriors. (See Norman Davies, Europe – A History, Oxford University Press, 1996, pp.222-225)

Christianity, the “imperial religion” of the Roman Empire, had also advanced into the German Rhineland and Elbe areas by the sixth century. During the Middle Ages (c.750-1270), the Roman Empire gradually disintegrated and the power of Rome was displaced by the Empire of Charlemagne based in Aachen (c.768-814). After his death his empire was divided between his grandsons at the Treaty of Verdun (843), which created the core of both the future France and the future Germany.

The Middle Ages were characterised by the hierarchical feudal system, a dense network of contractual relationships which linked the highest to the lowest in the realm. Feudal contracts were recorded in charters and indentures. At the local level the fiefs of princes and barons were reflected in arrangements of manorial estates. The lord of the manor granted a plot of land to each of his serf families in exchange for service. In reality the feudal system was ridden by a confused mass of conflicting dependencies and loyalties, contested privileges and disputed rights. The result was a patchwork of authorities and principalities around stone castles, defended by cavalries and garrisons. Together with Christianity, the chivalry of knighthood formed the twin pillars of the medieval mind. It moulded patterns of interaction, attitudes to property, the rule of law and relations between governmental authority and the individual. Germany was the most feudal country of all. (See Norman Davies, op.cit. pp.313-315)

Religious Tensions

The “Protestant” revolt against the selling of “indulgences” by the Catholic Church in Rome was started by Martin Luther, an Augustinian monk and Professor of Theology at Wittenberg. He preached “justification by faith alone”, not through the intermediation of the Church, and nailed his sheet of 95 theses against indulgences to the door of Wittenberg’s castle church on 31 October 1517. Luther also violated a long-standing prohibition on translating the Bible into a vernacular language. At the time there were several versions of German spoken in the German lands: Germanic dialects had split into High (mountainous, southern) and Low (northern, flatland) varieties. Luther compromised, borrowing an emerging standard “chancellery German” as a base. He infused his translation with words he heard in the streets and towns and consulted widely to identify which dialectal words would be most widely understood. The result was that his translation of Biblical texts was widely read and has remained the most popular German translation up to the present time.

Since the start of the Reformation, conflicting Catholic and Protestant camps formed, culminating in the devastating Thirty Years War (1618-1648), which involved most of the states and rulers of Europe. It ended with the Treaty of Westphalia, which set the ground plan of the European international order for much of the ensuing centuries. Germany lay desolate, its population reduced from 21 million to around 13 million. Cities stood in ruin and districts were stripped of their inhabitants, their livestock and their supplies. A whole generation of pillage, famine, disease and social disruption wreaked havoc and traumatised German culture. Germany lost control of the mouths of its three great rivers – the Rhine, Elbe and Oder. Destitution was accompanied by humiliation. (See Davies, op.cit. pp.567-568)


Ever since the Thirty Years War, Germany remained divided along religious lines. The western and southern parts remained predominantly Catholic while the central, northern and eastern areas remained strongholds of Lutheran Protestantism. These distinctions constantly introduced disruptive differences into German politics and together with the centuries-long political fragmentation into numerous principalities, stood in the way of the development of a national consensus on key issues.

During the Jewish Diaspora, many thousands of Jews settled in German areas in the course of centuries. After the expulsion of Jews from Spain in 1492, the Jewish Diaspora was again set in motion across Europe. Jewish refugees were confined to segregated areas in cities and towns (ghettos) in France, Italy and in central Europe. The Reformation was of huge benefit to the Jews because it broke up the monolithic unity of Catholic Europe. However, the very large numbers of Jews that sought refuge in German-speaking areas gradually aroused strong anti-Jewish feelings.

Martin Luther’s work Von den Juden und ihren Lügen (On the Jews and their Lies), published in 1543, is considered one of the first literary expressions of anti-Semitism. Luther focused on their role as money lenders and their “usuriously extorted” wealth. He described Jews as “intellectually subversive”. Many Jews found refuge in Rhineland Catholic principalities where they lived as Marranos or Conversos (baptised). One expulsion provoked another as Christian bankers and craftsmen got Jews banned. As Jews were driven from Italy, Provence and Germany, they moved further east into Austria, Bohemia, Moravia, Poland, Lithuania and Russia.

The German anti-Jewish iconography was known as Judensau. The most potent and enduring abusive stereotypes depicted Jews as repellent, unclean, sub-human creatures that should be excluded from civilised society. In reality, Jews remained in Germany in large numbers and over many generations also produced individuals who made significant contributions to German society: Marx, Wittgenstein, Mendelssohn, Heine, Meyerbeer, Mahler, Johann Strauss, Freud and Einstein.

Despite Germany’s long tradition of anti-Jewish feeling, Jews felt at home in Germany. It was a society which honoured and revered intellectuality. Jews could move from a yeshivah into one of Germany’s universities where intellectual achievements were treated with respect. Several German Jews were awarded Nobel prizes in medicine, chemistry and physics. The habit of constant occupation is something that is generally deeply instilled in German households and it rubbed off on all newcomers. (See Johnson, P., A History of the Jews, Phoenix Press, 2003, pp.242-310)

The German Unification of 1871

When the German nation-state came into being under the leadership of Prussian Chancellor Bismarck, many parts of the German cultural community were not included: the Germans living in contiguous areas in France, the Alps, Poland, the Baltic states and Austria. The unification of 1871 created a federation of authoritarian principalities, brought together not of and by the people, but for them by the astute leadership of Bismarck. Germans thenceforth tended to associate national fulfilment not with liberal democracy, but with the strong hand of an inspired political leader. The system of executive leadership was not deeply embedded in a democratic political culture based on effective checks and balances. This vacuum left the door open for a bungling, inexperienced, arrogant Kaiser Wilhelm II. What had become the most populous, industrially developed and militarily powerful nation in continental Europe found itself at the mercy of disastrous leadership. German society was not able to furnish itself with the constitutional trappings of maintaining effective and responsible democratic leadership.

The famous German sociologist, Max Weber, wrote the following words after the Kaiser had blundered Germany into World War I: “Bismarck left behind ... a nation accustomed to submit, under the label of constitutional democracy, to anything that was decided for it without criticizing the political qualifications of those who took the reins of power into their hands.”

This perceptive assessment by Max Weber applied a fortiori to the arrival of Hitler’s Third Reich. At the time – the 1920s, 1930s and 1940s – the Germans possessed no nationally meaningful and universally relevant philosophical direction, in the form of a clear sense of purpose which might have inspired, oriented and united public opinion and national aspirations. The Germans suffered from a perennial uncertainty as to their proper role in world affairs and the ideals that the German state purported to represent. Without such ideals, Germany had to rely on the unifying inspiration of either cultural traditions or nationalistic bravado. The state remained “above” the people rather than “of” or “by” them as a collective embodiment of their values and feelings of moral responsibility. (See Mayer, J.P., Max Weber and German Politics, Faber & Faber Ltd, 1943, pp.58-59)

The First World War

The First World War (1914-1918) was sparked by the assassination of the Austro-Hungarian Archduke in Sarajevo, Bosnia, by Serb nationalists. The explosive international atmosphere that erupted into a global war had been created over several decades by opposing geopolitical, diplomatic and military blocs. Bismarck was able to construct a system that would protect Germany from French aggression with his Dreikaiserbund of Germany, Austria and Russia and later the Triple Alliance between Germany, Austria and Italy. But France’s loathing for Germany drove her into the arms of her historical colonial rival Britain in the Entente Cordiale, which was extended into the Triple Entente of France, Britain and Russia in 1907. The cauldron of rivalries, fears and hatreds steadily escalated to explosive proportions. Within weeks of the gunshots in Sarajevo, Europe’s diplomatic and military restraints broke down. Vienna wanted action against Serbia and received carte blanche from Berlin. Russia’s Imperial Council mobilised support for Serbia, which caused Germany to issue ultimatums, first to Russia, then to France. The British government sent an ultimatum to Germany. The five major European powers then embarked on a catastrophic war which took an estimated 20 million lives and caused around 17 million additional casualties. The instruments of destruction outstripped anything previously known and the war spread across the globe.

When the dust finally settled, Europe lay in ruins. The democratic Western Powers could only survive by calling in the assistance of the USA. The Germans were completely overpowered by the combined forces of the rest of the Western world. Though ruined and ransacked, Germany remained Europe’s most dynamic but also most disgruntled nation-state.

The Weimar Republic

The Weimar Republic was destined to fail. It inherited the politically unintegrated condition of German society, the disappearance of the monarchy which had been the symbol of the nation-state, the psychological consequences of a lost war, an astronomical debt burden of war reparations to be paid to the victors, the destruction of the population’s cash holdings through runaway inflation and, by the end of the twenties, the impoverishment and demoralisation of a world-wide depression.

The governmental apparatus tasked to deal with these challenges consisted of a two-house legislature, a cabinet, a prime minister (Chancellor) and a presidency. The upper or federal house (Reichsrat) was largely advisory. The real source of legislative power lay in the Reichstag, whose members were elected by proportional representation. The President, elected on a direct popular basis, was to be more than a figurehead. Effective political power in practice came to be shared by the Reichstag and the Presidency. The Reichstag could overthrow the Chancellor and the Cabinet by a vote of no confidence, but the President had the power to select the Chancellor and the Cabinet. The President could, in order to preserve safety and public order, “take measures as are necessary” – even to assume all legislative initiative himself.

Initially the balance of power was held by a coalition of three parties: the Social Democrats, the Democratic Party and the Catholic Center Party. By the election of July 1932, the Reichstag included members of eight different groups, with Hitler’s National Socialists holding 37 percent of the popular vote and the Communists 14 percent. As the liberal democratic centre melted away, the economic crisis worsened and the Reichstag became more sharply divided and stalemated. The Reichstag was compelled to abdicate its legislative power in favour of a “constitutional dictatorship” in the hands of the President. On January 30, 1933, President von Hindenburg appointed Hitler as Chancellor. On the night of February 27, 1933, the Reichstag building was set on fire, allegedly by Communists. Acting through the President, Hitler proclaimed a state of emergency. In the March 1933 elections, the Nazis gained 44 percent of the popular vote – as against 14 percent for the Center Party, 18 percent for the Social Democrats and 12 percent for the Communists.

Shortly thereafter Hitler introduced into the Reichstag a measure to enable laws to be enacted by the Cabinet and attained a two-thirds majority to change the Constitution after forcibly excluding more than a hundred opposing members from the Reichstag session. Four months later all parties other than the National Socialists were banned. Upon the death of President von Hindenburg in August 1934, Hitler merged the offices of Chancellor and President into the new office of Reichskanzler und Führer, which he immediately assumed himself. Henceforth Germany was to be Hitler, and Hitler’s police-state apparatus was to be Germany.

The Totalitarianism of the Third Reich

The many brutalities of the Nazi Third Reich are explicable only as the actions of sick and evil men. In retrospect, however, it is clear that a combination of several internal and external factors enabled Hitler to mobilise enough support for the Nazis to capture the commanding heights and levers of power of Germany.

The chief components of a plausible explanation could be found in the following:
- The Treaty of Versailles had stripped Germany of large territories and populations and imposed reparation debts payable over 42 years, secured against the German state railways;
- The humiliation of the French occupation of the Rhineland area in 1923;
- The impact of post-war economic conditions: the hyperinflation of the 1920s followed by the Great Depression which started in 1929;
- The impact of deep-rooted elements of anti-Semitism in Germany, ranging from the Lutheran Judensau to pseudo-scientific race theories of the Aryan Herrenvolk, coupled with popular perceptions of the corrupting influences of Jewish promiscuity, permissiveness and moral degradation of German culture – particularly around what was considered “decadent” Berlin;


- The impact of the emergence of the totalitarian political party structure which Hitler was able to bring to life with his exceptional organisational skills and demagogic oratory, combining a socialist group, the German Workers’ Party, with strong-arm ex-service nationalists;
- The exploitation of the long-range threat of Soviet Communism as a dangerous geopolitical adversary (Weltmacht oder Untergang);
- Hitler’s National Socialist German Workers’ Party (NSDAP) feeding on and fusing the grievances of both the Left and Right factions of the German electorate.

To comprehend the ascendance of the Nazis from the early 1930s in Germany, it is necessary to understand the essential characteristics of totalitarianism as it developed in the Soviet Union, Fascist Italy and Nazi Germany, as set out by Carl Friedrich et al. (Totalitarianism in Perspective: Three Views, New York, 1969):
- Central Ideology. All totalitarian systems are based on a central ideology that dominates all public discourse. The Soviet Union embraced “National Bolshevism” in contrast to the “National Socialism” of the Nazis. Where the Communists turned to Marx’s “scientific socialism”, the Nazis based their theories on eugenics and racial science. In both cases it proved to be pseudo-science.
- Utopian Goals. Totalitarian movements find it necessary to set up Utopian goals to mobilise society, hoping to reach a New Order cleansed of all present impurities. In the Soviet Union they sought to cleanse their state of bourgeois impurities. In Italy it was the ideal of the restoration of a pseudo-historical fascist Roman Empire. In Nazi Germany they sought a Jew-free Aryan paradise lasting a thousand years.
- Overriding Party-State. Once in power the totalitarian party used its own organs or personnel to oversee all existing government positions or institutions. State structures became subservient conveyor belts for executing the party’s dictates.
- Leader Principle. The Fascists in Italy believed “Mussolini ha sempre ragione” – the leader is always right. In Nazi Germany it was called the Führerprinzip, exacting slavish, unquestioning obedience from underlings. It was the centrepiece of both Stalinism and Hitlerism.
- Gangsterism. Just as professional criminal confraternities do, totalitarian elites habitually terrorise both their own members and their victims in order to eliminate rivals. They manipulate the law and use blackmail and extortion to take control of everything around them.
- Bureaucracy. All totalitarian regimes required a vast army of bureaucrats, made up of droves of opportunistic individuals seeking rapid advancement of their own interests and standing by exploiting the platforms provided by the governmental structures.
- Propaganda. Totalitarian propaganda owed much to the techniques of modern mass-media advertising – employing emotive symbols and shameless demagoguery directed at the vulnerable elements of society.
- Aesthetics of Power. Totalitarian regimes enforced a virtual monopoly in propagating an aesthetic environment which glorified the ruling regime and the heroic images of national myths and fantasies.
- The Dialectical Enemy. Totalitarian regimes need to justify and legitimise their own policies and actions by setting up a dialectical enemy as a permanent and ubiquitous adversary to contend with. It serves the purpose of having a scapegoat to blame for everything that is wrong or irksome.
- The Psychology of Hatred. To raise the national emotional temperature, totalitarian regimes constantly beat the drum of hatred against enemies within or without society. In Nazi Germany Communists and Jews headed the bill and alleged saboteurs were mercilessly pursued.
- Censorship. Totalitarian ideology can only operate under the shield of pre-emptive censorship. By controlling all sources of information, unwanted opinions and facts are suppressed. Information that needs to be circulated is carefully prefabricated.


- Coercion and Genocide. Totalitarian regimes have pushed political violence beyond all previous limits, with an elaborate network of political police and security agencies to keep opponents and undesirables under control. Where considered necessary, these elements are eliminated altogether. Mass arrests, shootings and concentration camps are the instruments used to eliminate enemies or to keep potential adversaries in a state of fear.
- Collectivism. Social discipline and conformist behaviour are cemented by all sorts of activity that strengthen collective bonds and weaken individual identity. Hence totalitarian regimes promote state-run nurseries, youth movements, party rituals, military parades and group uniforms.
- Militarism. Totalitarian regimes habitually magnified “external threats”, or invented them, to rally citizens to the fatherland’s defence. Armed forces enjoyed high social prestige.
- Universalism. Marxism-Leninism was always universally orientated, intending to spread its ideas and institutions around the world. The German Nazis also marched to the call of world conquest – morgen die ganze Welt.
- Moral Neutrality. All totalitarian regimes shared the view that their goals justified their means.

In the early 1930s the Nazi Party established itself as the largest single party in the Reichstag. After endless Cabinet crises, Hitler was made Chancellor in January 1933 by President Hindenburg, who hoped by this action to stem the rising influence of the Communist Party. A month later a mysterious fire destroyed the Reichstag building. The Nazis blamed the Communists and won 44 percent of the popular vote at the next election. The Reichstag then passed an Enabling Act granting the Chancellor dictatorial powers for four years. Hitler organised a plebiscite to approve Germany’s withdrawal from the League of Nations, receiving 96 percent support. In August 1934, following Hindenburg’s death, he called another plebiscite to approve his own elevation to the new party-state position of “Führer and Reich Chancellor” with full emergency powers. This time he received 90 percent support. Hitler was now in control. The democratic process had produced a dictator, who used his party’s elite guard, the SS (Blackshirts), to wipe out the Party’s older formation of storm troopers, the SA (Brownshirts). He banned the Communist Party and then dissolved all the other parties.

Hitler’s economic policy combined Keynesian financial management with complete state direction of industry and agriculture. The trade unions were replaced by a Nazi labour front and strikes were outlawed. A state-funded work-creation programme ensured full employment: building the autobahns, launching the Volkswagen and, above all, rearmament. In 1935 Hitler reintroduced conscription, and in 1936 he introduced a Four-Year Plan, including a state-owned steel corporation.

In Mein Kampf Hitler set out his ideas about the German Herrenvolk (master race) and Germany’s right to seek Lebensraum in the East. He divided mankind into a hierarchy of races: culture founders, culture bearers and culture destroyers. He claimed that all great cultures of the past had perished from blood-poisoning, and that each healthy nation needs its own soil. He also believed in the “iron logic” of “racial purity”. Jews were to be excluded from German citizenship and from state employment, Jewish traders boycotted, and marriage and sexual intercourse between Jews and non-Jews forbidden. These measures were clearly defined in the Nuremberg Laws of 1935. The Nazis approved euthanasia for the mentally and genetically handicapped. Nazi propaganda peddled strange notions about Teutonic Knights and created a Hitler personality cult of all that is wise and good. The Party guard, the Schutzstaffel (SS), and the Gestapo (Secret State Police) were used to supplement existing military and police forces. Nazi-run People’s Courts and People’s Judges increasingly absorbed the work of the traditional judiciary. Starting in 1936, some 500,000 German Jews were persecuted and expelled. Apprehension really

mounted after the Kristallnacht of 1938 when Jewish synagogues and shops were damaged or smashed.

In 1938 Hitler started to reach across the German borders when he engineered the Anschluss or merger with Austria, followed by the annexation of the Sudetenland from Czechoslovakia. In 1939 he occupied the rest of Czechoslovakia and then claimed Danzig from the Poles after organising a rapprochement with Stalin. On 1 September 1939 Hitler’s army invaded Poland and started the Second World War, facing a rather impotent array of Western Powers wracked by the fallout of the Depression years.

After the fall of Poland, 13 European countries were due to be overrun: 8 by Hitler and 5 by Stalin. In the summer of 1940, first the Low Countries and then France fell in a matter of weeks under the onslaught of Hitler’s Blitzkrieg. The British Expeditionary Force was soon driven into the sea at Dunkirk. Then came a costly setback when the Nazi air offensive against Britain failed. Hitler then switched to his fatal Russian offensive.

The wholesale onslaught on the Jews in Germany and the occupied territories, the “Final Solution”, was set in motion under the signature of Göring on 31 July 1941. The policy was annihilation under the official euphemism of “resettlement”. As the German armies advanced into the heart of the former Tsarist “Pale” territories, the notorious Einsatzgruppen rounded up Jews by the thousands, driving them to pits and gullies to be shot en masse. In one such action near Kiev around 70,000 Jews were killed.

Adolf Eichmann was head of the section tasked with the Jewish action. Decisions were taken to proceed with experiments using Zyklon-B gas; to create a number of dedicated death camps at Chelmno, Belzec and Treblinka; to expand Nazi concentration camps in Poland, notably Auschwitz and Birkenau; to consult the best German firms regarding crematorium design; and to draw up timetables and rolling-stock arrangements for international railway transports. The ultimate death-toll will never be known, but at the Nuremberg Trials after the war an estimate of 5.85 million was made. Although no accurate figures could be obtained, no responsible estimate has brought the figure below 5 million. (See Norman Davies, op.cit. p.1023)

The Compounded Oppression of Two Dictatorships

Analysts often overlook the compounded suffering meted out to the peoples of central and eastern Europe by two interacting totalitarian dictatorships: those of the madmen Joseph Stalin and Adolf Hitler. Using similar tactics of deceit and oppression, their actions and strategies brought untold misery and death to millions of people from the Atlantic Coast to deep in the Russian hinterland. Their combined toll of human lives could well add up to around 90 million people.

The grim statistics of the loss of human lives during the Second World War tell their own tale. Military losses on the part of the Allied powers (USA excluded) totalled 10,026,945, with the Soviet Union losing 9 million, Britain 264,443, France 213,324 and Poland 123,178. The Axis Powers’ military losses totalled 4,335,232, with Germany losing 3,500,000, Romania 300,000, Italy 242,232 and Hungary 200,000.

Civilian losses on both sides totalled an estimated 27,000,000. Of those, an estimated 16 million died in the Soviet Union, 6 million in Poland, 1.2 million in Yugoslavia, 350,000 in France, 800,000 in Germany, 290,000 in Hungary and 200,000 in Romania. The Netherlands lost 200,000, Belgium


76,000, Italy 153,000 and Britain only 92,673. The huge losses suffered in the Soviet Union include the losses suffered by Ukrainians, Byelorussians, Russians, Poles and Balts. As some of the figures quoted are estimates, they should only be used as general indicators of the scale of losses involved. (See Norman Davies, op.cit. p.1328)

To bring the enormous suffering experienced by the peoples of the Soviet Union into perspective, one should also look at the additional misery inflicted by the Stalin regime. The following table sets out the numbers of people killed in the period 1917 to 1953.

Categories of People Killed in Soviet Russia 1917-1953 (excluding war losses 1939-1945)

                                                        Minimum        Maximum
Civil War and Volga Famine                            3,000,000      5,000,000
Political Repression, 1920s                     tens of thousands
Forced Collectivization after 1929                   10,000,000     14,000,000
Ukrainian Terror-Famine, 1932-3                       6,000,000      7,000,000
Great Terror purges (1934-39)                         1,000,000
Deportations to the Gulag, to 1937                   10,000,000
Shootings and random executions, 1937-39              1,000,000
Deportations from Poland, Baltic States
  and Romania, 1939-1940                              2,000,000
Foreign POWs                                          1,000,000
Deportations to the Gulag, 1939-45                    7,000,000
Deportations of nationalities (Volga Germans,
  Chechens, Tartars)                                  1,000,000
Post-war screening of repatriates                     5,000,000      6,000,000

Gross total (median estimate)                      c. 54 million

(See Norman Davies, op.cit. p.1329)

The human misery caused under the leadership of Stalin and Hitler is immeasurable. On Stalin’s side, millions of hapless victims were annihilated under quota requirements, random denunciations and systematically executed purges. On Hitler’s side, the hapless victims were systematically eliminated on an industrial scale. Under the aegis of both, millions of soldiers and civilians were killed by military equipment more destructive than the world had ever seen.

Today’s Germans pick over their Nazi past remorsefully and remorselessly. Unfortunately, nothing like this is happening in contemporary Russia. Memorials to the victims of Stalinism are rare and, under Putin, the fingerprints of Stalinism are carefully erased. Few Russians are prepared to face up to their past as candidly as a new generation of Germans are prepared to do.

The Post-War Constitutional Model

In May 1949 the German Basic Law, which served as the new constitution, was approved. The basic difference between the Weimar constitution and the new federal one is the limitation of the powers of both the President and the legislature in favour of increased scope of action for the Chancellor. The President is no longer elected on a popular basis, but indirectly by a special convention consisting of the members of the lower house (Bundestag) plus an equal number of persons elected by the

parliaments of the Länder. The President can no longer declare a state of emergency and largely occupies the position of a ceremonial head of state.

Predominant legislative power is vested in the lower house, the Bundestag. Its members are elected by a proportional list system combined with single-member constituencies. But the Bundestag shares legislative powers with the federal upper house, the Bundesrat, whose members represent their respective Länder governments. When deadlocks occur between the two chambers over draft laws, a joint arbitration committee proposes a compromise on which both houses must then vote. The Bundestag may override general laws rejected by the Bundesrat by a two-thirds majority. The constitution also prescribes a “constructive vote of no confidence” in the Chancellor, i.e. the Bundestag can only replace a sitting Chancellor and Cabinet by electing a new Chancellor and Cabinet. If a Chancellor asks for a vote of confidence and does not receive it, he (she) has the right to ask the President to dissolve the Bundestag. If the Bundestag cannot agree upon a successor within two weeks, new elections must be held.

Since 1949 this system has provided a high degree of stability. The political scene has been dominated by two major parties, the Christian Democratic Union (CDU), a conservative party, and the Social Democratic Party (SPD), a secularist and welfare-statist party. There are also two other minor parties, the Free Democratic Party and the Green Party – both of which have been acting in coalition with the two major parties.

The German Education Model

Since the Middle Ages, Germany has developed a school system based on a three-tier structure: the Hauptschule (for students who hope to go on to an apprenticeship), the Realschule (whose graduates typically take white-collar jobs) and the Gymnasium (awarding the Abitur that admits the holder to university). Only at the Grundschule (elementary level) are pupils from all ability groups taught together.

Education has long been a battleground for ideologically inspired conflict. Leftists insist on a unified school system where all ability groups are taught together – to ensure equal opportunities for all. The right insists on keeping the stratified system. Since the mid-sixties a number of Gesamtschulen (comprehensive schools) have been opened, but by 2010 only around 1,000 of 20,000 secondary schools were Gesamtschulen.

In most Länder, following four years of elementary school, pupils are streamed into one of the three kinds of secondary school. It is not impossible to change between school types, but pupils who cannot cope with Gymnasium standards may find themselves transferred – or “demoted” in the eyes of leftists. Parents tend to spend extra money on supplementary private tutoring. In 2006 reforms were introduced to give schools more say over curricula content and to introduce a wider range of “competencies”.

It would seem that Germany’s “dual system” of vocational training, which combines classroom instruction with work experience, is producing meaningful dividends for the economy. It has produced a large pool of skilled workers for the country, since around 50 percent of German high-school students go on to dual training in one of 344 trades, from tanner to dental technician. Many of the courses are set by committees consisting of representatives of trade unions and employers’ federations. As a result, German youth unemployment is much lower than elsewhere in the EU. Compared to many other

members of the EU, Germany has become a huge social and economic success based on the productive efficiency of its people.

German Social Solidarity

The philosophical underpinnings of German precepts of social solidarity under the umbrella of the state have taken a convoluted path through the writings of many philosophers – from Kant and Hegel to Marx and Engels. Particularly since the Age of Enlightenment, philosophers have been preoccupied with attempts to identify the highest values of civilised societies. Few efforts have managed to proceed beyond the age-old triad that dates back to ancient Greece: the postulates of justice (fairness), equality and freedom (liberty).

Christianity has always proclaimed equality before God, and there has never been doubt that equality has something to do with justice (as fairness). Equal treatment of equal cases has always appeared to be the very core of justice, but it was never made clear what is equal – i.e. which cases are equal and what real equality of treatment would mean and involve with reference to the redistribution of property, obligations to work or the allocation of rewards. Human beings are similar in many respects but different in many others. Freedom (or liberty) only gradually emerged as a realistic ideal, even though it had been given philosophical priority since ancient times. Only after feudalism declined and the Renaissance and the Reformation fashioned a new self-assertion of the individual and his worth was it possible to think of freedom as a natural, self-evident condition and inalienable right, rather than a special privilege granted by governmental authority. Freedom stood out as a primordial right in Kant’s philosophy. It has long been recognised that freedom does not mean “unlimited scope for action”, since some actions clash and are incompatible. The solution Kant proposed was that each person’s freedom must be compatible with every other person’s freedom. But Kant’s formula does not contain clear-cut criteria by which to recognise this kind of freedom. It was left to the revolutionary socialism of Karl Marx and his followers to drag the German debate on social solidarity down from the metaphysical heights of the Enlightenment philosophers to the adversarial class-based politics of the late 19th century.

Karl Marx was the harbinger of class-based politics. He was born into a Jewish family in Trier. His family converted to the Christian faith – an act Marx later described as “an entrance-ticket to European society”, and one followed by at least 250,000 Jews in 19th-century Germany. Marx used his anti-Jewish sentiments as the theoretical base for his theory of anti-capitalist communism. Marx argued that self-interest was the profane basis of Judaism: it elevated money as the self-sufficient value of all things. Materialism had deprived the human world and nature of their proper value and alienated the essence of man’s worth and existence. The god of the Jews had been secularised and had become the prime mover of this world. Using their money-power, the Jews had emancipated themselves and enslaved the Christian world. The underlying disease was the religion of money, and its modern embodiment was capitalism. Workers and peasants were exploited not just by Jews but by the entire bourgeois-capitalist class. In this way Marx expanded his anti-Jewish sentiments into an elaborate anti-capitalist theory. The making of money through trade and finance he considered an essentially parasitical and anti-social activity, grounded not merely in race and religion but essentially in class distinctions. In time, socialism became the anti-Semitism of the intellectuals. The Jewish caricature came to be seen as a capitalist monster for most of the second half of the 19th century.


Under the leadership of Chancellor Otto von Bismarck, Germany was the first state to introduce a system of compulsory state health insurance and old-age pensions in the 1880s. Bismarck was by no means attracted to socialist thought per se, but he considered these new welfarist policies useful vote-winners in a world of rapidly widening electoral franchises and the popular appeal of Marxist revolutionary socialism. He realised that the poor outnumbered the rich at the ballot box. Bismarck argued that state health insurance and pension entitlements would engender a more stable state of mind amongst the great mass of unpropertied poor. Hence he advised conservative leaders that “whoever embraces these ideas will come to power”. And so the welfare state was conceived and born in the world of electoral politics.

In current German debates on social justice the word Gerechtigkeit (justice) is often conflated with Gleichheit (equality) or Gleichförmigkeit (uniformity). These debates are linked to many controversial issues such as social solidarity, minimum wages, welfare reform, executive pay, inheritance taxes, income tax levels and levies on wealth. At issue is how a country can provide basic security for the weak, reward talent and effort with higher incomes, provide access to opportunities and balance burdens between generations. The continuation of Germany’s success record ultimately depends on how it manages to find a balance between justice as fairness, equality of opportunity and creative liberty.

The Post-War Economic Model

Germany entered the post-World War II era split in two as the victors divided the country along the lines drawn by the occupation forces. The Soviet Union held the eastern part, which later became the German Democratic Republic. The Western allies held the western part, which became the Federal Republic of Germany. It took several decades, until the end of the Cold War, before Chancellor Helmut Kohl managed to secure the unification of East and West Germany after the fall of the Iron Curtain.

During the period 1946 to 1949, the German population went through terrible hardship: food shortages, unemployment and poverty were the order of the day. On the Western side the bulk of the population had to rely on American food aid. The economy was kept afloat with the aid of the American-funded Marshall Plan. General Clay, the Commander of the US military occupation, appointed an economist of the Freiburg School of “Ordoliberals”, Dr Ludwig Erhard, as economic director of the combined American and British occupation zones. This proved to be an inspired choice. Erhard succeeded in getting approval for the introduction of a new German currency, the D-mark, in 1948 – thereby enabling the German economy to take off.

With capital provided by the Marshall Plan, small and medium-sized companies (Mittelstand) sprang up and became the backbone of the German economy. They were the first enterprises to start producing goods and services for the local market and to start employing the vast reservoir of unemployed people on a decentralised basis. Today they still employ 70 percent of the workforce, account for 46 percent of investment and create 70 percent of all new jobs. As export companies they compete at the top end of the market by devising excellent technical solutions, supplying reliable goods and building long-lasting relationships with their customers. German manufacturers, small, medium-sized and large, became famous for their obsession with detail and their strong emphasis on safety and durability. Their products were expensive but top quality. Their brand names became well known around the world: Mercedes Benz, BMW, Volkswagen, Bayer, BASF, Miele, Behr, Bechstein, Trumpf, Siemens, SAP, etc. Most are focused on improving their products rather than on the fast-moving mass consumer markets ruled by pushy marketing. In time, Germany’s export prowess pushed it to the top

three exporters in the world, creating a current account surplus varying between 6 and 10 percent of GDP.

West Germany’s first Chancellor was the highly respected Konrad Adenauer. He was ably assisted by Ludwig Erhard, his Economics Minister. The result was the German Wirtschaftswunder – an economic miracle. In the miracle years of 1950-1973, GDP grew by an annual average of 6 percent. With this bounty the government was able to extend the benefits of the welfare state. The core of Erhard’s Ordnungspolitik was the concept of a “social market economy”, which came to describe the German economic model in the post-war years. It holds that competition is the best way to prevent public or private concentrations of power and the best guarantee of political liberty, as well as providing a superior economic model. In many ways it looked like a typical “mixed economy” because at all levels of government – federal, state (Länder) and local – public ownership was broad in scope. It included transportation systems, telephone, telegraph and postal services, as well as radio and television networks and utilities. Partial public ownership also extended to coal, iron, steel, shipbuilding and other manufacturing activities. But in contrast to the practice in France, the German state did not take direct control. It created a network of institutions to enable the market to function effectively. The economy operated under the co-operative management of business and labour, called Mitbestimmung (co-determination). This corporatist system was embodied in works councils (Betriebsräte) and employee representation on company supervisory boards. It propelled Germany to the centre of the European economic order within a decade and firmly established it as the locomotive of European economic growth. Germany also provided job opportunities to millions of migrants (guest workers) from Turkey, Greece, Italy and Spain.

In the 2002 election Chancellor Gerhard Schroeder retained office with the support of the Green Party. The Chancellor squeaked home by polling just 6,000 votes more, in an electorate of 61.4 million, than the conservative Christian Democrats and their Bavarian sister party, the Christian Social Union. Mr Schroeder persuaded the Ossis (East Germans) that he was their friend and offered cash and a spirit of national solidarity. With the two main parties finishing neck and neck on 38.5 percent of the national vote each, the Greens provided coalition support for a centre-left alliance. But the Greens were more focussed on environmental issues and less concerned about economic fundamentals.

The first years of the new millennium saw the German economy weakening. Germany was plagued by severe economic stagnation. Although its economy was the biggest in Europe, larger by a third than either Britain’s or France’s, the German economy showed the lowest growth figures in the whole of the European Union. The German economy was still struggling to fully absorb the bankrupt East Germans; it was stifled by a hugely restrictive and intrusive web of regulations, an inflexible and protected labour force and an over-generous welfare system. Unemployment stood at 10 percent of the workforce in 2002. Germany’s biggest firms were setting up plants abroad to manufacture products more cheaply. Bankruptcies increased to unprecedented levels. The hourly cost of labour in the manufacturing industry in Germany (including wages, social security and pension contributions) was 13 percent higher than in the USA, 43 percent more than in the UK and 59 percent more than in Spain. Its global share of exports declined from around 12 percent in 1992 to around 9 percent in 2002. Venerated firms became vulnerable to hostile foreign raiders. Commentators described Germany as the “sick man of Europe”.

During his second term, which started in 2002, Chancellor Schroeder appointed Peter Hartz, a top VW manager, to propose reform measures for Germany’s onerous labour-cum-welfare system. Mr Hartz’s proposals aimed to make the labour market more flexible by reducing job protection and

lowering social security contributions for part-time jobs. They involved the creation of “personnel service agencies”, run by private enterprises, which would take on people who had previously been unemployed and hire them out as temporary workers. Those who rejected an offer of employment from such an agency faced the risk of having their benefits docked. He also proposed “mini-jobs” with private households where employers paid only a flat rate of 10 percent in social security contributions. Self-employed people were given tax incentives to set up companies with a minimum of paperwork, with earnings of up to €25,000 taxed at a nominal rate of 10 percent. The onus was placed on job seekers to explain why a job offer was unsuitable before the state would provide welfare benefits.

Since 2006 the German economy has experienced a healthy rebirth. Its exports exceeded those of any other country in the world – and remained world class. The economy started growing again at a rate close to 2 percent. After years of chronic depression, the general mood started to improve. The new Chancellor, Angela Merkel, engendered a feeling of optimism and the political-economic system started delivering the incremental changes expected of the methodical new Chancellor. A physicist by training, she took office in November 2005 at the head of a grand coalition of the two major parties, the CDU and the SPD. By taking “small steps” she succeeded in reducing the budget deficit, increasing the value-added tax rate from 16 percent to 19 percent and curtailing the veto rights of the Länder over federal affairs in return for giving them more local powers over education.

Despite the best efforts of the Hartz proposals, Germany’s labour market remained bureaucratic and over-protective of vested union interests. Wages are still set by national peak-level bargaining. The effect on labour costs is magnified by the way the country finances its welfare state: through a payroll tax with matching contributions from employees and employers. Contributions added up to 40 percent of gross income in 2006, compared to 28 percent in 1970. In effect, a de facto minimum wage is set by welfare benefits.

The Energiewende Policy

The German economy was saddled with the Energiewende policy by the left-wing government under Chancellor Gerhard Schroeder, who came to power after the fall of Chancellor Helmut Kohl in 1998, when the Social Democrats (SPD) formed a coalition with the Greens. Chancellor Schroeder was pressured by the Greens in 2000 to introduce an ambitious but highly risky policy to cut carbon emissions with renewable energy. The initial plan was to phase out nuclear power by 2050.

After the Fukushima disaster in 2011, Chancellor Merkel, in a knee-jerk reaction, decided to order the immediate closure of seven reactors and to phase out the remaining 17 reactors by 2022. Germany reaffirmed its clean-energy goals which involved cutting greenhouse-gas emissions from 1990 levels by 40 percent by 2020 and by 80 percent by 2050. It meant those targets were to be met without nuclear power.

In 2010 Germany’s nuclear power stations supplied 23 percent of its electricity. Around 86 percent of its electricity was provided by an oligopoly of big suppliers: E.ON, EnBW, RWE and Vattenfall. Then the fortunes of Germany’s energy suppliers started to change very swiftly. First the European Commission (EC) forced them to spin off their transmission networks. Then the German government ordered them to take their nuclear plants – their most predictable source of revenue – offline by 2022. In addition, the power companies are obliged to pay a nuclear-fuel tax of around €2.3 billion a year until 2016.


The Energiewende policy also opened the door to thousands of heavily subsidised wind farms and solar arrays across the countryside. The renewable energy law entitles anyone who puts up a solar panel or a windmill to sell surplus power to the grid, receiving a generous “feed-in tariff” guaranteed over 20 years. This gives electricity generated from renewable sources priority over conventional power and has pushed its contribution up to close to 20 percent of electricity output. The government’s plan is to push the renewable energy target up to 35 percent by 2020.

The renewable energy push has unleashed an avalanche of “energy co-operatives” – growing from fewer than 100 in 2008 to around 600 in 2011. Solar panels have in many cases migrated from the countryside to the rooftops of family houses and apartment blocks. These changes have perked up local economies in windy areas and have become important sources of revenue in smaller settlements as surplus power is intermittently fed into the grid.

But the Energiewende has a serious downside too. It raises the cost of electricity, unsettles supply and provokes rising resistance at the local level. The cost element is a very serious matter. Wholesale electricity prices have already risen significantly and are predicted to be 75 percent higher by 2025. Further capital expenditure on transmission lines is required to accommodate the many smaller input points. Intermittent wind and sun power create a need for back-up or base-load generators, while making it difficult to develop business models that justify investment in such standby power generators.

From a macroeconomic perspective, significant electricity price increases have cumulative effects on all facets of economic life: industrial costs, household budgets, transport costs, export prices, etc. In particular, they affect the competitive strength of German industrial output, which is in many ways the mainstay of the German economy. In addition, the intermittent nature of wind and solar electricity generation affects the reliability of energy supply. It urgently requires backup generation facilities which, in the absence of nuclear generators, have to fall back on gas and coal power stations – defeating the carbon-emission reduction objectives. The subsidy-fed explosion of wind and solar power generation has distorted the allocation of capital investment. It has not only pushed up industry’s electricity bills relative to its competitors’, but exposed firms to the instability of power interruptions. Since most energy-intensive consumers are shielded from higher tariffs, other segments of society, including ordinary folk, have to foot the bill. Subsidy distortions destroy normal market dynamics and expose the energy industry to bureaucratic intervention, pressure from the renewables lobby and a wasteful transition from fossil and nuclear fuel to unreliable renewable energy. The price will be high and the risks are large. Extra subsidies may be needed to encourage investment in standby generation capacity.

Chancellor Angela Merkel has taken charge of the implementation of the Energiewende policy in the hope of staying ahead of the risks and pushing back against the cost escalation. Germany is now exposed to a greater dependence on the gas and oil exports of Putin’s Russia. If the possibility of increases in the charges for carbon emissions remains open ended, investment in standby gas and coal-fired power stations will remain uncertain. The total impact of the Energiewende policy has cast a cloud of uncertainty over the future of the German manufacturing prowess.

Chancellor Merkel is likely to lose the support of the bulk of her own political base, the Christian Democrats (CDU) and the Free Democrats (FDP). She now seems to rely on the support of her traditional opposition, the Social Democrats (SPD) and the Greens, in addition to her own dwindling conservative supporters. These support groups are unlikely to return to nuclear power generation or to support the introduction of the technology of “directional drilling” to tap coal-seam gas resources. Germany would thereby forgo the potential of an important new technological breakthrough in the field of energy generation.

Germany and the Euro-zone Imbroglio

The German economic model, together with its highly decentralised federal political system, makes Germany an important partner and player in the European Union. But the German Chancellor cannot escape political-economic factors on the home front. Hence, the interplay of these factors determines what can be expected from German inputs and reactions to efforts to solve euro zone problems. Germans have their own unique set of expectations, priorities and contributions. Understanding them requires a proper grasp of Germany’s post-war emergence as Europe’s economic powerhouse, its intricate sensitivities about its Nazi history and its reading of the causes and implications of the euro zone’s current frailty.

It is within this intricate, fine-tuned, minutely structured and finely balanced society that the struggling Mediterranean members of the euro zone are seeking assistance to help them recover from decades of deficit spending, mountains of public debt and sclerotic economic growth. Germans do not appear to be fundamentally opposed to European co-operation. They have been core members of the “European project” since the Treaty of Rome in March 1957, which paved the way for the formation of the European Economic Community (EEC), which later transformed into the European Union (EU) with its 27 members today. The Germans also allowed their cherished Deutsche Mark to be replaced by the Euro as the legal tender in the 17-member euro zone currency union. They also agreed to accept the authority of the European Central Bank (ECB) to set a single interest rate for the whole euro zone. They appear to see no alternative to a continuation of the “European project” on condition that it proceeds “step by step” – in the words of Angela Merkel. But the Germans do not want to be its milch-cow and resent being made the object of special burdens and impositions. At home, monetary policy had long been entrusted to their own independent central bank, the Bundesbank. Their political system is based on a broad consensus: checks and balances to restrict the power of government, a set of social and political institutions which have primacy over the market (the law, the constitution and a supreme court to interpret it), the corporate governance principle of Mitbestimmung, and consensus politics relating to modest tax rises and cuts in the budget deficit. These are the factors that produce the stability and progress of German society. On what grounds should these factors be jeopardised?

As a member of the EU and more directly the euro zone, Germany is now confronted with a conundrum: how to deal with the dangers of being dragged down into a political-economic abyss if they jump into the whirlpool to help out their profligate co-members along the Mediterranean. How much of its resources can Germany afford to sacrifice before it also drowns? Germany only recently recovered from lifting its 17 million Eastern compatriots out of the tribulations of Soviet Communist rule. Can the German economy with its 82 million citizens, with its head barely above the water, rescue its numerous neighbours along the Mediterranean with a combined total approaching 200 million? Even a strong swimmer can only save a limited number of drowning swimmers.

The European Disease

It is important to realise that all the social-democracies in Europe and elsewhere suffer from the same political-economic disease: living beyond their means, financed by compliant bond markets that allow over-spending governments to use chronic deficit budgeting to cover their spending on unaffordable welfarist benefits and oversized bureaucracies. In all the advanced countries, both public and private households are saddled with heavy debt burdens as a result of their deep-rooted addiction to credit financing – without proper regard to affordability. Sooner or later the reality of bottom-line discipline comes into play.

When the GFC struck in 2008, many advanced economies were already vulnerable to contagion on many fronts. Government debt levels were already unsustainably high as a result of the cumulative effects of deficit budgeting over many years. Governments were already committed to high levels of spending on welfarist benefits, public sector employment and, in some cases, heavy defence or security spending. In most cases spending levels increased much faster than income levels when the GFC virtually pulled the plug on economic growth.

The average growth rate of every major advanced country has been on the decline for several decades. The economic sclerosis affecting the advanced economies manifests itself in many symptoms: unemployment rates above 10 percent, chronic budget deficits with levels of public spending approaching, or in several cases exceeding, 50 percent of national output, and social welfare systems placing an unsustainable tax burden on society. Public debt levels have created “debt traps” in which the interest rates charged by bond financiers have risen to levels where debt-service costs outstrip governments’ capacity to pay (as in the cases of Greece, Portugal, Spain, Italy and France). Inflationary expectations are undermining confidence to invest in job-creating economic growth. In most social democracies, public expenditure burdens have risen faster than economic growth.

The critical variables are the levels of confidence of the buyers of government bonds, the interest rates they require and the repayment terms involved. These requirements, in turn, depend on perceptions of the relevant government’s fiscal rectitude and the economic potential of the country and the willingness and ability of its taxpayers to shoulder the commitments made by their governments. A country with a firm growth potential, a stable political system and a convincing record of sound economic management is likelier to raise loans domestically or internationally to cover its debt requirements. The ability of a country to meet its debt obligations depends on its projected disposable income stream. You cannot spend yourself out of debt with money you don’t have.

The Pressures of the World of Finance

The Economist, an influential newspaper owned by financial interests in the UK and beholden to financial interests in the City of London and in Wall Street, has systematically been propagating the idea that “... the fate of the world economy depends on Germany’s Chancellor, Angela Merkel” (see The Economist, June 9th, 2012, p.13). It argued that the fate of the Mediterranean stragglers has been compounded by errors in Europe’s creditor countries: their overwhelming focus on austerity, a succession of half-baked rescue plans, and the refusal to lay out a clear path for the fiscal and banking integration that is needed for the single currency to survive. Since Germany has largely determined this response, The Economist preferred to pick on Angela Merkel, completely ignoring the fact that the mayhem of the GFC was unleashed in the USA’s Wall Street in the first place. It argued that a “consensus” has developed that Merkel had to shift from austerity to a far greater focus on economic growth (ignoring Germany’s strong growth record), complement the single currency with a banking union (with euro-wide deposit insurance, bank oversight and joint means for the recapitalisation of failing banks) and embrace debt mutualisation to cover the mistakes of wayward members. This message has been repeated in several issues of The Economist and echoed in the Anglophone media. It is clear that the interests of the bondholders are deeply involved in this sustained propaganda campaign.

In the world of finance, it is the investors in the bond market (largely sovereign wealth funds, private equity funds, hedge funds and pension funds) who are the “Scarlet Pimpernels”, operating with impunity under cover in the dark pools of the shadow banking system without proper public accountability. They are the financiers of the deficits of government institutions, large companies and banks. They can manipulate interest rates in capital markets by virtue of sheer size, move share prices in stock markets through high-frequency algorithmic trading, and negotiate secured debt instruments such as covered bonds or derivatives where creditors are first in line to be repaid. They can plunge a country into more debt in order to repay its banking system’s debt. Imposing losses or “haircuts” on bondholders is not something that meetings of finance ministers are likely to do, because of their dependency on bond markets to finance their budget deficits. To avoid moral hazard, bondholders should be required to bear the losses they incur when they make bad loan decisions.

Germany’s Challenges

The critical question for Germany is how to safeguard its own national interests. Will Merkel’s demands for austerity and her refusal to bail out her peers in the euro zone bring about structural reform in Europe? Is it appropriate to take the lead to achieve deeper political-economic integration in the euro zone? Deeper integration means handing more decision-making power to Brussels and saddling Germans with the debts incurred by others. It would move the EU at large, or the euro zone in particular, towards a federal state, with a common fiscal policy and more representative political structures. Berlin argues that European leaders like to talk of mutualising national liabilities, but avoid mentioning sharing national sovereignty. Are the citizens of the euro zone countries überhaupt ready to give up more sovereignty to save the Euro? A process of deeper euro zone integration lacks a clear democratic mandate – not only in Germany, but also in France and other member countries.

Jens Weidmann, President of the German Bundesbank and former economic adviser to Angela Merkel, recently expressed the German perspective as clearly and unambiguously as can be expected from a central banker: “Communalising risks weakens the foundations of a currency union based on fiscal responsibility.” It is unrealistic to expect the German taxpayer to shoulder responsibility for profligate public spending in Mediterranean countries where workers retire at ages below 60 and work fewer hours per week, whereas Germans have to work until the age of 67. No German government can realistically expect to remain in power when it is pressured by foreign interests to commit Germany to accept such debt and risk sharing – in whatever form it is disguised by deft financial engineering devised in the City of London or on Wall Street. Moreover, it is unrealistic to expect the German economy successfully to carry the burden of lifting the euro zone stragglers out of the economic holes they have dug for themselves. During the first two quarters of 2012, the German economy grew by only 0.3 percent. France, Italy and Spain showed no growth at all during the same period.

It requires more objective and independent analysis to determine how the responsibility for fixing the economic distress of the euro zone should be apportioned. If the economic future of the world at large is at stake, then it is a problem requiring the serious attention of all stakeholders: creditor nations as well as trading partners and bondholders. In the world of business there are insolvency laws to deal with the distressed assets of bankrupt enterprises and with the status of creditors who may be financiers, suppliers or employees. In some cases it is possible to steer a business out of trouble by cutting costs, mobilising productive assets and nurturing markets for their products and services. Ultimately, it is the discipline of the bottom line that determines success or failure.

Even under the most favourable conditions, the German economy finds itself in a precarious balance. It currently faces the Herculean task of turning its elaborate and highly subsidised carbon-emissions reduction scheme into a system providing reliable and affordable electricity. Much could go wrong with this ill-conceived intervention, which was initiated by the former left-wing coalition government of the Social Democrats and the Greens. It is unrealistic to expect the German people to volunteer their support for an unlimited transfer of assets to their profligate EU neighbours. It is still more unrealistic to expect the German economy to be strong enough to stand behind trillions of euros’ worth of European debt.


9. The Colonial Dutch and British Influence in South Africa (July 2015)

The discovery of the compass by the end of the 13th century enabled seafarers to undertake long voyages out of sight of land. The first Europeans to navigate their way along the West Coast of Africa were the Portuguese, searching for a sea route to the Far East. Bartholomew Diaz de Novaes set out in August 1487 from Portugal. En route, he touched points along the African continent never before seen by Europeans and, after rounding Cabo Tormentoso (Cape of Storms), he landed at what he called Rio do Infante (Mossel Bay) and triumphantly returned home to report that it was possible to round the southern tip of Africa. The first business expedition to the East was led by Vasco da Gama, who left Lisbon in 1497 with four ships. After several months he reached the port of Mozambique where he encountered Muslim traders. His small fleet progressed northwards where he discovered a thriving Arab port at Mombasa. At Malindi he employed a Hindu pilot to take him across the Arabian Sea to Calicut in India.

In 1581 Jan Huygen van Linschoten was the first Dutchman to complete a return voyage to India. Then in 1595, Cornelis and Frederik de Houtman also completed a successful return voyage and thus started the Dutch involvement. Soon the need arose for a halfway station where the sailors, desperately affected by scurvy, could find fresh fruit and vegetables.

By the 1650s, the Dutch were the world’s leading trading nation. The Vereenigde Oostindische Compagnie (VOC) was the world’s largest trading enterprise with a strong trading post in Jakarta (Batavia). In 1652, Jan van Riebeeck established a refreshment station at Cape Town. Apart from the area around Cape Town, Europeans initially had limited contact with the African interior. There were few natural harbours and much of the coast was either dry desert or wet jungle, making access hazardous.

From the 1760s onward, Europeans made determined efforts to explore the interior of Africa. The major rivers, the Niger and the Nile, were mapped by the British, while the French explored the Sahara Desert and reached Timbuktu, a city which had been famous as a trade centre and a centre of learning since the 1300s. Still later, in the 1800s, the Scotsman David Livingstone, a missionary and doctor, explored the Zambezi River and travelled the continent from east to west. He was followed by the American adventurer Henry Stanley, who explored the Great Lakes region and then sailed down the last unknown river in Africa, the Congo. Both Livingstone and Stanley are thought to have paved the way for the European colonisers in their “scramble for Africa” in the late 19th century.

In November 1884, the German Chancellor, Otto von Bismarck, convened a conference on Africa in Berlin, ostensibly intended to ensure free trade in Africa. The real purpose of the conference was to “define the conditions under which future territorial annexations in Africa might be recognised” – in effect, a charter for the partition of Africa into spheres of influence based on nothing more legitimate than effective occupation.

Historian Niall Ferguson describes the course of events in the following terms: “Across Africa the story repeated itself, chiefs hoodwinked, tribes dispossessed, inheritances signed away with a thumbprint or a shaky cross and any resistance mown down by the Maxim gun.” (Empire – How Britain Made the Modern World, Penguin Books, 2004, p.239)


Ferguson goes on to describe how one by one the nations of Africa were subjugated – the Zulus, the Matabele, the Mashonas, the kingdoms of Niger, the Islamic principality of Kano, the Dinkas and the Masai, the Sudanese Muslims, Benin and Bechuana. By the beginning of the next century, the carve-up was complete. The British had all but realised Rhodes’ vision of unbroken possession from Cape to Cairo. Their African empire stretched northwards from the Cape Colony through Natal, Bechuanaland, Rhodesia and Nyasaland, and southwards from Egypt through the Sudan, Uganda and Kenya. Tanganyika, the German possession, was the only missing link. The Germans had South West Africa (Namibia), Cameroon and Togo. Britain had also acquired the Gambia, Sierra Leone, the Gold Coast and Nigeria in West Africa, as well as northern Somaliland. But West Africa was mostly in French possession. From Tunis and Algeria in the north, downwards through Mauritania, Senegal, French Sudan, Guinea, the Ivory Coast, Upper Volta, Dahomey, Niger, Chad, the French Congo and Gabon, the greater part of West Africa was in French hands. Their only eastern possession was the island of Madagascar. Besides Mozambique and Angola, Portugal retained an enclave in Guinea. Italy acquired Libya, Eritrea and most of Somaliland. The Belgian King owned the vast territory of the Congo. Spain had Rio de Oro. Africa was almost entirely in European hands, and the lion’s share belonged to Britain.

The European Settlement at the Cape

The Dutch set up a refreshment station at the Cape in 1652 and then started occupying the surrounding areas. But for several decades before the Dutch settlement at the Cape was established, various European seafarers used landing posts along the surrounding coasts as a stop over for water and meat supplies. The British, the biggest colonists of all history, arrived first in 1795, but more permanently in 1806 when they drove out the Dutch occupiers and started the Anglicisation of the bulk of Southern Africa. Each wave of colonists brought in their wake their own customs, institutions, languages, practices, knowledge and other cultural attributes.

Over several centuries thousands of passengers and crew aboard Dutch, English, French, Portuguese, Scandinavian and other trading vessels stopped first at Cape Town and later on at other port cities. The Cape was a linchpin in all European shipping networks connected to Asia. Cape Town became the key point of entry for European and Indian Ocean world influences. The trading vessels brought in thousands of migrants from many parts of the world. These included VOC officials, soldiers and artisans from the Netherlands, Germany, Scandinavia and the Baltic; passing crewmen and passengers from across the Atlantic world; slaves from Mozambique, Madagascar, India, Sri Lanka, Java, Bali, Sulawesi and other Indonesian islands; and exiles and convicts from Asian trading posts. Some were temporary sojourners, but many stayed permanently and their descendants became a permanent part of the settler population. In many ways Cape Town became one of the most racially and culturally diverse ‘melting pots’ of the colonial era.

In today’s world a traveller by air over Southern Africa, from Angola, Zambia, Zimbabwe and Mozambique to Namibia, Botswana, Swaziland, Lesotho and South Africa would notice many visible traces of the colonial past. Most visible would be the large cities with their chequerboard layout, port cities with their bustling trading infrastructure, railway and road networks, electric power stations and transmission lines, large dams and irrigation networks, communication towers and discs, large mine dumps and open-cut mines, fenced-in farmland covered with European cattle and sheep breeds, skyscrapers and housing estates, large sports arenas, numerous soccer and rugby playing fields, demarcated nature reserves, large hospitals and cemeteries, shopping centres, churches and cathedrals, many school buildings and university campuses. Less visible, but very real in its impact, are the constitutional principles, financial practices, scientific methods, education standards, language facilities, communication channels, legal standards, technologies and the multitude of cultural ways of doing things.

The Dutch Influence

In April 1652 a group of around 90 Europeans employed by the Vereenigde Oostindische Compagnie (VOC) occupied the area around Table Bay in order to set up a refreshment station for the commercial fleet carrying the spice trade between the East Indies and Europe. At the time the spices – including black pepper, cloves, nutmeg, cinnamon and mace – were essential for flavouring and preserving food. For centuries these commodities had come overland from Asia to Europe along the Spice Road. But with the Portuguese discovery of the sea route to the East Indies via the Cape of Good Hope, attractive new business opportunities opened up as Dutch merchants gradually succeeded in wresting control of the lucrative Asian spice trade from Portugal and Spain early in the 17th century.

It is significant to note that from the beginning of the 17th century to the start of the French Revolution in 1789, the United Provinces of the Netherlands rose in commercial prominence to become a predominant trading power in the world – despite its small land area and the limited size of its population. But its commercial prowess was in large measure driven by its success in financial innovation: the Amsterdam Exchange Bank (Wisselbank), founded in 1607; the VOC itself, the first limited-liability joint-stock trading company and the biggest corporation of its era; and the first stock market (Beurs), opened in 1608, where the shares of the company could be bought and sold. The financial institutions represented by company, bourse and bank provided the triangular foundation for a new kind of economy. (See Niall Ferguson, The Ascent of Money – A Financial History of the World, Allen Lane, London, 2008, pp. 127-136)

The 17th century is often described as the Golden Age of the Dutch Republic. Its merchants were the most successful businessmen in Europe and the VOC was the world’s largest trading corporation. It operated under a charter from the States-General (the Dutch government) and was given sovereign rights in and east of the Cape of Good Hope. By mid-century it was the dominant maritime power in south-east Asia with its fleet numbering some 6,000 ships totalling around 600,000 tons and manned by around 48,000 sailors.

The object of the VOC’s directors, called the Heeren XVII (Lords Seventeen), in establishing the refreshment post at the Cape was to obtain fresh food for shipping fleets and to treat scurvy-stricken seamen – not to establish a costly, expansive new colony. A fort was built, a small number of ‘free burghers’ (employees released from their contracts) were given small pieces of farm land and slaves were imported to work on Company property and on ‘free burgher’ farm land. In time a growing number of slaves were brought from the East to work as masons, carpenters, tailors, cooks and in other trades. The Cape slaves came from diverse linguistic, religious and social backgrounds. A few came from African territories, but more from Madagascar and still more from Indonesia, India and Ceylon – including a large minority of Muslims. From 1711 onwards, there were more slaves than free burghers in the colony. There were always three to four times as many male as female slaves.

Initially the slaves were exclusively employed by the VOC, but in time, as more settlers were given farm land, the wealthy wheat and wine growers in nearby districts also acquired slaves to work on their farms and in their households. Stock farmers in more remote rural districts employed only a small number of slaves together with Khoikhoi herders. By the end of Dutch colonial rule at the Cape, around 1800, there were 25,000 slaves in the colony, compared to around 22,000 European settlers.


According to Hermann Giliomee’s assessment in New History of South Africa (2007), the most important influences exercised by the first European immigrants were Roman-Dutch law, the Reformed Christian religion, early capitalism, as well as two contradictory concepts of society: the one an egalitarian tendency emanating from Europe; the other, the hierarchical tendencies and practices the VOC had developed in running its colonies in the East. It should be added that these perceptions and attitudes, in time, found expression in the community and race relations prevalent at the Cape.

In addition, the Dutch connection brought in its wake a wide-ranging impact on the social, economic, cultural, intellectual and political life of what is today called South Africa, lasting more than 350 years. This influence is still carried by roughly 4 million descendants to this day and is highly visible in all spheres of life.

(See Karel Schoeman – Die Suidhoek van Afrika – Geskrifte oor Suid-Afrika uit die Nederlandse Tyd 1652-1806, Protea Boekhuis, Pretoria, 2002, for more than a hundred contemporary writings about the Cape during the Dutch colonial period)

The French Huguenots

When van Riebeeck landed at the Cape, his entourage, which included a few women and children, consisted mostly of soldiers and petty officials of the VOC. They were mainly Dutch, but partly German, in nationality. These early colonists also brought with them the values, skills, knowledge, institutions and lifestyles that they had acquired in the countries where they were born and raised.

When Simon van der Stel took over as governor in 1679, the Cape had gradually transformed from a refreshment post into an agricultural colony. By then the population was still small: 287 officials, 87 ‘free burghers’, 55 women, 117 Dutch and ‘mixed’ children, 30 ‘white’ labourers and 191 slaves. Van der Stel initiated programmes to expand the settlement by making new grants of land around Stellenbosch and Drakenstein, though only a handful of immigrants came from the Netherlands. Special efforts were made to bring orphaned girls from the Netherlands to increase the number of women – but without much success.

The situation changed when in 1688 and 1689 a total of around 200 French Huguenots came to the Cape and were settled as family units. These settlers were Calvinist Protestant Christians who fled persecution by the Catholic authorities in France following the revocation of the Edict of Nantes in 1685. Although they represented only about one-quarter of the ‘free burghers’ in the Cape at the time, the French Huguenots exerted an important influence in view of their settlement as family units – not as single males. In order to enhance the process of assimilation, the French Huguenots were settled in several districts: Stellenbosch, Drakenstein, Franschhoek, Paarl and Wagenmakersvallei (today’s Wellington).

The French Huguenots should be considered as the founders of the viticulture and wine industry in South Africa. They also enhanced the productivity of wheat cultivation. They were allowed to bring their own parson, Pierre Simond, and their own schoolmaster, Paul Roux. Their Calvinist principles and religion made a deep imprint on the values and lifestyles of the local colonists. Famous Voortrekker leaders such as Piet Retief and Sarel Cilliers were direct descendants of the Huguenots. Today there are dozens of French family names in circulation in the family trees of South Africans.


By 1793 there were 13,830 burghers (4,032 men, 2,730 women and 7,068 children) in the Cape colony. These were minuscule numbers compared with the scale of European settlement in the Americas by that time, but their enterprising and innovative influence soon became incalculable.

Innovative Impact

Van Riebeeck and his successors intended that the free burghers should practise intensive agriculture along Dutch lines, but lacking adequate and skilled labour, intensive agriculture was not successful. Some became artisans and traders in Cape Town, catering to the needs of visiting French, English and Scandinavian ships as well as to the Dutch fleets that stopped at the Cape each year. Those remaining on the land acquired larger holdings and became mixed farmers, producing grain and wine but also pasturing sheep and cattle beyond the limits of their land grants. This activity foreshadowed the later frontier expansion of the ‘trekboere’.

The immigration of Germans was less heralded than that of the French but considerably more important in numerical terms. During the Dutch-controlled colonial period, as many as 15,000 Germans settled in the Cape. Many were VOC employees – soldiers, officials and tradesmen – and a large number became private teachers.

The Cape settler community led a modest life, away from urban civilisation and the centres of culture and learning. A relatively small population was spread over a large geographical area so that regular church and school attendance opportunities were limited. By 1800 the total European-descent population of around 22,000 was spread over four stretched out districts: Cape Town, Stellenbosch, Swellendam and Graaff-Reinet – an area larger than the Netherlands and Belgium combined.

Exclusivity in Social Identity and Race Relations

Although race as a distinct legal category did not manifest itself in South African political life before the 19th century, 17th-century Europeans appear to have been keenly conscious of the correspondence between differences in biological characteristics such as skin pigmentation and differences in culture. The early modern Europeans considered themselves heirs to and beneficiaries of a superior civilisation and religion. Charles Boxer, a historian of European colonialism, found that Portuguese, Spaniards, English, French and Dutch were all convinced that a Christian European was ipso facto superior to members of another race. (See Charles Boxer, The Dutch Seaborne Empire, London, Hutchinson, 1965, p.233)

Giliomee describes how some people in Europe and the Cape sought to find Biblical sanction for their prejudices: the legends of Noah’s curse on Canaan and the children of Ham were used as justification for racial slavery and servitude. During the 17th century, Europeans attempted to justify the enslavement of Africans by associating Ham with black people. In Shakespearean England, stereotypes about blacks being sinful, lustful and murderous abounded, but the same imagery was also applied to Jews. The Dutch likewise deemed Africans to be godless and licentious heathens: people fundamentally different from Europeans.

(For a comprehensive survey of the European society in the 17th century, see Karel Schoeman, Patrisiërs en Prinse – Die Europese Samelewing en die Stigting van ‘n Kolonie aan die Kaap, 1619- 1715, Protea Boekhuis, Pretoria, 2008)


The prejudices the early colonists brought with them were those typical of their age. Indigenous blacks were considered ‘heathen’ and ‘inferior’ in contrast to the colonists’ own ‘Christian’ and ‘superior’ European identity. In these matters their attitudes were paralleled by those of colonists in other parts of the world: by British colonists in Virginia or New England, Jamaica or Trinidad and later in Australia and New Zealand; by Spanish colonists in Mexico, Peru and elsewhere in Latin America; and by Portuguese colonists in Angola, Mozambique, Goa and Brazil. To this day Brazilians struggle to deal with the vestiges of race discrimination in their society in the face of a comprehensive programme of ‘affirmative action’ or ‘reverse discrimination’.

From the perspective of the indigenous locals, the colonists were seen as ‘intruders’ who came across the sea to invade and occupy their land and to take away their livestock and hunting grounds. The intruders were visibly different: they spoke different languages, practised a different religion, followed a different lifestyle and value system, and carried different weaponry.

From the earliest encounters between the indigenous and intruding groups, a pattern of differentiated group identities emerged which, in time, became the hallmark of community and race relations. Group loyalties based upon race, ethnic identity, language, caste and religion started to emerge and resulted in conflict patterns. Disparities between ‘us’ and ‘them’ hardened into patterns of solidarity, which were further complicated by the introduction of growing numbers of slaves. Although the VOC in the Netherlands impressed upon the Cape authorities the need to promote peaceful and harmonious relations with the indigenous peoples, the relatively small number of settlers succeeded in establishing themselves in a culturally dominant position.

Race – even though resting on subjective human sentiment and not entirely discrete as a result of early examples of miscegenation – became a basis of social differentiation on account of the instant recognition of variance in skin pigmentation, hair structure and facial characteristics. But racial identity only became pronounced as the socio-cultural setting became more multi-racial. A set of stereotypes, usually pejorative, developed amongst the colonisers as well as amongst the subjugated.

The introduction of slavery had an important impact on social stratification at the Cape. Hermann Giliomee in The Afrikaners – Biography of a People, Tafelberg, 2005, p.12, states that: “It transformed the social ethos of society by defining freedom and the status hierarchy. High status belonged to those who were free, kept slaves and did not have to work with their hands. To be a servant doing manual work in the employ of someone else carried the connotation of slave status, which the burghers at the Cape did everything possible to avoid... Almost unobtrusively the institution of slavery took a grip on social and economic life.”

According to Giliomee, the Cape also acquired influences from trading stations in the East, especially Ceylon (Sri Lanka) and Batavia (Java). It was common for Dutchmen assigned to trading posts in the East to develop liaisons with Eastern women. The offspring born of these associations were considered part of the European community – as was Simon van der Stel, who became governor at the Cape in 1679. It was claimed that almost everyone with some pretensions to status in Batavia keenly aspired to the role of a slave owner who abstained from manual labour. Other characteristics acquired from Batavia included the power that officials wielded, the strict company etiquette, the impotence of the church as an institution, the defective education, the conspicuous consumption, the dependence on slave labour, the use of Malay and Portuguese as lingua franca in the early decades and the introduction of the office of Fiscal to maintain law and order. Also transmitted were subtler influences from Batavia such as the ‘pace-of-life’, graces and aristocratic attitudes, mixed with the technology, fashions and Christian cultural traits of European society.

Slaves in private ownership were generally better off than those owned by the VOC; they were better clothed and better educated. The VOC slaves were housed in the ‘Slave Lodge’, where they were educated in the Reformed catechism. Some private slave owners kept up to twenty slaves, but the average was three adult slaves, who lived in close interaction with the slave-owning families. Slaves from Africa and Madagascar were valued less because of their limited communication skills. Malayan slaves were in higher demand for their trade skills. Slaves born at the Cape were in higher demand still because they were less likely to escape than imported ones.

Children born of European-slave interbreeding were colloquially called ‘Bastaarden’ (or ‘Basters’). Although marriages between burghers and slaves were officially forbidden, extra-marital associations were common, particularly involving slave women with soldiers, sailors and burghers. In terms of Roman law, the child of an extra-marital union acquired the status of the mother. Commissioner Van Reede determined that children born from extra-marital relationships could be trained as tradesmen and ‘set free’ at the age of 25.

Although the ‘Baster’ children did not easily fit into any of the major groupings at the Cape as a result of existing prejudices, some individuals nevertheless became successful and wealthy. By the end of the 18th century there were more than 2,000 ‘Basters’ who were recognised as baptised Christians. Some ‘Basters’, such as Adam Kok, moved northwards across the Gariep (Orange) River and gathered remnants of scattered Khoikhoi groups to form new communities, such as the ‘Griekwa’.

(See Johan de Villiers, “Die Nederlandse Era aan die Kaap” in Fransjohan Pretorius, ed. Geskiedenis van Suid-Afrika – Van Voortye tot Vandag, Tafelberg, Kaapstad, pp.51-53)

J.A. Heese’s genealogical research has revealed that around seven percent of Afrikaner families have a non-European ancestor (See J.A. Heese, Die Herkoms van die Afrikaner, 1657-1867, Kaapstad, Balkema, 1971). Since there were more than twice as many men as women during the first seventy-five years, marriages between white men and fair-skinned non-European women did occur. Most liaisons occurred outside wedlock, resulting in large-scale miscegenation in the form of casual sex, especially in the slave lodge frequented by local European men as well as the thousands of sailors and soldiers who came to the Cape. Many of their descendants were absorbed into the community. Males of non-European origin also entered ‘white society’. Burgher status was not racially exclusive, and access through birth, marriage or purchase enabled at least some descendants of freed slaves to obtain burgher status. Identity was based on a pragmatic mix of status, honour and networking.

In time, as more prominent racial, cultural, ethnic and economic class distinctions emerged, these social identities became a fertile seedbed for inter-group cleavages and conflict. The different groups varied in terms of the links binding their members together, but in most cases their solidarity depended on such factors as an awareness of a common culture, a common ancestry, a shared historical experience and a common language. The degree to which individuals were bound to cultural or sub-cultural groups by common loyalty was, like loyalty itself, a matter of degree.

Based on systematic social history research involving several leading South African and international historians, a publication edited by Nigel Worden, Cape Town: Between East and West, Jacana Media, Pretoria, 2012, provides a thematic focus on the complex interaction of social identities during the Dutch colonial period at the Cape. It describes the cultural landscape of the Cape Colony with a sharp focus on the interaction of VOC officials, the free burgher society and its under-classes, south-east Asian convicts, Chinese exiles, slaves from Batavia, India, Madagascar and Africa, as well as a constant stream of thousands of soldiers, sailors and passengers who crowded the streets of Cape Town. It positions the Cape in the wider context of the Atlantic and Indian oceans and explores its complex connections with Europe, Asia and Africa. It shows how the Cape settlement was shaped by forces beyond its immediate geographical confines, being part of a wide network of interchanges of people, goods and ideas across continents and oceans.

The wide array of descendants of the early settlers at the Cape drew from the cultural repertoire of their homelands to build new identities. This is particularly true of slaves and their descendants, who formed around 40 percent of the permanent population at the Cape throughout the 18th century. Various contributors to Nigel Worden’s 2012 publication show how distinctive slave linguistic, cultural and religious beliefs were manifested in the colony. Men and women constructed their identities with whatever resources they had and adapted themselves with a high degree of innovation and flexibility.

After the end of VOC rule in the period 1795-1806, much was to change in the Cape Colony. New ideas about a particular type of British colonial respectability entered the social and cultural environment. Some locals consciously adapted themselves to this; others deliberately rejected it. They came to identify themselves more consciously as locals as the process of local self-fashioning continued.

The South African Dutch-speaking group, which gradually morphed into Afrikaans-speakers, essentially emerged from the confluence of people originating not only from the Netherlands but also from Germany, France and Britain, with an admixture of slaves. This biological and cultural synthesis created a new ethnic group that considered itself resolutely indigenous and gave rise to one of the first nationalist movements on the African continent, as well as to a language of its own. It is the only African language that contains the name of the African continent in its appellation – Afrikaans. Today hardly any Afrikaans-speakers are foreign-born.

The factor of race (however imperfectly defined), being the most ‘visible’ determinant of identity and group affiliation, acted from the earliest encounters between the various population groups as a differentiator of cultural values, economic interests and social status. In consequence certain ‘group images’ were stereotyped. These identification patterns were reinforced by military hostilities and by folk beliefs and practices. In time the differentiation based on race or colour was formalised in occupational, religious and political structures. Racial separation, buttressed by legal and social barriers, formed the basis of ‘apartheid’ policies in South Africa – apartheid being the Afrikaans word for ‘apartness’ or segregation.

Impact on Indigenous Peoples

The two main indigenous peoples living in the Cape area at the time were the San (‘Bushmen’) and the Khoikhoi (Khoekhoen). The ‘Bushmen’, as the San were colloquially called, were the descendants of the earliest inhabitants of Southern Africa. They were hunter-gatherers who were constantly at odds with grazier tribes such as the Khoikhoi – and later with the colonists.

In 1688 the first major attack by Bushmen on colonist herdsmen took place in the Drakenstein district, which led to an ongoing war against the Bushmen. Soldiers were garrisoned in the Drakenstein and arrangements were made to strengthen their defences with free-burgher commandos. Mountain passes were patrolled and sporadic violent conflicts occurred. In 1739 and again in 1754 commandos were mobilised to confront the marauding Bushmen.

Periodic stock theft and commando reprisals continued as the inward migration of the colonists expanded into the Roggeveld, Hantam, Nuweveld and Sneeuberg regions. Many people were killed on both sides of the conflict; in 1774 alone around 500 Bushmen were killed and members of their families imprisoned. By 1795 these conflicts had declined as the surviving Bushmen retreated northwards to areas around the ‘Gariep’ (Orange) River and beyond.

The Khoikhoi (or Khoekhoen), colloquially known as the ‘Hottentotten’, were essentially graziers, herding cattle and sheep. They had been in constant conflict with marauding Bushmen since ancient times, but soon also came into conflict with the ‘trekboer’ colonists. Inter-tribal battles also took a heavy toll, but the heaviest toll on their numbers was taken by several waves of smallpox epidemics: in 1713, 1755 and 1767. The first outbreak of smallpox occurred on 18 April 1713 at the VOC slave lodge in Cape Town; it was thought that the disease had been transmitted via the laundry of passing seafarers. Several travellers, such as Francois Valentijn and Anders Sparrman, reported evidence of large losses of life in the inland regions. According to a census taken in 1805, there were only around 20,000 Khoikhoi left in the Cape Colony.

Traditionally the Khoikhoi were nomadic herders who moved around following the availability of grazing and water supplies. As a growing number of farms were allocated by the authorities to the Cape colonists, conflicts with the nomadic Khoikhoi arose. By 1714 a total of around 400 farms had been allocated for wine and wheat production, along with an unknown number of farms for grazing purposes. Grazing licences were also issued for specific areas not allocated for occupancy farming. Many Khoikhoi found employment with the farmers as herdsmen and farmhands. Missionary societies established several mission stations such as Baviaanskloof (later called Genadendal), Elim, Suurbraak, Mamre, Zoar and Wupperthal.

Detribalised Khoikhoi gradually became marginalised members of the Cape colonial community. Intermarriage with colonists occurred and gave rise to the Cape Coloured people of today. Some Khoikhoi intermingled with both slaves and colonists. In 1781 a ‘Corps Bastaard Hottentotten’ and later the ‘Corps Pandoeren’ were created to serve as part of the defence arrangements at the Cape. Khoikhoi elements were also incorporated in the ‘commando system’ in the frontier areas.

(See Johan de Villiers, “Die Nederlandse Era aan die Kaap” in Fransjohan Pretorius, op.cit. pp.47-50)

Religion and Education

When considering the Dutch legacy in religious matters it is necessary to pay attention to the combined role of the Dutch Bible (Statenbijbel), the Dutch Reformed Church as a social institution, Calvinist theology as a Protestant religious doctrine and the normative value system derived from these elements of religion. From the 17th century to early in the 20th century, religion and education developed hand in hand. Most efforts to improve educational standards were initiated at community level by church leaders and congregations.

An important milestone in the emergence of standard Dutch (Algemeen Beschaafd Nederlands) was the translation of the Bible from Hebrew and Greek sources into what was believed to be a non-dialectical Dutch. Each Dutch province had its own distinct dialect and it took considerable compromise and ‘revisions’ to arrive at standardised grammatical rules and terminology. The province of Holland with its major cities of Amsterdam, Den Haag, Haarlem and Leiden played a predominant role – on account of being not only the home ground of the financially powerful, but also the seat of an active cultural life.

When the ‘Statenbijbel’ was finally made available around 1650 it became the most widely read book in the Dutch language, and it also found its way into the homes of the settlers at the Cape as an expression of written language. In many a family the Bible was the only book that was readily available. The Dutch Reformed Church, which was based on Calvinist doctrine, was the official denomination at the Cape. In line with Protestant religious principles, any Christian believer could seek God’s grace without anyone as intermediary. The church and its institutions were controlled by the members of the parish and subject to their critical evaluation.

The 90 persons that Van Riebeeck brought with him to the Cape included a few women and children, but consisted mostly of soldiers and petty officials of the VOC – mainly Dutch, with a few Germans. As noted earlier, their prejudices were those typical of their age, with indigenous blacks considered ‘heathen’ and inferior in contrast to their own ‘Christian’ and ‘superior’ European identity.

The first church congregation was established in 1665 in Cape Town, but the first church building was completed only in 1704. The first school was opened in 1663, focussing on religious education, reading, writing and arithmetic. In the inland districts, churches were built in Stellenbosch, Drakenstein, Roodezand, Zwartland, Swellendam and Graaff-Reinet. In most instances the farmers joined forces to hire teachers for their farm schools. Church councils generally played a major role in improving teaching standards. In order to be confirmed as a member of the church, one had to be able to read from the Bible. Hence, the underlying principles of teaching and learning were heavily imbued with the predominant Calvinist, Christian theology with its strong Bible-centric focus.

The sparse population spread over a vast area, combined with the scarcity of adequately trained teachers, made the provision of education services extremely difficult. In many instances literacy and numeracy were transmitted within family units. Farm schools offered basic reading, writing and arithmetic courses through the medium of Dutch (English teaching was only introduced early in the 19th century).

Various efforts were initiated by both settler communities and Dutch church organisations in the Netherlands to maintain close religious ties. Church services were conducted in Dutch and hymns and psalms were sung in Dutch – a practice that was followed until deep into the 20th century, when Afrikaans translations were made available. For many generations up to the Second World War, most Afrikaans-speaking scholars of theology preferred to study at Dutch universities such as Amsterdam, Leiden and Kampen. Afrikaans-language universities such as Stellenbosch, Pretoria, Bloemfontein and Potchefstroom tended to appoint Dutch-trained doctorandi as faculty members. It was only after the Second World War that interaction with British and American universities became more common.


Land Tenure, Town Planning and Building Styles

The entrepôt at the Cape was not intended for colonial settlement. Nevertheless, the cultural landscape of VOC Cape Town was soon covered with an assortment of places, spaces and structures where people lived, worked and entertained themselves. (See Antonia Malan, “The Cultural Landscape” in Nigel Worden, ed. op. cit., pp.1-25)

According to Antonia Malan, the conceptual model for late 17th century European town and country planning was formality and symmetry. This pattern was repeated throughout the VOC world with minor adjustments to suit local conditions such as topography and the location of fresh water and raw materials for building. The peculiar traits of the Cape development pattern were practical structures such as a fort, a garden, a hospital, a church, VOC stores and workshops and a cluster of simple dwellings. The public buildings were unremarkable. The main axial street was named ‘Heerengracht’. The original fort was replaced in 1699 with the Castle of Good Hope, and areas were set aside as a market for fresh produce (Groente Markt), an outspan for livestock (Boeren Plijn) and a seat for local administration (Burgher Wachthuis). The grid system was used for street layout and for allocating land for dwellings and commercial use, based on rectangular blocks divided into lots. Each block had its houses arranged around the outside and facing the street, with annexes, outbuildings and backyards tucked behind.

For privately owned farms (plaatsen) and market gardens (tuinland) in Table Valley, the pattern of land allocation was different from house erven as it was premised on ensuring access to water and needing to adapt to the topographical ridges and valleys. The early land grants were of varied extent and asymmetrical shapes. Optimal use was made of the sloping terrain for irrigation and water supplies. Allowance had to be made for reservations (doordrift) between private properties to permit public access to resources such as washing places, grazing and fuel supplies. Gradually new land grants were made and new species of trees such as oaks, pines and poplars were introduced.

Central to the process of transformation of pre-colonial hunter-gathering and pastoral grazing was a system of allocating, granting and registering land in the ownership of individuals. A cadastral system was introduced to register and record property ownership. Three types of land grants were introduced: a lot (erf), garden land (tuinland) and farm (plaats). Land tenure was in the form of freehold grants and transfers – often with special conditions or servitudes stipulated in the registered deeds. Urban town lots were laid out in a chequerboard pattern whereas garden and farm land was more organically fitted to the terrain and water points.

The task of official land surveyors and chart-makers was to measure out pieces of land, demarcate their boundaries, make a diagram of each land parcel, measure and define its extent and give it a numerical designation. Each parcel of land was systematically recorded – including its ownership and the price paid. Little is known about how and why people originally came to be allocated land or how properties were evaluated. Some people owned more than one property or were involved in several property transactions.

Initially building materials at the Cape were limited to stone, shell lime, clay and reeds. Indigenous timber suitable for timber framing was scarce. There was little fuel available to bake bricks or clay tiles. The distribution of wood and thatching reed was controlled. Skilled builders were scarce and single-storey structures predominated. The first substantial dwelling homes reproduced European and Eurasian styles characterised by a ‘groot kamer’ layout associated with a big multi-purpose living room with a hearth (groot kombuis) behind an asymmetrical facade. Wealthy locals and officials preferred the style of the ‘double house’ or ‘heerehuis’ as exemplified by Van der Stel’s Groot Constantia or Vergelegen and the garden property Leeuwenhof.

According to Antonia Malan, the single-storey, thatched and gabled ‘Cape’ style that evolved by the middle of the 18th century was the result of a combination of factors: climatic conditions, the limited availability of building materials, a desire for symmetry and the social organisation of domestic households. The end result was a symmetrical layout of rooms with a central entrance room (voorhuis) and a large inner room (galdery) at the core of the house. The style became common from the 1760s. The climate had a part in dictating how the layout of dwelling houses became adapted to local conditions: hot and windy summers and cold, windy and wet winters required effective controlled ventilation and substantial wind- and waterproofing. Further elaborations of the Cape Dutch style on rural estates in the south-western Cape after the 1750s were linked to increased social stratification and the emergence of a rural gentry. Town houses diverged from their rural counterparts into a plan more suitable for urban spaces: compact double- or triple-storey structures which were more fire-resistant, with flat, plastered and tarred or red-tiled roofs.

The Development of the Afrikaans Language

A universal feature of language is the development of regional accents, idioms and dialects. Of the roughly 12 Dutch dialects, the VOC promoted ‘Hollands’, the dialect of the province of Holland, where the company’s headquarters in Amsterdam were located. The Dutch officials and original settlers who came with Van Riebeeck were mostly from the province of Holland, but others followed from other dialect areas. In time, many immigrants arrived from Germany, France, Scandinavia and Britain. Many slaves were brought from Malaya, the Indonesian islands, India and Madagascar. Finally, indigenous groups also became part of the language community: Khoikhoi (Hottentot), San (Bushmen) and Bantu (Xhosa, Zulu, Sotho, etc.). All members of the language community played a part in the development of a new lingua franca.

It is important to realise that each other-language speaker was expected to understand and communicate in the official language of the Cape, which was Hollands. Inevitably their best efforts resulted in some form of ‘broken Hollands’ – a simplified, creolised, restructured form of Dutch which dropped certain inflections and vocabulary items, modified vowel sounds and incorporated loan words from other languages. This dialect – sometimes referred to as a form of ‘pidgin’ Dutch – which originated as a medium of oral communication between burghers, slaves and indigenous employees, would in time become a distinct language: Afrikaans, with its own rich vocabulary, structured grammar and extensive literature.

Each other-language speaker who tried to speak Dutch (whether German, French, Malayan, Indian or other) also brought along the baggage of their own accents, idioms, terminology and expressions. All contributed to the restructuring process. This process is well described in E.H. Raidt’s remarkable study Afrikaans en Sy Europese Verlede, Nasou Beperk, 1991, p.231 (in translation): “It almost goes without saying that – in a speech community such as the one at the Cape – language variation played a prominent role, especially if one takes into account that Early Afrikaans was exclusively a spoken language not yet subject to direct standardisation. Language variation in phonology, morphology and syntax, individual and group varieties and variants were the order of the day ... The variants were the result of inherited Dutch dialect forms as well as the imperfect Cape Dutch of slaves, Khoikhoi and immigrants ...”

The restructuring of the Dutch language continued over several generations, especially in rural areas and on farms where the locals were less exposed to what was called ‘Hoog Hollands’. But because the Dutch Statenbijbel was widely read by burghers on the frontier, religion, combined with the strenuous efforts of the church, played an important role in preserving the influence of Dutch.

It should be clear that Afrikaans is an offshoot of the Dutch language which emerged out of the Dutch spoken by other-language speakers. Similarly, the Dutch dialects that were brought to the Cape by the VOC were themselves offshoots of earlier Germanic languages. It is no surprise that some Dutch visitors to the Cape in 1750 commented on the ‘broken Dutch’ (verminkte Hollands) spoken by the locals. But even today, more than two and a half centuries later, any educated Afrikaans-speaker can easily understand any piece of written ‘Algemeen Beschaafd Nederlands’. Understanding the various spoken Dutch dialects is a different matter! In South Africa today, Afrikaans is the mother tongue of around six million people and the second language of around another ten million.

Roman-Dutch Law

Through much of West European history since the beginning of the Christian age, the refined legal concepts of Roman law were combined with local customary law. The process, referred to as the ‘reception of Roman law’, reached the Netherlands in the 15th century. During the 17th and 18th centuries this body of law was systematised by eminent legal scholars such as Grotius (1583-1645) and Voet, and became formally known as Roman-Dutch law. This legal system and its terminology were applied by the Dutch in their colonies, where it has survived to this day.

Today the South African legal system is a hybrid of three distinct legal traditions: a civil law system inherited from the Dutch, a common law system inherited from the British and a customary law system inherited from indigenous Africans. These traditions have a complex interrelationship, with the English influence most apparent in procedural aspects and methods of adjudication and the Roman-Dutch influence most visible in its substantive private law.

(See du Bois, F. (ed.), Wille’s Principles of South African Law, 9th ed., Cape Town, Juta & Co., 2007; Pont, D., “Die Reg van die Afrikaner” in Pienaar, P de V. (ed.) Kultuurgeskiedenis van die Afrikaner, Nasionale Boekhandel, 1968)

The relationship between Roman-Dutch law and English law in the current South African legal system is described more specifically by R.W. Lee and D.V. Cowen in the Encyclopaedia Britannica in the following terms: “Constitutional law and administrative law have developed along English lines. The law of procedure and evidence is almost wholly English, as is most law relating to business associations and such areas as patents, trademarks, copyright, insurance and maritime operations. On the other hand, criminal law is a combination of elements from Roman-Dutch and English common law sources. In the law of succession, the rules governing the making of wills are English, whereas the substantive law of testamentary and intestate succession is largely Roman-Dutch. The law of persons and the law of property are almost purely Roman-Dutch, and the principles of the law of contract and the law of delict are Roman-Dutch, only mildly influenced by common law.”


(See also Verloren van Themaat, “Die Republiek se Staatsreg en sy Historiese Grondslag” in Pienaar, P de V (ed.), Kultuurgeskiedenis van die Afrikaner, op.cit. pp.252-257.)

Government Institutions and Political Participation

Like the United Kingdom, the Netherlands is today governed by an elected parliament under a constitutional monarchy. This system evolved gradually from the 17th century onwards, but was interrupted by the French occupation of the Netherlands from 1795, which lasted until Napoleon’s defeat in 1815.

Governmental power at the Cape was exercised by the authorities appointed by the VOC. The governor and the Council of Policy, consisting exclusively of senior officials, ruled the Cape, subject to instructions from the Council of Seventeen in Amsterdam and the governor-general in Batavia. The Court of Justice served as the judiciary, with officials (the majority) and burghers as members – all appointed by the governor. Church ministers were salaried officials and the Church Council was jointly appointed by the governor and the Council of Policy.

Corruption was a way of life in the VOC. But competition and conflict arose between the successful settlers and the senior VOC officials (including the governors), who enriched themselves through the possession of large blocks of the best arable land, cattle ranches and slaves, and used their official positions to control access to shipping and external markets. This conflict came to a head in 1705 when Governor Willem Adriaan van der Stel exploited a wine concession to his advantage. Sixty-three free burghers signed a petition denouncing the officials and forwarded it to Amsterdam. The officials responded by getting 240 signatures to a counter-petition. Eventually the Lords Seventeen dismissed the governor and three other senior officials. It was the first recorded successful civic action in South Africa originating at grass-roots level.

The tensions between the corrupt officials and the burghers again came to a head in the 1770s. In 1774 an official named Hendrik Boers, who had little understanding of the aspirations of the burghers, became Fiscal (Chief Prosecutor). He insisted that the company had the right to recall recalcitrant burghers to its service and to send them wherever it chose. In the first few years of his regime, Boers re-enlisted no fewer than seventeen burghers and dispatched them overseas. After the banishment in 1778 of the eighteenth burgher, Carel Hendrik Buytendach, the burghers felt compelled to contest the use of banishment and what they termed ‘arbitrary despotism’. The main spokesmen for the Patriots were fairly wealthy men from in and around Cape Town. In 1778 leading businessmen at the Cape sent two delegations to the Netherlands, appealing not only to the Lords Seventeen but also to the States-General, complaining about the VOC’s trading policies and demanding freedom to trade with foreign ships. More importantly, they demanded effective political representation. The burghers asked for seven seats on the Council of Policy and for half the seats on the Court of Justice, instead of the government’s co-opting of burghers to serve on government bodies. These representatives were to be ‘freely elected’ by the outgoing burgher members. In addition, they sought a clear definition of burgher rights, the codification of laws and the prohibition of banishment of burghers. The Cape Patriots also complained about the trading activities of officials and the lack of free trade, and asked for better prices and the reduction of farm rents.

The historical significance of the Cape Patriots’ petition was the expression of a civic concern for effective political representation, codified laws and an end to the arbitrary exercise of government power. The quarrel became embroiled in the ideological and political struggle in the Netherlands at the time. There, supporters of the status quo were confronted by the Dutch ‘Patriots’ – a group of citizens influenced by the American Revolution, which started in 1776, and by the democratic ideas of the Enlightenment.

Unfortunately the petitions and deputations of the Patriots did not achieve much. Increased burgher representation on the Council of Policy was rejected, but the burghers were granted three additional seats on the Court of Justice. Banishment of burghers was disallowed, and burghers were allowed to trade with foreign ships – but only after the VOC’s needs had been met. After this brief flourish of activity, the Patriots’ movement in the Western Cape soon fizzled out.

By this time events had taken their own course on the eastern frontier of the colony. The ‘trekboere’ (trekboers – literally trekking or migrant farmers) were essentially pastoralists who had moved beyond the districts of Stellenbosch and Swellendam (1745) to the north-eastern limits of the district of Graaff-Reinet (1786). At each of these outposts (Stellenbosch, Swellendam and Graaff-Reinet), the VOC was represented by a ‘landdrost’, a salaried employee. The local administrations of the landdrosts were very rudimentary, with few officials. Consequently a landdrost was obliged to rely heavily upon the unpaid services of prominent local trekboers known as ‘heemraden’ and ‘veldkornetten’. In each district, six heemraden were appointed by the governor from lists prepared by the existing holders of those offices. Besides administering the affairs of the district, the landdrost and heemraden formed a court of justice with minor civil jurisdiction. In each subdivision of a district, a veldkornet, appointed by the ‘landdrost en heemraden’, was responsible for law and order. This meant that the most prominent trekboers had a major say in the conduct of local administration.

Over time the trekboer families spread thinly over a vast area. Leonard Thompson, in A History of South Africa, Jonathan Ball, Cape Town, 2006, p.461, reports that in 1793, of the 13,830 burghers in the Cape Colony, only 3,100 were residing in the vast eastern district of Graaff-Reinet and 1,925 in the Swellendam district. Stellenbosch had 4,640 burghers and Cape Town 4,155.

Since it took the trekboers up to three months to travel from Graaff-Reinet to Cape Town by ox wagon (although perhaps three weeks on horseback), life at the frontier demanded self-reliance and determination. Survival in the frontier areas required and spawned people with distinct character traits: independent-minded, pioneering, fearless, individualistic, strong-minded, secluded, self-confident, and self-contained. Harsh conditions required tough people. They drew their ideas of life largely from their familiarity with the Old Testament. They lived according to the Calvinist principle of rebellion against suppression. They expected and even demanded a voice in the administration of affairs and representation in decision-making authorities. They rejected an official strangle-hold on their freedom of movement and displayed a deep suspicion of government officials. In some remote frontier areas, the activities of local trekboers verged on lawlessness. There are several well-documented examples of unruly behaviour by uncultured ruffians and outlaw types. But, by and large, the frontier communities did not trek into the wilderness to live lawlessly. They continually requested the authorities to establish churches, schools and judicial offices.

For defence the trekboer farmers formed a co-operative institution, the commando. The VOC had initially used its own military personnel in its operations, but from 1715 onwards, commandos consisted exclusively of civilians. They were dependent on the VOC for their guns and ammunition and, in theory, subject to company control. In practice they operated independently against the San and the Khoikhoi. When indigenous bands attacked burgher property during the 1770s over a wide area north of Graaff-Reinet, large commandos retaliated and killed hundreds of the Bushmen in various campaigns between 1774 and 1795.

Leonard Thompson made the following controversial observation in his History of South Africa, op.cit., p.49: “Commandos exterminated adult hunter-gatherers but made a point of capturing children, and before they disbanded they distributed the children as well as the cattle booty among themselves.” Thompson further reports that by the 1770s the trekboers in the eastern frontier zone had started to collide with southward-bound Bantu-speaking Xhosa pastoralists. In 1779 and 1793 major spells of warfare occurred. People were killed on both sides of the conflict, property was destroyed, sheep and cattle changed hands, but the conflict remained unresolved. Because these events strained the relationship between the frontiersmen and the colonial government, which was not seen to be offering sufficient protection, prominent trekboers from the district of Graaff-Reinet drove out the landdrost in 1795 and assumed control. The Cape Town government responded by cutting off their ammunition supply, leaving them exposed to attacks by indigenous tribes.

At this time Dutch control of the Cape was drawing to a close. Not only was the VOC facing bankruptcy, but the political regime of the Dutch Stadhouder, the Prince of Orange (supported by the Orangist party), was also facing total collapse. In the Netherlands the Patriot movement was crushed in 1787 with the aid of Prussian troops. Large numbers of Dutch Patriots emigrated to other countries, particularly to the newly independent USA. The French Revolution released new tensions in the Netherlands during the early 1790s, and in 1795 French cavalry entered Amsterdam and Dutch revolutionaries proclaimed the Batavian Republic. All the institutions of the old order were swept away, including the privileges of the nobility. The Orangist party was routed and the Stadhouder fled to London, from where he asked the British government to send an armed force to occupy the Cape and all other Dutch possessions abroad.

In the frontier districts of the Cape the burghers were already stirred by anti-royalist ideas coming out of the American War of Independence, which started in 1776 and ended in 1783. These sentiments were further bolstered by radical ideas coming out of France. There was widespread discontent amongst the burghers about the VOC’s control of trade and the heavy taxes levied. In the frontier areas they were incensed by the lack of protection against the constant harassment by the Bushmen and Xhosa tribesmen. They felt the local landdrost in Graaff-Reinet did not take their grievances seriously enough. These feelings led to the overthrow of VOC rule in the district of Graaff-Reinet in February 1795, followed by the district of Swellendam in June 1795. Loyalties in the districts of Stellenbosch and Cape Town were divided: some favoured the royalist Orange Party, others the republican revolutionaries. Most people seemed to be confused by the intricate patterns of conflict emerging out of the French Revolution.

In September 1795 a British force occupied the Cape, where it encountered a deeply divided white community. Most of the top officials were Orangists (anti-revolutionary and pro-Britain) and most burghers pro-France, pro-revolution and anti-Britain. The British force soon intimidated aspirant revolutionaries. By August 1796 the burgher communities of Swellendam and Graaff-Reinet had decided to accept British authority, and the remaining rebels capitulated in 1797. In 1803 the Cape was returned to the Netherlands, which, now a protectorate of France, was called the Batavian Republic. The Cape remained under Dutch control until 1806, when it was again conquered by Britain.


On January 10th, 1806, Governor Janssens signed the whole settlement over to the British. This date marks the beginning of British imperial rule in Southern Africa, which lasted for more than a century. But it is important to realise that by this time a distinctly Dutch civic temperament was already afoot – one that took almost a century and a half to morph into a distinct Afrikaner national identity, and that remained a prominent factor on the South African political scene for many generations afterwards.

The whole colonial population, descendants of Dutch, German and French Huguenot settlers, was relatively small – no more than around twenty-five thousand in all, scattered across a territory of 100,000 square kilometres. The colony also included some 20,000 slaves imported from Africa and Asia, as well as thousands of Khoikhoi, aboriginal pastoralists commonly called ‘Hottentots’. The total population of the Cape Colony at that stage was no more than about 75,000.

The British Influence

At the beginning of the 17th century, the British Isles had been unremarkable in many ways: economically, culturally, politically and strategically. Yet three hundred years later, Great Britain had acquired the largest empire the world had ever seen. It encompassed forty-three colonies in five continents. It held sway over around one quarter of the world’s land surface and roughly the same proportion of the world’s population – some 444 million people in all lived under some form of British rule.

British colonisation was a vast movement of peoples, unlike anything before or since. Some left the British Isles in pursuit of religious freedom, some in pursuit of political liberty, and some in pursuit of profit. Others had no choice, but went as ‘indentured labourers’ or as convicted criminals. Between the early 1600s and the 1950s, more than 20 million people left the British Isles to begin new lives across the seas. No other country came close to exporting so many of its inhabitants.

An important role was played by voluntary, non-governmental organisations such as evangelical religious sects and missionary societies, all of which helped pave the way for the expansion of British influence. The British came close to establishing the first ‘effective world government’, and achieved this with a relatively small bureaucracy that roped in indigenous elites. The use of military force was a key element of British imperial expansion. The central role of the British navy was evident around the world: first in its pirate role and later also as transporter of soldiers to the far ends of the world.

In the course of empire building the British had robbed the Spaniards, copied the Dutch, beaten the French and plundered the Indians, transported many thousands of slaves to the Americas and then led the drive to abolish slavery. Britain led the ‘Scramble for Africa’ and had also been in the forefront of another ‘Scramble’ in the Far East (Malaya and chunks of Borneo and New Guinea) and a string of islands in the Pacific: Fiji, the Cook Islands, the New Hebrides, the Phoenix Islands, the Gilbert and Ellice Islands and the Solomons.

For many years this vast British Empire featured on maps of the world hung in schools all over the world, its territory coloured an eye-catching red. Not only were millions of people all over the world conditioned by this red-coloured state of affairs, but even the British themselves began to assume that they had the God-given right to rule the world – as J.L. Garvin put it in 1905, “… an extent and magnificence of dominion beyond the natural”. The extent of Britain’s Empire could be seen not only in the world’s atlases and censuses – Britain was also the world’s banker, investing immense sums around the world. By 1914 the gross nominal value of Britain’s stock of capital invested abroad was £3.8 billion, between two-fifths and a half of all foreign-owned assets. (See Niall Ferguson, 2004, Empire – How Britain Made the Modern World, London: Penguin Books, pp.240-244)

For many generations, the British Empire relied heavily on the export of its people, capital and culture – particularly its language, which has become the lingua franca of today’s world. Its influence is still carried along by the predominant socio-economic-political lifestyle of English-speaking countries. It is characterised by its reliance on free enterprise, private ownership, competitive markets, Scottish and English banking, comparatively limited government intervention, a legal system heavily laden with Common Law, representative parliamentary government, constitutional democracy, Protestant Christianity and a pragmatic philosophical orientation.

Winston Churchill gave a more lyrical expression to these sentiments: “What enterprise that an enlightened community may attempt is more noble and more profitable than the reclamation from barbarism of fertile regions and large populations? To give peace to warring tribes, to administer justice where all was violence, to strike the chains off the slave, to draw the richness from the soil, to plant the earliest seeds of commerce and learning, to increase in whole peoples their capacities for pleasure and diminish their chances of pain – what more beautiful ideal or more valuable reward can inspire human effort?” (Ferguson, op.cit., p.xxvii)

The Westminster System

The British system of government dates from the Middle Ages and was gradually transformed from monarchical absolutism, first to limited democracy and eventually to a fully-fledged constitutional democracy based on popular participation. This system became widely known as the ‘Westminster System’ which comprises the following: a hereditary monarch with ceremonial powers as head of state; a parliamentary system of executive power where a prime minister and his cabinet are responsible to a popularly elected parliament within a competitive party system; an independent judiciary, appointed by the head of state as advised by the cabinet; an electoral system based on popular franchise in single member constituencies; public responsibility and accountability by way of free elections at constitutionally based regular intervals; freedom of speech and of political association and activity by individual citizens; and, decision-making by majority vote.

The parliamentary system of executive power has been adopted by all countries that still maintain a constitutional monarchy. But several countries have introduced electoral systems based on some form of proportional representation (e.g. Australia). The unitary system of government has not shown itself to be a suitable answer to the problem of diversity. In countries with extensive land areas, such as Australia and Canada, the idea of a centralised unitary state was replaced by a decentralised federal system on the model of the USA. In the case of India, the problem of diversity was dealt with by partitioning the pre-independence Indian colony into the independent states of India and Pakistan. Ireland was also allowed, after a period of violent conflict, to become an independent country, with Northern Ireland remaining part of the United Kingdom. The Westminster system as such is based on many constitutional conventions which evolved over several centuries. Even in its classic form it is currently undergoing a gradual transformation to accommodate regional sentiments in Wales and growing national sentiments in Scotland.


South Africa imported the Westminster system when the Union of South Africa was formed in 1910, and it was maintained until 1961, when South Africa became a republic. A presidential system of executive power was introduced under the 1983 constitution, although traditional parliamentary procedures remained intact. The presidential system was retained in the 1996 constitution of the New South Africa, which augmented it with an elaborate articulation of civil rights – rights which, in the British tradition, are more generally safeguarded by constitutional conventions.

The British Growth-and-Development Model

Britain was the first industrial nation to come close to the model of a ‘growth-and-development’ society. It had the ability to transform itself and adapt to new things and new ways of doing things. In particular, England had the precocity to increase the freedom and security of its people and to open its doors to migrants with knowledge and skills, such as Dutch, Jewish and Huguenot refugees. Many newcomers were merchants, craftsmen and old hands of trade and finance, and brought with them their networks of religious and family connections.

The Industrial Revolution started in Britain, then changed the world and the relations of states to one another. The goals and tasks of political economy were transformed. The world was now divided between “... a front-runner and a highly diverse array of pursuers.” Britain became a commercial power of considerable potential and the principal target of emulation from the beginning of the 18th century. While Germany was still a collection of squabbling Germanic principalities and France was recovering from the turmoil of the French Revolution, the British Empire was streaking ahead. It took the quickest of the European ‘follower countries’ more than a century to catch up – and to surpass. (See Table below)

Estimates of Real GNP per Capita (Selected Countries in 1960 US Dollars)

              1830   1860   1913   1929   1950   1970
Belgium        240    400    815   1020   1245   2385
Canada         280    405   1110   1220   1785   3005
Denmark        225    320    885    955   1320   2555
France         275    380    670    890   1055   2535
Germany        240    345    775    900    995   2750
Italy          240    280    455    525    600   1670
Japan          180    175    310    425    405   2130
Netherlands    270    410    740    980   1115   2385
Norway         225    325    615    845   1225   2405
Portugal       250    290    335    380    440    985
Russia         180    200    345    350    600   1640
Spain            -    325    400    520    430   1400
Sweden         235    300    705    875   1640   2965
Switzerland    240    415    895   1150   1590   2785
UK             370    600   1070   1160   1400   2225
USA            240    550   1350   1775   2415   3605

(Based on figures provided by David Landes, The Wealth and Poverty of Nations, Little, Brown and Co., London, 1998, p.2322)

But there is another side of the coin. In the process of ‘making the modern world’, Great Britain exacted a heavy price from the millions of people overpowered by its imperialistic conquest.


The World Conference Against Racism, Racial Discrimination, Xenophobia and Related Intolerance, held in Durban, 2001, adopted a resolution which stated “... colonialism has led to racism, racial discrimination, xenophobia and related intolerance ... Africans and people of African descent and people of Asian descent and indigenous peoples were victims of colonialism and continue to be victims of its consequences”.

A recent audio-visual series written and presented by Jeremy Paxman for the BBC, entitled Empire, was introduced with the following commentary: “It was the Empire on which the sun never set, or as some said, on which the blood never dried. At its height Britain ruled over a quarter of the world’s population. Many convinced themselves it was Britain’s destiny to do so. Much of the Empire was built on greed and a lust for power. But the British came to believe they had a moral mission too – a mission to civilise the world. These builders of the Empire were bold, they were adventurous, some were ruthless and some were a bit unhinged ... How did such a small country get such a big head?

The British Empire was not just about conquest, and government and chaps in shorts telling foreigners what to do. It was also about money and profit. It began with a few unscrupulous adventurers and it developed into a vast network that spanned the world ...

Off the coast of China, British traders made a fortune from ships freighted with addictive drugs and they helped themselves to the ancient riches of China. Money flowed to Britain from piracy in the Caribbean and from estates worked by slaves taken from Africa. Empire Trade and Empire Theft helped make Britain a capital of money which it still is today.”

Anglicisation

Britain first invaded the Dutch settlement at the Cape in 1795. At this time the distant outpost at the Cape of Good Hope had become strategically important for Britain in terms of protecting her sea route to India – particularly in relation to the rise of Napoleon during the turmoil caused by the French Revolution. The Cape was subsequently returned to the Netherlands in 1803 and then re-captured in 1806, but Britain only took effective formal possession in 1814-1815, at the end of the Napoleonic wars. The second British occupation turned the remote, often neglected refreshment station of the Dutch East India Company into a fully fledged British colony.

At the time the advance of Britain as a world power bred a conviction amongst Englishmen that what was good for Britain was good for the world. W.W. Bird, Colonial Secretary, wrote in 1822 “… nothing can be right or proper that is not English and to which he is unaccustomed”.

In the Cape the British invasion was resented by the local European-descendant Dutch population. Descendants of an amalgam of primarily Dutch, German and French settlers over five generations since 1652, these white South Africans (later known as Boers) did not speak English and bore no allegiance to the British Crown or to the aspirations of the British people. They were highly independent-minded republicans with a strong cultural partiality towards continental Europe. Most were established farmers; many were cattle graziers in outlying areas.

The Dutch-speaking colonists were soon confronted with a comprehensive Anglicisation policy. It changed the official language to English in the courts, government offices, postal services, official communications and schools. It introduced the London Missionary Society and versions of British Protestantism. The English authorities even imported Calvinist Scottish clergymen to inculcate pro-British sentiments.

In 1827 the traditional courts of ‘landdrosts’ and ‘heemraden’ and the judicial powers of the field cornets were abolished. Local democracy, as it had existed before, suddenly disappeared and was replaced by appointed British officials. The law, and its enforcement, was kept entirely in the hands of what the burghers considered alien authorities who legislated in English.

The most important component of anglicisation involved the immigration of thousands of British settlers to strengthen the British component of the population. The first wave of around 4,000 British settlers arrived in the 1820s and was located in the border areas of the Eastern Cape. The settlers included parties from England, Scotland and Wales. Most were farmers and artisans, with a sprinkling of professionals – mostly soldiers, teachers and clergy. They were a literate community and they played an important role in the struggle for press freedom, the development of independent thought and the pressure for self-government in the Cape Colony. Existing towns in the Eastern Cape were penetrated by the English-speaking settlers, many of whom set themselves up as shopkeepers, businessmen and professionals. Others became farmers in the border region, who, according to historian T.R.H. Davenport in South Africa a Modern History, Macmillan, Johannesburg, 1978, p.31, developed “... the physical and moral toughness and the harder race attitudes common to the inhabitants of turbulent frontier districts, yet helping to bridge the gap between colonial and African society, which warfare had tended to widen, and unobtrusively to weaken the bonds of African society through trade, missionary activities, and, increasingly, the employment of Africans as labourers within the settlement ... Though the settlers’ relationship with the Afrikaner frontiersmen remained harmonious, the 1820 settlers retained a self-awareness which the Huguenots had largely lost. This they were able to do because the Somerset regime sought to replace Dutch by English in all spheres of public life ... British policy was to adjust the cultural life of colonial society to the legal realities of British rule. English, stated the proclamation of 1822, was to be exclusively used in the courts after 5 years. Special incentives were given to teachers who taught through English.
In due course, English would also become the sole language of the legislature ... British institutions made inroads into the commercial as well as the political and cultural life of the colony.” Davenport also reports that Afrikaans propaganda caricatured the English culture as that of “soakers, robbers and reds” – meaning canteen-keepers, shopkeepers and red-coats.

In time additional waves of British immigrants arrived and settled in the province of Natal (the Byrne Scheme of 1849-50 brought some 5,000 English and Scots), around Kimberley after the discovery of diamonds, and in Johannesburg following the discovery of gold on the Witwatersrand in 1886. Approximately 400,000 White immigrants entered South Africa between 1870 and 1900 – more than the entire 1870 White population of 340,000. They filled most of the available openings in the upper ranks of the labour market. (See Giliomee, The Afrikaners – Biography of a People, op.cit., p.185)

After the Anglo-Boer War, Lord Milner, determined to convert the Transvaal into a “thoroughly British” domain where “British interests, British ideas, British education” would prevail, introduced a large-scale immigration of people of British descent. By 1910, the South African population stood at 5,878,000 with 3,956,000 Africans; 1,257,000 Whites of whom about 700,000 were Afrikaners and 557,000 English; 517,000 Coloureds and 148,000 Asians.


The other principal method Milner’s government intended to use in the Anglicisation campaign was to make English the chief medium of instruction in state schools. He ruled that “Dutch should only be used to teach English and English to teach everything else”. English was duly established as the medium of instruction.

In retrospect, the Anglicisation policies that came to an end with the formation of the Union of South Africa in 1910 established English culture in a predominant position in South Africa. But the triumph of British imperialism came at a huge cost to the lebensraum of Afrikaans, the offshoot of Dutch culture. It divided the Afrikaans- and English-speakers into two hostile camps and brought huge destruction of life and property during the Anglo-Boer War.

The Emancipation of Slaves

It is an irony of history that the same navy that transported millions of slaves to the Americas was also deployed to abolish the slave trade. It was also instrumental in expanding the narcotics trade. Between 1662 and 1807 nearly 3.5 million Africans came to the New World as slaves transported in British ships – over three times the number of white migrants in the same period. By 1700, Liverpool was sending 33 shipments a year on the triangular trip from England to West Africa to the Caribbean. John Newton, the composer of the hymn Amazing Grace, was a captain of a slave ship. In 1740, James Thomson wrote the famous song Rule Britannia with its stirring words “Britons never, never shall be slaves”.

By 1770, Britain’s Atlantic empire seemed to have found a natural equilibrium. The triangular trade between Britain, West Africa and the Caribbean kept the plantations supplied with slave labour. The American colonies kept them supplied with victuals. Sugar and tobacco flowed back to Britain, a substantial proportion for re-export to the Continent. The profits from these New World commodities oiled the wheels of the Empire’s move to a new frontier – the Asian commerce.

Anti-slavery sentiments amongst colonists in the USA were first openly expressed as early as the 1680s. The Quakers of Pennsylvania were speaking out against it, arguing that it violated the biblical injunction of Matthew 7:12 “… do unto others as you would have others do unto you”. But it was only in the 1740s and 1750s that the ‘Great Awakening’ in America spread such scruples into wider Protestant circles. By the 1780s the campaign against slavery gained enough momentum to sway legislators. Slavery was abolished in Pennsylvania in 1780 – an example followed by a number of other northern states.

In Britain the slave trade was abolished in 1807. Henceforth, convicted slave-traders faced transportation to Britain’s penal colony, Australia. Once the slave trade was abolished, slavery itself could only wither until, in 1833, it was made illegal in British territory. The slave owners of the Caribbean were compensated with the proceeds of a special government loan. The ban did not put an end to the trans-Atlantic slave trade or to slavery in the Americas. Slavery continued on a smaller scale in the southern United States, and on a far larger scale in Brazil. All told, around 2 million more Africans crossed the Atlantic after the British ban, most of them to Latin America. However, the British did put in a lot of effort to disrupt this continuing traffic.

A British West African Squadron of 30 warships was sent to patrol the African coast from Freetown, with bounties offered to naval officers for every slave they intercepted and liberated. In 1840 the Royal Navy intercepted no fewer than 425 slave ships off the West African coast. The British Parliament made the slave trade illegal throughout the empire in 1807, shortly after the re-occupation of the Cape. In 1816 a slave registry was introduced; minimum standards for food, clothing and hours of work, and maximum punishments, were laid down in 1823; and the compulsory recording and limitation of punishments followed in 1826. Slaves were set free throughout the British empire under a law of December 1833. This Act allowed for a period of four years’ apprenticeship for domestic slaves before they were free to leave their masters’ service. The Act had been drawn up with West Indian conditions chiefly in mind, and Cape owners felt themselves not only under-compensated for the value of their slaves, but unable to draw the compensation money, which was payable only in London. Emancipation took effect in 1838-40.

Racial Differentiation, Segregation and Territorial Partition

Under the Dutch East India Company and under British colonial rule some degree of racial differentiation and segregation was practised in various aspects of societal life. Simultaneously, the forces of integration also operated in many fields without much calculated planning or prohibition. On the social level there was a fair degree of mixing between the early single male colonists, sailors and soldiers on the one hand and, particularly, slave women on the other. Many well-known Afrikaans families have, according to genealogical research, a slave component of at least 7 percent in their ancestry. Today there are in total more than 5 million ‘Coloureds’ (people of mixed descent) with Afrikaans or English family names in South Africa.

During the Dutch colonial period, the conflict between colonists and indigenous groups was largely limited to Khoikhoi and San groups. To some degree runaway slaves penetrated the Khoikhoi tribes to form groups of semi-nomadic pastoralists, known as ‘Korana’, in the Northern Cape along the banks of the Orange River. Korana society was cattle-based and loosely organised. Access to pasture and hunting grounds was critical to their existence. Cattle-raiding and thieving were common amongst themselves, as well as from any other groups who happened to be within their reach. (See Couzens, T. (2004), Battles of South Africa, Claremont: David Philip, pp.141-157)

During the later stages of Dutch rule, efforts were made in 1780 by the Dutch governor of the Cape, Joachim van Plettenberg, to resolve boundary disputes between the Boer frontiersmen and the Xhosa. But his efforts were unsuccessful. Many farmsteads were burned down by the Xhosa and cattle driven away. A commando system was devised by the Boer frontiersmen to retrieve their cattle, and a ‘laager’ technique was developed to protect families against enemy forces: it involved drawing wagons into a circle with thorn bushes thrust between the openings. After the Dutch East India Company collapsed in 1798 under a burden of debt, it was only a matter of time before the colony was captured by the British.

An important geopolitical outcome of British Imperialism was the drawing of territorial frontier lines between various ethnic communities: often rivers, mountain ranges or watershed escarpments. As the colonists moved eastwards and the Xhosa tribes migrated southwards, cross-border conflicts were inevitable. The British colonial powers instituted a ‘track and reprisal’ system to deal with cross-border cattle theft. Continued cattle raids led to a series of frontier wars with the Xhosa.

When the British annexed Natal in 1842 frontier lines were once again drawn between English colonists and the Pondos in the south and the Zulus in the north-east – creating the system of ‘native reserves’ which ultimately became known as ‘homelands’. By 1886 most of the Bantu peoples had been conquered by the British along the eastern seaboard and by the Boer Republics of the Orange Free State and the Transvaal on the central plains.


But the British had a cunning trick up their sleeve. The Colonial government preferred to pretend that its northern border did not exist and kept on annexing inland areas inhabited by the Voortrekkers (Boer pioneers). After the Voortrekkers left the Cape Colony in the 1830s, they established their inland republics in 1840 (Natalia), 1852 (Transvaal) and 1854 (Orange Free State). To cut off any possibility of the Boers establishing alliances with foreign powers, Natal was annexed by the British in 1842.

Following the discovery of diamonds near the Orange-Vaal confluence in the late 1860s and of gold north of the Limpopo, the British Colonial Office in London developed a strategy to incorporate the Boer Republics in a federal structure. Shepstone was sent to annex the Transvaal in January 1877. The Boer commandos were called up and defeated the British army under Genl. Colley at Majuba in February 1881, regaining the ZAR’s independence.

The British invented a constitutional smokescreen called ‘suzerainty’ to formally recognise the Boer Republic’s independence, subject to reserving ‘certain portions of sovereignty’ to the British Crown. Gladstone defined these in the Commons as “those which relate to the relations between the Transvaal community and foreign countries”. (See Davenport, op.cit., p.132)

The major European powers were invited by German Chancellor Bismarck to a conference in Berlin (December 1884 to April 1885) to reach consensus on the rules applicable to future expansion of colonial occupation in Africa. What was sought was a cheap way of acquiring African empires, with frontiers that would command mutual recognition, even if occupation was minimal. The instruments to be used were ‘treaties of protection’ with African rulers and ‘charters’ given to specific commercial enterprises.

During the decades that followed the Berlin conference, the partition of African territories into European spheres of influence proceeded unabated. The Western bulge of Africa became France’s sphere of influence. In the south and on the eastern side, Britain ruled. Portugal, Germany, Belgium and Italy also had their portions recognised. (See Roland Oliver (2000), op.cit., pp.199-205)

Within this framework, Cecil Rhodes could proceed with his Kimberley diamond and Johannesburg gold projects as well as with the British South Africa Company acquisitions north of the Limpopo. When Rhodes was elected Prime Minister of the Cape Colony in 1890, he also added the portfolio of Minister of Native Affairs to his duties. In this capacity he formulated what he called “a Native Bill for Africa”. This involved a programme to divide communal land into four-morgen farms owned on an individual basis, forcing younger sons in each family to seek work. Whites would be barred from buying land in Native areas and natives would not be mixed up with whites. He envisaged that this system would be extended to all areas in the colony and be used as a template for other states in Southern Africa. (See Meredith, M., 2007, Diamonds, Gold and War – The Making of South Africa, Johannesburg: Jonathan Ball Publishers, pp.259-269)

Economic Policies


The discovery of diamonds in 1866 and gold in 1885 provided the first major impetus for South Africa’s economic development. The diamond and gold fields were situated in the undeveloped interior and required heavy capital investment, heavy equipment for deep-level mining, transport infrastructure, large forces of organised labour and adequate management skills to extract, process and market the mineral finds. These factors were responsible for the establishment of a rail system, the opening of coal fields for the generation of electricity, and the establishment of urban concentrations, commercial farming and manufacturing interests in the interior. Although the relative importance of the mining sector declined during the 20th century, gold remained the single most important product in the South African economy. For much of the 20th century South Africa produced a large share of the world’s gold output, and gold long remained the country’s most important earner of foreign exchange.

During the second half of the 19th century, Cecil John Rhodes acted as the chief impresario of economic development in Southern Africa. There were many other players: his partner Charles Rudd; his financiers in London, the Rothschild Bank, pulling the strings behind the scenes; the other Jewish magnates, Barney Barnato, Jules Porges, Sammy Marks, Alfred Beit and Isaac Lewis; and the Randlords, J.B. Robinson, Julius Wernher, Adolf Goerz, George Albu, Sigismund Neumann, Lionel Phillips and Percy Fitzpatrick. Some of these magnates were subsequently organised into mining and investment clubs and companies such as Kimberley Central, De Beers Consolidated Mines, Rand Mines, Phoenix Diamond Mining, Consolidated Gold Fields, the British South Africa Company, Randfontein Estates Gold Mining and the Johannesburg Consolidated Investment Company. Behind the scenes Rhodes orchestrated the imperial political accompaniment. He believed that what was good for his business interests was good for the British Empire, and vice versa.

Large-scale deep-level mining brought major changes to the organisation of the labour force. Skilled and experienced miners were recruited from the coal mines of Cumberland and the tin mines of Cornwall; shaft sinkers came from Lancashire, artisans from the factories of Scotland and England. The foundations were laid for the trade union movement and soon the ‘Uitlander’ (foreign) franchise became a major bone of political contention. Large numbers of Black people were drawn to the mining centres around Kimberley and Johannesburg. Regulations under the Mining Act of 1883 determined that all ‘native’ work in any mine, whether in open or underground workings, shall be carried out “under the supervision of a European ... as his master or ‘baas’”. (See Meredith, op.cit., p.156). Black recruits were required, from 1885, to agree to six- to twelve-month contracts and to be housed in fenced and guarded compounds in order to give mine-owners greater control of their labour force. By 1889 over 10,000 Black mine workers in Kimberley were accommodated in segregated, closed compounds.

The Missionary Societies

The missionary societies were, in effect, aid agencies bringing both spiritual and material assistance to the less developed world. The Society for Promoting Christian Knowledge (1698) and the Society for the Propagation of the Gospel (1701) were initially exclusively concerned with the spiritual welfare of British colonists and servicemen posted overseas. By the late 18th century the movement had changed its focus to the conversion of indigenous peoples to Christianity.

The London Missionary Society was formed in 1795. It was followed in 1799 by the Anglican Church Missionary Society, founded to “propagate the knowledge of the Gospel among the Heathen”. In the same period Scottish missionary societies were established in Glasgow and Edinburgh – all infused with a strong sense of philanthropy. Missionary stations were developed by the London and the Wesleyan Societies in the Eastern Cape at Bethelsdorp near Port Elizabeth, and at Kuruman in Bechuanaland. Dr. John Philip, the head of the London Missionary Society in South Africa, and his son-in-law, John Fairbairn, editor of the Commercial Advertiser, exerted strong influence on the formulation of colonial policy and public opinion in England.

David Livingstone was an important pathfinder for Britain’s advancement into the interior of Southern Africa below the Great Lakes. He was sent to Kuruman, which became the base for his exploratory walks into the interior. He became famous as the first white man to reach the natural wonder on the Zambezi River which he named the Victoria Falls. In a famous lecture at Cambridge University’s Senate House in 1857 he stated that commerce and Christianity should be inseparable as instruments of ‘civilisation’, rather than colonisation. He believed that the commercial development of Africa could coincide with its religious conversion. His personal aim was “to open a path” to the healthy highlands of Central Africa. After Livingstone secured government backing, he embarked upon an expedition up the Zambezi River in order to demonstrate its navigability and suitability for commercial traffic – hoping that it might result in an English colony there. But reality prevailed as it soon became apparent that the river was not navigable. With that, the project to penetrate Africa with commerce, civilisation and Christianity was sunk. Altogether Livingstone spent twenty years criss-crossing the region between the Zambezi, Lake Malawi and the upper Congo. He joined the battle against slavery and kept on trudging through the African jungle in search of the true source of the Nile. He died of malaria at Ilala in May 1873, a Victorian man of iron who did not know how to give up.

Throughout Africa, the missionary societies played a significant role in paving the way for British enculturation. In Southern Africa today, close to 60 percent of black Africans claim to associate themselves with the Christian faith. As stated by Niall Ferguson: “Commerce, Civilisation and Christianity were to be conferred on Africa, just as Livingstone had intended. But they would arrive in conjunction with a fourth ‘C’ – conquest.” (Ferguson, op.cit., p.162)

The Subordination of Black Tribes

The subordination of the many black tribes to British colonial rule took many battles between the ‘red coats’ and the black ‘impis’, lasting until the final years of the 19th century. The Xhosa and Zulu tribesmen, in particular, were no easy pushovers. Apart from localised conflicts with nomadic Khoikhoi tribes, a major confrontation was building up on the eastern frontier with a large southward migration of Bantu-speaking tribes. On the eastern seaboard were the Xhosa tribes, forming the southern tip of the Nguni group, pushed further south by the Nguni mainstream, the Zulu tribes, who occupied areas further north-east along the eastern seaboard. The central highveld regions along the spine of the country were occupied by Sotho-speaking tribes who were driven westwards by the militarily powerful Zulu tribes.

On the eastern border of the Cape Colony the Colonial Office tried in vain to implement a segregation policy (based on the drawing of boundary lines documented in treaties) between the colonists and the black tribesmen. Cattle raids and border conflicts continued. From 1812 to 1852 no fewer than eight fully fledged border wars were waged between British soldiers and the Xhosa. Eventually, the Xhosa territory called the Transkei was annexed and the Xhosas became British subjects. Sir George Grey, transferred from New Zealand, became Governor of the Cape in 1854 to implement a ‘civilising policy’, which meant westernisation and an end to the traditional ruling power of the chieftains. They surrendered their power in exchange for a monthly salary. Other scattered tribes such as the Basutus and the Griquas were brought under treaty and placed in demarcated ‘reserves’.

The British annexation of Natal in 1843 caused the scattered Boer frontiersmen to trek once more across the Drakensberg mountains into the Orange Free State and the Transvaal. It was now left to the British forces to subordinate the Zulu empire. Under the influence of Theophilus Shepstone the Colonial Office set aside eight ‘locations’ (reserves) to settle thousands of refugees who re-entered Natal after being driven away during the Mfecane era (the destabilisation of African communities in the period from the 1780s to the 1830s as a result of, inter alia, internecine strife and the conquests of Shaka Zulu). In 1864 a Natal Native Trust was established to guarantee black occupants’ rights of ownership inside reserve territory. Additional ‘mission reserves’ were created, controlled by missionary societies. (See David Welsh, The Roots of Segregation, Cape Town: Oxford University Press, 1971, pp.1-30; see also Hermann Giliomee et al., New History of South Africa, op.cit., pp.186-188)

It was decided that Zulu military power constituted a danger to the federal strategy. Hence, in early 1879 a military campaign was launched against the Zulu. At Isandhlwana the British army under Lord Chelmsford was defeated by the Zulus (800 soldiers killed) and, despite a heroic defence of a missionary station at Rorke’s Drift by a handful of soldiers, the campaign failed. Only after reinforcements were mobilised were the Zulus defeated at Ulundi and British pride restored. The Zulu King, Cetshwayo, was captured and exiled to Robben Island at the Cape. The Zulu kingdom was divided into thirteen territories under appointed salaried chiefs. One of them was John Dunn, an Englishman who was said to have had 100 Zulu wives. In 1887 Zululand was annexed to the Crown and in 1897 incorporated into Natal.

The Open-ended Frontier of British Imperialism

The British historian, Roland Oliver, who also edited The Cambridge History of Africa, wrote in The African Experience (op.cit.), pp.204-205, that “If Bismarck had been the presiding genius of the first stage of partition, the British prime minister Lord Salisbury was certainly the key figure in the second. His command of the swiftly developing field of African geography was unrivalled ... By 1891 he had achieved a series of bilateral agreements with Germany, France, Portugal and Italy, in which the broad outlines of the ultimate partition were clearly visible. ... Most activity on the ground was in the form of treaty-making or concession-hunting expeditions, which to the Africans were probably indistinguishable from the caravans of explorers and missionaries which had been criss-crossing their territories since the middle of the century. The boundaries still to be decided were those in the far interior, where simple projection inland from coastal spheres no longer produced unambiguous solutions.”

Following the Berlin conference called by Bismarck in 1884 in order to restrain expansionism, the colonial powers sought to protect their commercial interests as competition for markets grew fiercer. They sought frontiers that would command mutual respect – even if occupation was minimal. Commercial enterprises sought monopoly positions by acquiring quasi-governmental powers as Chartered Companies. The partition of Africa seemed to proceed by the enlargement of coastal enclaves until their boundaries touched and then by the projection inland of spheres of interest based upon coastal enclaves. This process was forged by a series of bilateral agreements between neighbouring European claimants. (See Roland Oliver, “The Drawing of the Map” in The African Experience, London, Phoenix Press, 2000, pp.199-212)


The World of Finance and Military Power

During the ‘Scramble for Africa’ in the second half of the 19th century, the entire continent was brought under some form of European control. Roughly a third of it was British. According to Niall Ferguson, the key to the Empire’s phenomenal expansion in the late Victorian period was the combination of financial power and firepower. His research indicates that most of the huge flows of money from Britain’s vast stock of overseas investments went to a tiny elite of, at most, a few hundred thousand people. At the apex of this elite stood the Rothschild Bank.

The discovery of diamonds near Kimberley in 1866 and gold near Johannesburg in 1885 introduced a new phase in South Africa’s history. It provided a major impetus for economic growth. The diamond and gold fields were situated in the undeveloped interior and required heavy capital investment: transport infrastructure and heavy equipment for deep-level mining. It also required large manpower resources as well as technical and management skills to extract, process and market the mineral finds. These factors were responsible for the rapid establishment of a rail system, the opening of coal fields for the generation of electricity, the establishment of urban concentrations, commercial farming and manufacturing interests in the interior. All of this happened within a time-span of around two decades. (See Martin Meredith, 2007, Diamonds, Gold and War – The Making of South Africa, Johannesburg: Jonathan Ball, pp.247-470)

The Maxim gun was an American invention, but Hiram Maxim always had his eye on the British market. He set up a workshop in London and invited the great and the good to demonstrations of his new weapon. Lord Rothschild joined the board of the Maxim Gun Company, established in 1884, and his bank financed the merger of the Maxim Company with the Nordenfelt Guns and Ammunition Company. The Maxim gun was operated by a crew of four and could fire 500 rounds per minute. A force equipped with just five of these lethal weapons could literally sweep a battlefield clear with its hail of .45 inch bullets. Maxim guns were used in battle against the Matabele, in the Sudan by Kitchener’s expeditionary force at Omdurman on the banks of the Nile, against the Boers (1899-1902) and in the two World Wars of the 20th century. The war correspondent Winston Churchill described the Maxim gun as “… that mechanical scattering of death which the polite nations of the earth have brought to such monstrous perfection”.

Cecil John Rhodes acted in many ways as the impresario of the drama of British Imperial Conquest in South Africa during the second half of the 19th century. Son of an English clergyman, Rhodes came to South Africa at the age of seventeen and proceeded to the Kimberley diamond fields where he soon associated with well-connected financiers and eventually established De Beers Consolidated Mines, the largest diamond miner in the world. He later also established a foothold in Johannesburg’s gold fields and with his ‘Randlord’ associates masterminded the onset of the Anglo-Boer War (1899-1902).

Cecil Rhodes was also the founder of the British South Africa Company, mainly financed by the Rothschild banking group. He secured from the Matabele chief, Lobengula, a concession to develop the gold fields that Rhodes believed existed beyond the Limpopo River. As he wrote to Rothschild in London: “... once we have his territory, the rest is easy.” Rhodes merged the Bechuanaland Company with the Central Search Association for Matabeleland to become the United Concessions Company in 1890, with Rothschild as major shareholder. When Lobengula realised that he had been hoodwinked into signing over much more than mineral rights, he resolved to take Rhodes on. Rhodes responded by sending an invasion force, the Chartered Company’s Volunteers, numbering 700 men armed with a devastating new secret weapon – the .45 inch Maxim gun that could fire 500 rounds a minute. The Rothschilds’ sole worry, according to Niall Ferguson’s research, “... was that Rhodes was channelling money from the profitable De Beers Company into the altogether speculative British South Africa Company.” (Ferguson, op.cit., p.225)

Niall Ferguson quotes a grotesque hymn which Rhodes’s men brazenly adapted as their anthem: “Onward Chartered Soldiers, on to the heathen lands, Prayer books in your pockets, rifles in your hands. Take the glorious tidings where trade can be done, Spread the peaceful gospel – with a Maxim gun.” (Niall Ferguson, op.cit., p.226)

The British South Africa Company was given a royal Charter to occupy and develop the region north of the upper Limpopo. Rhodes was well informed about the mineral resources of the Zimbabwe plateau and also about the copper of Katanga. He sent out his concession hunters and they met those of the Belgian King Leopold more or less on the line of the Congo-Zambezi watershed in 1891. In the wake of these endeavours, Southern and Northern ‘Rhodesia’ came into being. Nyasaland (later Malawi) was a shaving from the north-eastern corner of Rhodes’s empire, which Lord Salisbury took under direct imperial control because British missionaries, already established there in David Livingstone’s footsteps, made plain their antipathy to Chartered Company rule.

In the South African diamond and gold fields, Cecil Rhodes also acted as a front man for the Rothschilds. By 1899 the Rothschilds’ stake in De Beers was twice that held by Rhodes. When Rhodes marched into Matabeleland in 1893 with the British South Africa Company’s private invasion force of 700 men, well supplied with Maxim guns, the Rothschilds provided the financial backing. In 1895 the Jameson Raid on the Transvaal Republic was masterminded by Rhodes (at the time Prime Minister of the Cape Colony) with the full backing of his partners in London, as well as full support from Chamberlain, the Secretary of State for the Colonies in Lord Salisbury’s cabinet. The only condition set by Chamberlain was reassurance that they were “working for the British Flag”.

The conspirators would be armed with rifles and Maxim guns, purchased in Britain, ostensibly ‘for Rhodesia’, shipped to the Cape, transferred to De Beers premises in Kimberley and then to the conspirators. Once in control of Johannesburg, they would declare a provisional government and dispatch a force to seize the government’s arsenal in Pretoria. The British High Commissioner would then intervene. A new era would begin. (See Martin Meredith, op.cit, pp.311-353)

Jameson’s 100 ‘volunteers’ were recruited in Cape Town from the Duke of Edinburgh’s Volunteer Rifles, a local regiment. Jameson’s invasion force of 800 men, despite the firepower of their Maxim guns, was rounded up by Boer forces near Johannesburg. Jameson was arrested and magnanimously released by the Boers for trial in Britain – where he served three months in prison and was later knighted. Both Rhodes and Chamberlain kept their interaction discreetly secret. The British Parliament set up a committee of inquiry into the Raid that was little more than a sham; Chamberlain himself sat on the committee. As summarised by Martin Meredith, “… the Rhodes conspiracy ended as it began: in collusion, lies and deceit”.

Speaking in the House of Commons in May 1896, Chamberlain warned against the possibility of war:


“A war in South Africa would be one of the most serious wars that could possibly be waged … It would be a long war, a bitter war and a costly war … it would leave behind it the embers of a strife which I believe generations would hardly be long enough to extinguish … [it] would have been a course of action as immoral as it would have been unwise”. Yet, Chamberlain himself was to preside over just such a war within three years when he mobilised the British Empire’s force of 500,000 against 40,000 Boer farmers.

The critics of the war argued that not only was imperialism immoral, it was a rip-off paid for by British taxpayers, fought for by British soldiers, but benefitting only a tiny elite of fat-cat millionaires. Rhodes was depicted as an “Empire jerry-builder who has always been a mere vulgar promoter masquerading as a patriot, and the figure-head of a gang of astute Hebrew financiers with whom he divides the profits”. (Ferguson, op.cit., p.284)

Speaking in the House of Commons, David Lloyd George declared: “A war of annexation … against a proud people must be a war of extermination, and that is unfortunately what it seems we are now committing ourselves to – burning homesteads and turning women and children out of their homes … the savagery which must necessarily follow will stain the name of this country”. (Quoted by Ferguson, op.cit., p.283)

The Conquest of the Boer Republics

The thorniest problem for Imperial Britain in the Southern African region was posed by the independence of the two Boer republics, the Orange Free State and the Transvaal. The Transvaal, in particular, became the magnet for investment, immigration, migrant labour and railway development in the whole subcontinent. It was the centre of the world’s richest gold mining industry – no longer the home of 30,000 to 40,000 errant Boer farmers. The burghers of the two Boer republics were the descendants of the Dutch-speaking frontiersmen who decided in 1836-1840 to trek on horseback and in wagons beyond the borders of the Cape Colony into the interior. They were dissatisfied with the unrepresentative British colonial government and troubled by a growing sense of insecurity in the areas bordering on Xhosa territory. As they saw it, Britain failed to protect them against tribal onslaughts, or to let them protect themselves. After 1836 many of them trekked northwards, beyond the Orange River. After military confrontations with the Ndebele and the Zulu and a series of negotiated settlements and treaties, the Boer pioneers established their own self-governing republics in Natal (1842), the Orange Free State (1854) and the Transvaal (1858).

The Boers’ determination to preserve their own way of life in independence clashed with Britain’s grand imperial ambitions. The British Colonial Office invoked its policy of pretending that the northern border of the Empire did not exist, kept on annexing areas occupied by the Boer pioneers and claimed their allegiance as “subjects of the British Crown”. The Boer leaders tried in vain to reach negotiated settlements with the British imperial powers. By the mid-1870s the British Colonial Office in London had developed a strategy to incorporate the Boer republics together with the Cape and Natal colonies into a federal structure. Shepstone was sent to annex the Transvaal in 1877, but the Boer commandos were called up and defeated the British army under General Colley at Majuba in February 1881, regaining the ZAR’s independence. This war is known today amongst Afrikaners as ‘die Eerste Vryheidsoorlog’ – the First War of Independence.

The Boer republics now found themselves in the middle of several convergent forces. The diamond and gold fields contained some of the world’s largest deposits of these minerals. From the interior came large flows of migrant black job-seekers. From around the world came a multitude of fortune-seekers – some were pick-and-shovel diggers, others well-connected financial tycoons. From its commanding heights, the British Imperial Government held the trump cards: financial resources and military power. The interaction of forces that played out over the next three decades coincided with the ‘Scramble for Africa’ by the major European powers. In Southern Africa, the United Kingdom gobbled up the lion’s share.

For the British government the Transvaal problem became exacerbated by the possibility of the Transvaal and Orange Free State Boers seeking protection from Germany – particularly after they built a railway line to the Mozambique coast to avoid dependence on the Cape system. In these circumstances the British deliberately provoked hostilities.

At the start of the Anglo-Boer War, the British garrisons in the Cape and Natal numbered little more than 15,000 men. By its end in 1902 half a million men had had to be brought in from Britain and its overseas possessions. At the end of the war the two Boer republics were completely destroyed: 30,000 farmsteads were burnt down, hundreds of thousands of livestock were confiscated, and women and children were herded into concentration camps, where an estimated 27,000 died of malnutrition and disease. Around 31,000 Boer ‘combatants’ were held in camps as far afield as Bermuda and Ceylon, and only around 17,000 hapless ‘bittereinders’ were forced to lay down their weapons.

For Rhodes, the justification for this destruction was unambiguous: “We are the first race in the world, and the more of the world we inhabit, the better it is for the human race.” He is said to have had a dream of an English Imperial Order along the lines of Jesuit Catholicism.

The person who was left after the war as High Commissioner in Cape Town, with the task of continuing the Anglicisation of South Africa, was Lord Milner. He is reported to have said: “My patriotism knows no geographical but only racial limits. I am an imperialist and not a Little Englander because I am a British race Patriot. It is not the soil of England ... which is essential to arouse my patriotism, but the speech, the traditions, the spiritual heritage, the principles, the aspirations, of the British race.” (See Ferguson, op.cit., p.251)

Among the war correspondents of the Anglo-Boer War, one who also served as a camp doctor was the later famous creator of Sherlock Holmes, Arthur Conan Doyle. After the war he published a major account of the causes, scope and course of events during the war under the title The Great Boer War. On the first page of the first chapter, he wrote the following: “Take a community of Dutchmen of the type of those who defended themselves for fifty years against all the power of Spain at a time when Spain was the greatest power in the world. Intermix with them a strain of those inflexible French Huguenots who gave up home and fortune and left their country for ever at the time of the revocation of the Edict of Nantes. The product must obviously be one of the most rugged, virile, unconquerable races ever seen upon earth. Take this formidable people and train them for seven generations in constant warfare against savage men and ferocious beasts, in circumstances under which no weakling could survive, place them so that they acquire exceptional skill with weapons and in horsemanship, give them a country which is eminently suited to the tactics of the huntsman, the marksman, and the rider. Then, finally, put a finer temper upon their military qualities by a dour fatalistic Old Testament religion and an ardent and consuming patriotism. Combine all these qualities and all these impulses in one individual, and you have the modern Boer – the most formidable antagonist who ever crossed the path of Imperial Britain. Our military history has largely consisted in our conflicts with France, but Napoleon and all his veterans have never treated us so roughly as these hard-bitten farmers with their ancient theology and their inconveniently modern rifles.”

Historian Roland Oliver assessed the Anglo-Boer War in the following words: “The unforeseen price of victory was the forging of an Afrikaner nation strong enough and united enough to win the peace.” (op.cit., p.209). He nevertheless concluded that this war may justly be considered the concluding act in the European partition of Africa. Many frontiers still remained obscure, but the remaining problems were mostly of a detailed nature. The political map of Africa had been drawn. It was only with the approach of post-colonial independence that African populations as a whole really had to face the full implications of national boundaries and broad-based socio-economic development.

Bibliography – Dutch Period

Coertzen, P. (1988) Die Hugenote van Suid-Afrika 1688-1988, Tafelberg Uitgewers, Kaapstad
Couzens, T. (2004) Battles of South Africa, David Philip Publishers, Claremont
Davenport, T.R.H. (1977) South Africa – A Modern History, MacMillan, Johannesburg
De Klerk, W.J. (1971) Afrikanerdenke, Pro Rege Pers, Potchefstroom
De Villiers, J. (2012) “Die Nederlandse Era aan die Kaap” in Pretorius, F. (2012) Geskiedenis van Suid-Afrika, Tafelberg, Kaapstad
Du Bois, F. (ed.) (2007) Wille’s Principles of South African Law, 9th Ed., Juta & Co, Cape Town
Gey van Pittius, E.F.W. (1941) Staatsopvattings van die Voortrekkers en die Boere, J.L. van Schaik, Pretoria
Giliomee, H. (2003) The Afrikaners – Biography of a People, Tafelberg Publishers, Cape Town
Giliomee, H. & Mbenga, B. (2007) New History of South Africa, Tafelberg Publishers, Cape Town
Guest, R. (2004) The Shackled Continent, MacMillan, London
Harrison, D. (1981) The White Tribe, MacMillan, London
Heese, H.F. (2005) Groep Sonder Grense – 1652-1795, Protea Boekehuis, Pretoria
Johnson, R.W. (2004) South Africa – The First Man, The Last Nation, Jonathan Ball Publishers, Johannesburg
Meredith, M. (2007) Diamonds, Gold and War – The Making of South Africa, Jonathan Ball Publishers, Jeppestown
Millin, S.G. (1951) The People of South Africa, Constable & Co., London
Pama, C. (1983) Die Groot Afrikaanse Familie-naamboek, Human & Rousseau, Kaapstad
Pienaar, P. de V. (ed.) (1968) Kultuurgeskiedenis van die Afrikaner, Nasionale Boekhandel, Kaapstad
Raidt, E.H. (1991) Afrikaans en sy Europese Verlede, Nasionale Opvoedkundige Uitgewery, Kaapstad
Schoeman, K. (2002) Die Suidhoek van Afrika – Geskrifte oor Suid-Afrika uit die Nederlandse Tyd 1652-1806, Protea Boekehuis, Pretoria
Theal, G.M. (1917) South Africa, G.P. Putnam’s & Sons, New York
Worden, N. (ed.) (2012) Cape Town – Between East and West, Jacana Media, Auckland Park

Bibliography – British Period


Barthrop, M. (1987) The Anglo-Boer Wars 1815-1902, Blandford Press, Dorset
Carver, M. (1999) The Boer War, Sidgwick & Jackson, London
Davenport, T.R.H. (1977) South Africa – A Modern History, MacMillan, Johannesburg
Doyle, Arthur Conan, The Great Boer War, Scripta Africana Series, Johannesburg
Ferguson, N. (2004) Empire – How Britain Made the Modern World, Penguin Books, London
Giliomee, H. & Mbenga, B. (2007) New History of South Africa, Tafelberg Publishers, Cape Town
Holt, E. (1958) The Boer War, Putnam, London
Landes, D. (1998) The Wealth and Poverty of Nations, Little Brown & Co, London
Meredith, M. (2007) Diamonds, Gold and War – The Making of South Africa, Jonathan Ball, Johannesburg
Millin, S.G. (1951) The People of South Africa, Constable & Co, London
Oliver, R. (2000) The African Experience, Phoenix Press, London
Oliver, R. (ed.) The Cambridge History of Africa, Cambridge University Press
Overy, R. (ed.) (2010) The Times Complete History of the World, Times Books, London
Pakenham, T. (1993) The Boer War, Jonathan Ball Publishers, Johannesburg
Pretorius, F. (ed.) (2001) Scorched Earth, Human & Rousseau, Kaapstad
Pretorius, F. (ed.) (2012) Geskiedenis van Suid-Afrika, Tafelberg, Kaapstad
Schoeman, K. (ed.) (1999) Witnesses to War – Personal Documents of the Anglo-Boer War, 1899-1902, Human & Rousseau, Cape Town
Spies, S.B. & Natrass, G. (1999) Jan Smuts – Memoirs of the Boer War, Jonathan Ball Publishers, Jeppestown


10. East Asian Modernisation and Confucian Capitalism (July 2010)

Confucianism as an ethical and philosophical system developed from the teachings of the ancient Chinese philosopher Confucius (the Latinised name of K’ung-fu-tzu), who lived around 551-479 BC. For long periods it served as the official state ideology of imperial China, beginning under the Han dynasty. The disintegration of Han rule in the second century AD opened the way for the spiritual doctrines of Buddhism and Taoism to dominate the spiritual and intellectual life of China. Confucianism returned in a reinvigorated form during the Tang dynasty, when it was adopted as the basis of the imperial examinations and as the core philosophy of the scholarly official class. In popular practice, the doctrines of Confucianism, Buddhism and Taoism are often melded together and have been embedded in the culture of East Asian societies over many generations.

Official Confucianism ended only when the imperial examination system was abolished in 1905. The New Culture intellectuals blamed Confucianism for China’s weaknesses in comparison with the Western powers. During the course of the 20th century, however, Confucianism revived, and by the late 20th century many observers credited it with the rise of the East Asian economies.

The core of Confucianism is its humanism and its focus on secular life. Confucianism is not about gods or the afterlife. It is about life in this world – as families, as communities, as righteous and moral people – and the norms and propriety that determine a good lifestyle. This stance rests on the belief that human beings can be taught, improved and perfected through personal and communal endeavour – especially through self-cultivation and self-creation. In Confucian thought, the cultivation of virtue and the maintenance of ethics depend particularly on three basic elements: yen, yi and li. Yen is an obligation of altruism and humaneness towards other members of the community. Yi is the upholding of righteousness and the moral disposition to do good. Li is a system of norms and propriety that determines how a person should properly act within a community. Confucianism holds that one should sacrifice one’s life, if necessary, for the sake of upholding these cardinal moral values.

Confucianism has influenced the cultures of several countries – China, Taiwan, Japan, Korea and Vietnam – as well as other territories settled by Chinese people. Although few people explicitly identify themselves as Confucian, its ideas and customs prevail in many cultural ways in these areas. Confucian ethics are often seen as a complementary guideline to other ideologies, beliefs and practices. It is remarkable that around 500 BC Confucius proclaimed as a cardinal virtue the moral code that we still cherish as the “golden rule”: “... what you do not like yourself, do not do to others”. The wording has changed somewhat, but the underlying principle is the same: “do not do to others as you would not be done by”.

Western Missionary Work in East Asia

Western powers such as the Portuguese, Dutch, Spanish and British were active in East Asia from the 16th to the 18th centuries as traders operating from their colonial outposts. All of them were also engaged in some form of cultural transfer. This involved trade and manufacturing skills, government and administration systems, language skills, education and Christian missionary work.

In 1549, a Jesuit, Francis Xavier, sailed in a Portuguese trading ship to Kagoshima in Japan, where he established a small missionary station; the people converted by his mission numbered some 150,000 by the time of his death in 1552. But by 1630 Christianity was totally banned in Japan. So strong was the pressure for seclusion that Japanese individuals, even sailors, were forbidden to travel abroad. The Jesuits also penetrated China, using the port of Macao as their first missionary station. The Chinese Emperor was friendly to the Jesuits, for they were skilled in mathematics and astronomy.

Many regions of East Asia had long been closed to Christian missionaries when, in 1858, the Treaty of Tientsin allowed them to enter the interior of China, and a stream of North Americans arrived. In the 1870s Japan was again opened to European and American missionaries, and they were soon permitted in Korea too, where they had more success than in any other part of the East Asian mainland. In Korea the missionaries were welcomed as allies against the invading Japanese.

China was for long the focus of intensive missionary work. Catholic schools alone had taught some five million Chinese students by 1949. Catholic hospitals and doctors served around 30 million people and Catholic orphanages alone numbered 1500. When Mao’s Communists took over, these schools and hospitals passed into government hands and churches were closed. Foreign priests, pastors, nuns and medical staff returned to their homelands. Many of those were Americans and Canadians.

The leader of the Nationalist Government, General Chiang Kai-shek, and his wife were Methodists; they retreated with their followers to Taiwan. The Buddhists, who had been active in China for centuries, could still sound their temple gongs in the provincial towns, but only older people answered their calls.

After 1949, Christians were largely excluded from large areas of East Asia: China, North Korea and much of Indo-China. The YMCA for China continued to function, but its leader Y.T. Wu was soon imprisoned and died in captivity 20 years later, in 1979. The Red Guards closed all Christian churches and many Chinese priests and pastors were imprisoned. All foreign cultural influences were seen as toxic. A handful of remaining Christians continued to assemble privately in houses. However, by 1980 some surviving Protestant congregations were again allowed to meet in the old church buildings that remained standing. In 2002 the Vatican estimated that around 8 million Chinese Catholics still worshipped “underground”. (See Geoffrey Blainey, op.cit. pp.531-532)

The period 1945-1954 was characterised by enormous upheaval in East Asia: the Communist takeover in China in 1949, the Korean War 1950-53, and the post-war reconstruction and development of Japan, Taiwan and South Korea. These momentous events brought about paradigmatic changes in the paths of development followed in the affected countries.

Japan’s Reconstruction

The Second World War ended in the West with Germany’s surrender in May 1945, but Japan fought on until September 1945, when the Emperor surrendered to the Americans after nuclear bombs were dropped on Hiroshima and Nagasaki. Japan was devastated, humiliated by absolute defeat, its industrial plants and more than a third of the country in rubble and ashes. There was hardly anything to eat. The confrontation with American power had overwhelmed the Japanese and driven home the fact of superior American economic and technological prowess. The occupation that followed brought them face-to-face with American standards. Japan had to accept a new constitution which ended the divinity of the Emperor and gave sovereignty to the people. An independent judiciary was established, free labour unions were permitted and war was renounced.


The first post-war years were hard, dominated by vast dislocations, chronic shortages and high inflation. The cost pressures of the American occupation and the emergence of the Cold War against the Communist Bloc forced the Americans to turn their focus to promoting Japanese economic recovery. There were many elements in Japan’s post-war success. Part of it was the “Dodge Plan”, which did much to extinguish inflation. The US occupation implemented land reform and broke up the zaibatsu, the large industrial-financial combinations. The zaibatsu were succeeded by keiretsu, groupings of banks and industrial companies with links that were less tight than before, which created more scope for new entrepreneurs like Akio Morita, the co-founder of Sony. The country had a large and educated workforce, low inflation and a very high savings rate. The Japanese set out on a forced-pace campaign to obtain and absorb technology from America and Europe. Morita’s partner at Sony came across the transistor at Western Electric while on a State Department-sponsored tour and promptly acquired the rights. Japanese companies sought continuing quality improvement as a competitive weapon and invested on an ever-greater scale in mass production in order to win market share. As described by Daniel Yergin and Joseph Stanislaw in The Commanding Heights – The Battle Between Government and the Marketplace That Is Remaking the Modern World (Simon & Schuster, New York, 1999, p.162): “All this was sustained on values that included an incredible work ethic, an extraordinarily intense identification with the firm, a shared sense of national identity (and of the country’s precarious position), a desire to live better – and the searing memory of the defeat, the harsh post-war years, with the occupation and the humiliation that went with it”.

Amongst other factors that played key roles was Japan’s commitment to exporting its way to growth. International trade won over inward-looking policies and Japan benefited enormously from the increasingly open international trading system that America took the lead in shaping. America did not see Japan as a competitor, but as a source of cheap, low-quality goods. Japan’s own protectionist policies were overlooked. As an exporter, Japan moved up the product chain: from textiles and simple manufactures to ships and steel to complex mechanical goods, electronics and high technology. (See Yergin & Stanislaw, op.cit. p.163)

The other major paradigmatic influence on post-war events in East Asia was the thrust of Soviet Communism into the area during the final months of the Second World War. Stalin was kept informed about the progress of American nuclear technology by a few renegade informers. After the fall of Berlin, Stalin realised that the conquest of the Japanese Empire and the areas it occupied was imminent. Having already occupied much of Eastern Europe, he saw that the countries lying on his eastern frontier could be pulled into his sphere of influence. At the Potsdam Conference the question of the Soviet Union’s entry into the war with Japan was mooted. In reality Soviet armies were already heavily engaged in driving Japan out of Manchuria, the north-east corner of China, and in strengthening their bonds with Mao Zedong’s forces in China. When Japan surrendered, Soviet forces already stood at the 38th parallel, which later became the border between North and South Korea. For the Koreans, Japan’s defeat ended the harsh Japanese colonial regime imposed in 1910, but it also brought a division of their country: a Communist North and an anti-Communist South.

When the Korean peninsula was partitioned in 1945, South Korea had been left with very little. Most of the existing industry – largely Japanese-built hydro-electric stations on the Yalu River and the nearby chemical and fertiliser plants – had ended up in North Korea. In June 1950, 135,000 North Korean troops invaded the South. The USA entered the war in support of the South, using Okinawa in Japan as the military base for General MacArthur. Communist China (backed by Russia) then entered the war in support of North Korea, and for a while it seemed as if South Korea might not survive. Seoul, its capital, changed hands several times.


The Korean War, 1950-53, and the military build-up that went with it, provided a major stimulus to economic growth throughout the industrial world, but particularly to Japan as the supply base for the American forces. Japan rose from recovery to strong and sustained economic growth under Prime Minister Ikeda. Within 20 years Japan’s national income approached those of Western European countries. By the end of the 1980s, the capitalisation of the Tokyo Stock Exchange was equal to that of New York, and of the world’s ten biggest banks, eight were Japanese. Its economy was controlled by what was called the “iron triangle”: bureaucrats, businessmen and politicians. At the centre of the jukyu chosei – the apparatus of economic management – was one agency that controlled both domestic and external strategy: the Ministry of International Trade and Industry (MITI). It channelled information and knowledge, facilitated the flow of new technologies and used an array of tools: price setting, quotas for imports and market share, licences, quality standards, administrative guidance, organised mergers and the encouragement of specialisation among small and medium-sized companies. It also restricted foreign competition within Japan through a host of tools and barriers. MITI acted as the single coordinator. It was staffed by the top graduates from the top universities.

A key feature of Japan’s growth was its flexibility in pursuing economic objectives. In the 1950s the focus was placed on heavy industry: shipbuilding, iron and steel. In the 1960s it moved into high-technology consumer manufactures, largely for export. From the 1970s Japan concentrated on technological innovation and higher value-added products while transferring the production of lower value-added goods overseas. Then began an era of massive Japanese investment in Asia, America and Europe. In the 1990s Japan had one of the world’s strongest economies, with among the largest per capita GNPs and the largest holdings of foreign assets.

South Korea’s Emancipation

In 1953 the Korean War ended with a truce, not a peace treaty. Kim Il Sung, the first of North Korea’s megalomaniac leaders, never wavered in his relentlessly hostile policy. South Korea was devastated by the war, with 7 percent of its population killed and two-thirds of its industrial capacity destroyed. President Syngman Rhee, a Ph.D. graduate of Princeton University under Professor Woodrow Wilson, remained in power until 1960. But Rhee’s forte was politics, not economics. After a military coup in 1961, General Park Chung Hee ran the country from 1962 to 1979. He adopted the Japanese growth model: highly interventionist with a strong export orientation. Korea promoted big companies – national champions called chaebols – holding companies that controlled diversified industrial conglomerates. Their names are globally known: Hyundai, Samsung, Lucky Goldstar, Kia and Daewoo.

It was only after 1979, under the influence of economic adviser Kim Jae-Ik, who held a Ph.D. in economics from Stanford University, that South Korea dismantled its interventionist apparatus, sold off most state-owned enterprises, liberalised the financial sector, reduced import barriers, promoted foreign investment in the country and increased accountability for business leaders. After Kim’s death, South Korea continued to pursue policies aimed at less intrusive planning, an expanded role for the market, and financial and import liberalisation. The powerful bureaucracies were reluctant to lose their power, but the South Korean leadership persisted with its free-market policies and the country’s impressive growth continued. South Korea was unrivalled, even by Japan, in the speed with which it advanced from poverty to become one of the world’s most industrialised, prosperous nations.

Along the way upwards, South Korea went through a stage of massive corruption, kickbacks, bribes and political payoffs. Many of its generals, managers and politicians grabbed too much profit for themselves, and several of the heads of chaebols were given prison terms. Two former presidents (Chun and Roh) were arrested: one was imprisoned for 20 years and the other sentenced to death. It appears that even today South Korea is still in the grip of powerful chaebols and political cliques, but its reforming and restructuring process is an ongoing challenge.

Taiwan’s Confucian Capitalism

In 1949 the forces of Nationalist Party leader Chiang Kai-shek were defeated by those of Communist Party leader Mao Zedong. The Communists took control of mainland China and the Nationalists retreated to Taiwan, where they relied on the protection of the American military umbrella based in Japan. Taiwan had been a colony of Japan and then briefly a province of China after World War II ended. It became a separately functioning country again in 1949, when Chiang Kai-shek sought refuge there with around 2 million soldiers and civilians.

Although outnumbered three to one by the Taiwanese Chinese, the refugees from the mainland took control of Taiwanese life. The survival of Taiwan became the paramount issue. The Chinese civil war was, in effect, still ongoing, as Chiang and the Nationalists refused to acknowledge that Taiwan was not China and talked for many years about retaking the mainland. In time, Chiang’s resolve faded to an aspiration, then a myth, then a “liturgy”. Initially Taiwan’s challenge was to withstand an onslaught from the mainland, then later to weather its isolation as the People’s Republic of China took its place in the international community. It faced an almost constant struggle for legitimacy in the international system. It had to strengthen national unity on the island and build the economic infrastructure required for survival. Things did not look promising in the late 1940s and early 1950s. The country had few resources, few established entrepreneurs, no savings, and had been heavily damaged during the war. Questions were also asked about the compatibility of Chinese culture with the requirements of modern capitalism. The experience of Taiwan, however, is a monument to “Confucian capitalism”.

The remarkable success story of Taiwan, with its 20 million people, even surpassed that of Japan and South Korea. After a process of serious soul-searching, its leaders methodically identified the societal pathologies they wanted to avoid: corruption, inequality, arbitrary government power, hyperinflation and failure to embrace modern science and technology. On the positive side, they aimed at creating an anti-corruption ethos, creating an environment for entrepreneurs to flourish, using planning to create a market system, depoliticising the economic system, investing in good infrastructure and promoting exports of manufactured goods. The government supported export industries through low-cost loans, enterprise zones and aggressive sourcing of technology. The results were spectacular.

The Taiwanese success was assisted by reliance on the many overseas Chinese, including the many Chinese who had gone abroad for their education. These overseas Chinese were turned into a “brain bank” and an effective network for technology transfer. Taiwan also had the good fortune of enjoying the inputs of a few “supertechnocrats”, K.Y. Yin and K.T. Li, who acted like Confucian advisers for several decades.

K.Y. Yin was an electrical engineer who read economic texts; K.T. Li, a graduate in nuclear physics from Cambridge, believed in “replacing the arbitrary political power of government with the automatic adjustment mechanism of the market”. They adopted the Japanese policy formula – “competing out and protecting in” – on condition that Taiwanese firms would gradually be subjected to the rigours and tests of international competition in their home market. They also insisted that Taiwan should aim to continue the transition from authoritarianism to democratic rule and the broadening of the middle class.

The Kuomintang established by Chiang Kai-shek kept a tight grip on power for decades, appointing rather than electing the president. The biggest challenge was to define Taiwan’s relation to the People’s Republic on the mainland. Taiwan has remained by far the biggest investor in mainland China, despite the fact that the People’s Republic regards Taiwan as an errant province that needs to be regathered. The Taiwanese insist that their distinct status ought to be respected and that the gulf between the two will narrow as China’s brand of socialism becomes more like Confucian capitalism.

Singapore and the Overseas Chinese

As a British colony, Singapore started its life as an entrepot centre for the region. Its economic architect, Dr. Goh, was sent to England, where he earned a Ph.D. at the London School of Economics. He teamed up with Lee Kuan Yew, who had studied at Cambridge University and came back to lead the anti-colonial movement. After an unsuccessful effort to form a federation with Malaysia, Lee Kuan Yew led Singapore to become a city-state. Lee Kuan Yew and Dr. Goh formed a formidable partnership with a remarkable ability to adapt their policies to changing situations, achieving an economic growth rate of 7 to 9 percent over four decades.

Both leaders began their careers as committed socialists but turned to market-friendly policies with a strong government say. They forced civil servants to think like businessmen. They financed social services like health care and housing, but tried to foster as much personal and family responsibility as possible. They promoted the Chinese propensity to save through the Central Provident Fund which, at one stage, took 50 percent of all wages. The Fund was used to finance infrastructure, industry, housing and a vast industrial park on a wide expanse of swamp, called Jurong.

They decided no public services should be free: the government is a facilitator, not a provider. It was the agenda keeper, long-range planner, strategic player and the manager of resources. A small elite of bureaucrats, selected on merit, ran the whole system. The whole of Singapore worked like a cohesive company. The basic principles which they selected to underpin its operation were: stable and predictable rules, low inflation, a high savings rate, an anti-corruption ethos and a climate friendly to business.

A basic component of the Chinese culture is the hua-ch’iao (“Chinese across the bridge”). The term refers to the ethnic Chinese who live, trade, work, invest and collaborate across the East Asian region and across the world. An estimated 30 million Chinese live in South East Asia. They make up around 30 percent of the population of Malaysia, 15 percent of Thailand’s and 4 percent of Indonesia’s. In these countries they play an inordinately large entrepreneurial and commercial role (like the Jews in America). They are famous for doing their deals without contracts, lawyers, bankers and consultants – even when large amounts are at stake. Kinship-based rules of the game assume the role that contract law performs elsewhere, facilitating trade, investment and the movement of capital. (See Yergin and Stanislaw, op.cit. p.191)

Mainland China’s Suppression and Revival of Confucian Capitalism

The establishment of the People’s Republic of China in 1949 was expected to launch China’s economic modernisation, but Mao Zedong’s revolutionary command did more to retard than to advance economic progress. From the late 1970s, however, China managed to pursue a path of economic liberalisation with outstanding success, although its internal politics remained repressive.

China’s modern history was marked by a century of internal conflict and disintegration, exacerbated by external aggressors. The Opium Wars waged by Britain in the 1840s forced the Chinese Emperor to allow free trade access to British traders who wanted to sell opium to the Chinese. Britain secured control of Hong Kong until 1997, providing a secure trading outlet and a safe haven for the assets of businessmen and industrialists. The revolution of 1911-12 that overthrew the Qing dynasty led to turbulent decades in which southern China was the terrain of raging battles among nationalists, communists and warlords of varying allegiances. The warlords battled for control of various regions, while the nationalists and communists were initially collaborators who shared the objective of modernising China. Wealthy nationalists financed the training of young revolutionaries in Moscow in the hope of restoring China’s dignity. Deng Xiaoping, Zhou Enlai and Chiang Kai-shek’s son, Chiang Ching-kuo, all studied at the Sun Yat-sen University in Moscow. They returned to China, where Deng became chief secretary of the Central Committee of the Communist Party at the age of 23.

The alliance between communists and nationalists broke down as they competed for power. Mao led the Long March of 1934-35, the 6,000-mile trek to escape the nationalists. It began with 90,000 communist soldiers and ended with 5,000. The march later provided the leadership cohesion that carried the communists to victory and rule over all of China.

The Japanese invasion of 1937 created the circumstances for the strengthening of the communists’ power vis-à-vis the nationalists. Deng Xiaoping played a central role in the Huai-Hai campaign that broke the back of the nationalists in 1949, destroying a nationalist army of 500,000.

As early as 1949 Deng Xiaoping saw the need to design a new system built upon “capitalist production”, under which people who were lazy and unenthusiastic about work should suffer. Deng realised, however, that a well-organised party is a necessary instrument of modernisation. The state created by the Chinese Communist Party (CCP) under the leadership of Mao Zedong was a totalitarian one-party dictatorship. It displayed all the typical characteristics found in previous examples of totalitarianism: nationalist or socialist ideology, utopian goals, conflation of party and state, bureaucratic rule, leader glorification, mass propaganda, a dialectical enemy as a motor for psychological hatred, pre-emptive censorship, genocide and coercion, collectivism, militarism, universalism, contempt for liberal democracy and moral nihilism. Mao was inclined towards mass mobilisation projects such as “the Great Leap Forward”, which ended in a devastating famine. Farmers were herded into regimented communes and families were encouraged to start backyard pig-iron furnaces – all intended to catch up quickly with the capitalist world. Millions of people died of starvation as industrial production and internal trade collapsed.

Deng Xiaoping was one of the chief figures to pick up the pieces and to replace mass mobilisation with systematic, gradual investment. Deng was ideologically more pragmatic, saying that “It doesn’t matter whether a cat is black or white so long as it catches mice”. This pragmatism was held against Deng when Mao launched his “Cultural Revolution”. Mao was dissatisfied with the lack of ideological zeal in the country and with the fact that he was no longer receiving the veneration due to him as paramount leader. The senior leadership of the Party was under suspicion. The bible of the Cultural Revolution was Mao’s Little Red Book, which Deng unceremoniously brushed aside. Deng was attacked as a “capitalist roader”, kept in solitary confinement for two years, and then he and his wife were put to work in a tractor repair plant. His son was paralysed as a result of a physical assault by Red Guards. What saved Deng’s life was his network of friends, including Zhou Enlai.

After the Cultural Revolution, Deng came back into the leadership and helped direct the economic recovery based on the principles he believed in: education and economic incentives rather than ideology and exhortation. With Mao against him, he was again stripped of power and forced to sign yet another document of self-criticism. He was described as a “poisonous weed” who was trying to undermine the glorious revolution. Fortunately his old comrades from the army gave him protection.

The death of Mao in 1976 liberated Deng. The “Gang of Four” who masterminded the Cultural Revolution, including Mao’s wife, were arrested. Deng was then engaged in a bitter power struggle against Mao’s anointed successor, Hua Guofeng. By the end of 1978 Hua was defeated and Deng emerged as paramount leader. Out of the pieces he would lay the foundations for China’s real great leap forward.

December 1978 became a major turning point in 20th century Chinese history. A fundamental decision was made at the third plenum of the 11th Congress of the Chinese Communist Party to reorient China toward a market economy. There was no grand plan, but rather a few practical steps to break with Maoism. Whatever worked economically was good, results were what counted. Deng wanted to create a wealthy and powerful China, not a utopian or messianic paradise. Deng was essentially a Chinese nationalist who used communism and the party as mechanisms to reach his objectives. Deng declared “I have two choices. I can distribute poverty or I can distribute wealth”. He had seen enough of the former under Maoism. (See Yergin & Stanislaw, op.cit. pp.196-198)

Deng’s initial reforms centred on agriculture. He introduced a revised “household responsibility system” to allow a family to keep some of the benefits of its labour. Material incentives replaced the Maoist strictures. The commune system and collectivisation were undone. Peasants had to deliver a certain portion of their production to the state; above that, they could keep the output, consume it or sell it. With that, free enterprise was launched.

The results were stunning. Output increased by 50 percent over sixteen years. Markets in agricultural products generated an entirely new trading apparatus: transportation, repairs, food markets, building and hiring workers. By 1990 as much as 80 percent of agricultural output was sold in open markets. In six years, the real income of farm households rose 60 percent. The rapid improvement in agriculture spilled over into other economic reforms. It provided momentum for the next steps of reform in the remainder of China’s economic life.

The leadership under Deng embraced economic reform and liberalisation even while striving to maintain political control. Deng sought to reassure those who were concerned that China would lose its socialist and communist moorings with the explanation that what was happening in China was a process of “building socialism with Chinese characteristics”. That became the title of a book he published at the end of 1984.

To transfer his “incentive” scheme into the industrial sector, Deng introduced the “contract responsibility system”, which echoed the agricultural “household responsibility system”. It meant that state enterprises were allowed to keep earnings above a certain target. By December 1987, 80 percent

of China’s large and medium-size firms had adopted such a system. But state-owned firms remained inefficient. The next reform measure involved the creation of property rights. Only ownership could introduce responsibility into decision making and channel motivation.

Deng then turned to building up industries geared to export, particularly in coastal regions. This meant adopting the export-led growth strategy that the Chinese could see working in Japan, South Korea and Taiwan. It offered solutions to multiple problems. These industries would earn hard currency and absorb surplus labour coming from the country’s interior. It meant taking part in international competition and pushing the coastal regions into the international market. At the centre of the strategy were Special Economic Zones (SEZs). These SEZs spearheaded China’s engagement with the world economy. The first SEZs were created in 1980 in Guangdong province, including Shenzhen (near Hong Kong), and in Fujian province, across from Taiwan. They were essentially export-processing zones and acted as magnets to draw in foreign investment. From then on the coastal cities drove the Chinese economy forward.

In April 1989 Deng experienced a serious setback when the democratic aspirations among students brought thousands of demonstrators to Tiananmen Square to mourn the death of the purged reformer Hu Yaobang. To “old guard” members such as Deng it was an act of rebellion challenging the sacred supremacy of the Party, which they considered the bulwark against disorder and chaos. It reminded Deng of the Cultural Revolution and its militant students. He felt that survival and order were paramount and authorised the use of the army to quell the demonstrations. About a thousand people are thought to have been killed. Retrenchment and controls were stepped up in view of the collapse of communist regimes in Eastern Europe at the time. Chinese conservatives decided to rein in reform and reassert control. Deng’s opponent, Chen Yun, was in ascendancy again and “Chen Yun Thought” was celebrated similarly to “Mao Zedong Thought”. Chen attacked Deng directly, charging that his policies were responsible for the trends that had culminated in both the overheated economy and the events in Tiananmen Square. Chen charged that the SEZs were capitalist in character and conduits for forces that would destroy communism in China.

Pushed onto the defensive again, Deng undertook his last campaign, the nanxun (southern journey) when he spent four weeks in his private railway car, slowly observing what was happening in the Pearl River delta in Guangdong province and the Shenzhen SEZ which borders Hong Kong. When he saw the enormous changes from what he had viewed in 1984, he realised that despite the growth problems, the resulting high-rise urban area was a stunning success – he called it a “flying leap” and a model for the future. He remarked that socialist systems can have markets too and that plans and markets are simply economic stepping stones to universal prosperity. He advised his fellow Party members “... to watch out for the Right, but mainly defend against the Left”. Replying to Chen’s reading list of communist classics, Deng said that he never bothered to read Marx’s Das Kapital because he neither had the time nor the patience. His nanxun report became the subject of extensive press coverage and much discussion. At the 14th Party Congress in 1992, Deng’s “brilliant thesis” was hailed and it was decided that China would shift from a “socialist planned commodity economy” to a “socialist market economy”. Reform was back on track and it was Deng’s final victory. At 88 he was reaffirmed as the paramount leader.

Between 1978 and 1995, China’s economy grew at an average annual rate of 9.3 percent. It also moved from a Soviet-style command economy toward being governed by market forces. “Collective” enterprises, owned by villages and localities and the army continued to operate next to completely private enterprises. Foreign investment, particularly from Taiwan and from overseas ethnic Chinese


(called guanxi) played an important role, particularly in Guangdong and Fujian provinces. A large proportion of the overseas Chinese trace their origins to Guangdong and they have, in recent years, invested billions in the province and assisted in marketing its products all over the world. Between 1978 and 1993, Guangdong’s economy grew at around 14 percent a year, well above the national average. The Pearl River delta’s growth was higher still, at more than 17 percent. A region containing barely 2 percent of China’s population generates around 40 percent of China’s exports.

Deng Xiaoping’s influence in China’s rebirth can hardly be overestimated. As a boy he started in a Confucian school and, throughout his long life, he was fond of quoting Confucian aphorisms. He shifted the Chinese Revolution away from ideology toward the more pragmatic objectives of wealth creation as a source of power and welfare. At the Central Party School in Beijing, familiar courses on Marxism, Leninism and the history of Communism have given way to courses on marketing, accounting and international business practices. Deng Xiaoping died in 1997 at the age of 93 at the end of a remarkable life.

Concluding Remarks

The wellsprings of growth and transformation in the East Asia regions included several forces: sound macroeconomic policies in the sense of getting the fundamentals right, incentives for increasing productivity, openness to foreign ideas and technology, an export orientation and the development of human resources. A positive government role was crucial. It was buttressed by a leadership committed to broad-based development and an efficient bureaucracy. Government did not act as planner or owner of industrial enterprise, but as a guide and facilitator, developing infrastructure and a framework for effective policy implementation, encouraging the accumulation of physical and human capital and allocating it to productive activities. International competitiveness was recognised as the ultimate aim and it relied on the private sector as the engine of growth. The East Asian success stories demonstrated that targeted and controlled government activity can be beneficial to the common good – government involvement is not bad per se, but should be kept within proper limits. China’s spectacular growth was only made possible after the retreat of the suffocating role of communist collectivism.

The influence of cultural factors is well illustrated by the influence of the Confucian tradition. Under Confucianism, government has an absolute right to regulate all aspects of social and business relations for the common good. This may explain why there was no jury system, little right of appeal – even in commercial law cases. There is a stark contrast with the Western legal tradition, based on individual rights and freedoms, which dates back to the Enlightenment – but is now well-entrenched in “the rule of lawyers and lobbyists”. In the West a large proportion of political leaders are trained as lawyers, while in East Asia a large proportion of leaders are trained as engineers. In countries with a Confucian tradition such as Japan, Korea, Taiwan, China and Indonesia, the freedom of action of a person or a company stems not from a fundamental right, but is based upon the “grant of benefit” from those in power. It must be pointed out, however, that these cultural traits were expressed in various forms and proved to be subject to change and modernisation.


Bibliography

Blainey, G. (2011) Christianity – A Short History, Penguin Books, London
Ferguson, N. (2011) Civilization – The West and the Rest, Allen Lane, London
Hattstein, M. (1998) World Religions, Könemann, Cologne
Huntington, S.P. (1996) The Clash of Civilizations, Simon & Schuster, New York
Johnson, P. (2003) A History of the Jews, Phoenix Press, London
Yergin, D. & Stanislaw, J. (1998) The Commanding Heights – The Battle Between Government and the Marketplace That Is Remaking the Modern World, Simon & Schuster, New York
Zaehner, R.D. (ed) (1997) Encyclopedia of the World’s Religions, Barnes & Noble, New York


11. China’s Spectacular Rise as an Emerging Giant (November 2013)

China covers an immense area of 9,596,961 square kilometres (slightly larger than the USA) and its population is estimated to be around 1,400 million. The Han Chinese comprise 92 percent of the population and the remaining 8 percent include several minority ethnic groups – each more numerous than the populations of more than 20 countries represented in the United Nations. With good justification, Mao Zedong once said that China was like another United Nations. China’s largest city, Shanghai, is estimated to have a population of around 25 million people. The most populous province, Sichuan, has almost 120 million people – almost as many as Japan. The provinces Guangdong, Hubei, Hunan, Jiangsu, Shandong and Henan each have populations in excess of 60 million people. China’s population density must be observed to be comprehended!

Historical Background

The earliest recorded Chinese dynasty, the Shang, arose around 1600 BC. The early Chinese invented many things, including metal-working and writing. The Shang dynasty controlled the northern part of China along the banks of the Yellow River. The Zhou dynasty, which took over from the Shang in the 11th century BC, introduced coins, used the crossbow and farmed the fertile lands around the Yellow River, growing millet, wheat and rice. They also domesticated pigs, cattle, sheep and dogs. When the Zhou dynasty ended, the kingdom entered the “Warring States” period, with local rulers and noblemen in constant battle until the first Qin emperor managed to unify the warring units in 221 BC. A succession of many dynasties brought China into the modern age.

The era of dynastic control ended when Sun Yatsen and fellow revolutionaries overthrew the imperial regime of the Manchu dynasty in February 1912. A succession of unstable regimes and civil war followed. Sun Yatsen’s Kuomintang (KMT Nationalists) managed to establish a government over large parts of the country from Canton in 1919, but the KMT was ultimately defeated by the Communists under Mao Zedong in 1949.

The Communist Party (CCP) was founded in 1921. By 1927 it controlled large tracts of territory in the south as its armies carried on an armed struggle against the Kuomintang’s rule led by Chiang Kai-shek. In 1931 the Japanese occupied the part of China known as Manchuria in the north-east and in 1933 gained control over large parts of the eastern coastal regions. When the KMT managed to push the Communists out of their southern bases in 1934, the CCP undertook its famous Long March (1934-1935) through Southern and Western China, finally ending at Yanan in Shaanxi province, where Mao Zedong emerged as the CCP leader. Although Chiang Kai-shek and Mao Zedong joined forces in 1936 to fight the Japanese occupiers on the eastern seaboard, Japan retained its stronghold over much of North-East China and established puppet governments in Beijing and Nanking. The Nationalists were forced to retreat to the Chongqing area. It is important to note the deep resentment ingrained in Chinese folklore against the decades of Japanese occupation in the years preceding the demise of the Japanese Empire in 1945.

After the Japanese surrender in 1945, the Kuomintang Nationalists took control of Japanese-occupied areas. The US attempted to mediate a negotiated settlement between the KMT and the CCP, but Beijing fell to the Communists’ People’s Liberation Army in January 1949. The People’s Republic of China was formally established on October 1st, 1949. The Communist forces marched south and stopped at the border with Hong Kong. In October 1951 Tibet was re-annexed, but Taiwan and surrounding islands

were left in the hands of the KMT. China signed a friendship treaty with the USSR and the USA became the protector of Taiwan.

Under Communist Party rule between 1950 and 1958, a series of land reforms was implemented in China, involving the redistribution of land from landlords and rich peasants to poor peasants and the landless. Agricultural co-operatives were formed which were later amalgamated into communes of about 200,000 people. The 1954 constitution organised China into five autonomous regions, 23 provinces (pro forma including Taiwan), 175 municipal administrations and 2,000 districts. A central council under Mao Zedong, formed by the Communist Party, held absolute power. Zhou Enlai became Premier of the Administrative Council. In May 1957 the “Hundred Flowers Campaign”, ostensibly organised to invite constructive criticism by intellectuals, culminated in the “Anti-Rightist Campaign” against those who had spoken out. A cruel movement to root out the “White Flags” followed.

The 1958 “Great Leap Forward” launched by Mao was intended to mend fences and to mobilise the support of the masses for his economic policies. Farmers were herded into regimented communes and backyard pig iron furnaces became the symbols of the Great Leap. It proved to be a disaster. Millions of people died of starvation as agricultural and industrial production were disrupted and internal trade plummeted. Refugees streamed into Hong Kong and Mao’s standing deteriorated.

In 1964 China detonated its first nuclear bomb and by 1966 Mao Zedong introduced the “Great Proletarian Cultural Revolution” as a means to bolster his grip on supreme power. The charter of the “Cultural Revolution” allowed the masses to attack those in authority and the “Red Guard” was formed to eliminate unacceptable “old ways”. Red Guards proceeded to put numerous local authorities on trial. A Central Cultural Revolutionary Committee was formed with Chen Boda, Mao’s secretary, as chairman and Mao’s wife, Jiang Qing, as vice-chairman. The Revolutionary Committee, together with the Military Commission and the State Council, ruled China under Mao’s guidance. The Red Guards ran riot through cities, ransacking property and humiliating foreign diplomats. Mao’s Little Red Book served as the bible of the Cultural Revolution. In 1967 Mao used the army to restore order and purged all authoritative bodies of members he considered undesirable, and by 1969 he had strengthened his influence as chairman of the Communist Party and the Central Committee. The army now filled the majority of the posts in the Central Committee, the Politburo as well as provincial and local councils.

Mao Zedong died on 9 September 1976. After an intense internal power struggle in the CCP, Deng Xiaoping emerged as the new leader in 1978. Deng Xiaoping was the son of a prosperous landowner turned local-government official. As a boy he attended a traditional Confucian school, but amid the tumult following the Chinese Revolution of 1911, he proceeded to France where he met Zhou Enlai and Ho Chi Minh (from Vietnam) and became exposed to the communist movement which was fashionable amongst students and intellectuals in Paris. Deng also studied in Moscow where the Comintern, Stalin’s international apparatus, was teaching students revolutionary strategies. When Deng returned to China as a convinced communist, his organisational skills and sharp intellect carried him forward to become chief secretary of the Central Committee of the Communist Party at the age of 23. He allied himself with the Mao Zedong faction and took part in the Long March of 1934-35. His later wartime role carried him into a key position when the Communists came to power in 1949.

During Mao Zedong’s rule, Deng Xiaoping’s fortunes fluctuated between key roles and imprisonment. The ideological revolutionaries considered Deng as a “capitalist raider”. He was protected by his personal friendship with Zhou Enlai and his networking ties with the army. After the Cultural Revolution had run its course, Deng came back into the leadership circles. He believed in education

and economic incentives as the road to development and modernisation rather than ideology and exhortation.

In December 1978 the Third Plenum of the 11th Congress of the Chinese Communist Party made the fundamental decision to re-orient China toward market economics. They made a break with Maoism and adopted instead a strategy of pragmatic adjustment – practical steps that delivered economic results as long as the party remained in control. Said Deng: “We have two choices – we can distribute poverty or we can distribute wealth”.

In retrospect it could be said that China became a modernising totalitarian communist dictatorship by combining the techniques of totalitarian control with pragmatic application of Communist doctrine. Under Mao Zedong it used totalitarian control to establish the power base of the Communist Party and then, later, after much bloodshed and suffering, introduced pragmatic adjustments under the leadership of Deng Xiaoping in 1979 which paved the way for China to become the emerging giant of the 21st century.

Deng Xiaoping’s Reforms

The initial reform effort centred on agriculture in view of the dismal results produced by Mao’s collectivised agriculture system. During 1978 Anhui province experienced a severe drought so that agricultural output was further diminished and starvation became endemic. Diseases swept through the region and people flocked into Shanghai.

People appealed for a return to the “old ways”, meaning the “household responsibility system” which allowed a family to keep some of the benefits of its labour. The peasants got their wish and the “household responsibility” system was adopted throughout the country and material incentives replaced the Maoist strictures. With the communal collectivised system undone, each family could take responsibility for the land it tilled. They had to deliver a certain amount of their production to the state, but the portion above that they could keep for their own consumption or sell. Thus the first steps towards free enterprise were taken.

The results were encouraging. Over the next sixteen years output more than doubled. The introduction of markets in agricultural products generated a new trading apparatus. Farmers became involved in transportation, home building, repairs, private food markets and hiring workers. The changes created a whirlwind of entrepreneurship. In 1978 just 8 percent of agricultural output was sold in open markets; by 1990 the share was 80 percent – raising the real income of farm households by more than 60 percent.

The rapid improvement in agriculture spurred economic reforms in other areas. It created a pro-reform constituency, not only among farmers but also among city dwellers, who could find more food and more variety in the marketplace. These improvements created a momentum for additional reform such as the deregulation of prices. The reform process was set into motion with a clear winner.

The reform of the industrial sector was more difficult. It was highly interconnected, controlled from the centre, the scale was large and it generated much of government’s revenues. Change in the system could throw the country into economic disarray. Moreover, Marxist economics was focused on industrial production. The desperate need for reform in the industrial sector spurred an acrimonious debate over the relationship of the state and the marketplace. One argument was that the way the state collected revenues from enterprises ended up “whipping the fast ox” – punishing firms that were

more efficient. The higher the firm’s profits, the greater the proportion of profits taken away by the government. Arguments were made for increasing the autonomy of enterprises and moving the system towards “market socialism”.

At the Wuxi Conference in 1979 economists gave expression to the general sentiments by stating that China “… cannot allow Adam Smith’s invisible hand to control our economic development (because) … if individual consumers in the market make decisions based on their own economic interests, this will not necessarily accord with the general interest of society”. Planning had to be made more effective and giving over to the “blindness and anarchy of capitalism” was to be avoided. But the leaders were ambivalent about the legacy of Mao and their perception about the amount of change the system could absorb. Deng went along with the cautious re-adjusters. He argued that the CCP was essential to the central goal of modernisation. He agreed that without such a party, China would split up into chaos and accomplish nothing.

By the mid-1980s the “go slow” argument was losing its credibility. The economy was growing much faster than anticipated without the severe problems that Chen Yun, the Party’s expert on economics, had forecast. Improvement in agriculture stimulated the emergence of rural industry and commerce. Reform now had both a constituency and a track record. Chinese economists took notice of developments in Hungary which involved experiments with market mechanisms as reflected in the influential writings of Hungarian economist Janos Kornai. But the most pressing example came from their nearby neighbour, Japan. Visiting Japan and seeing its dynamism firsthand shocked the Chinese Communists. The head of the CCP’s propaganda department noted that one in every two households in Japan owned an automobile; over 95 percent of households possessed TV sets, refrigerators and washing machines; the majority owned a variety of clothing and changed clothes every day!

Socialism with Chinese Characteristics

By the mid-1980s, the Chinese economy entered a period of high-speed growth. The leadership of Deng Xiaoping embraced economic reform and liberalisation even while striving to maintain political control. To appease his comrades, who feared seeing capitalism replace the socialism and communism they had strived for all their lives, he described what was happening as “Building Socialism with Chinese Characteristics” – which became the title of a book he published at the end of 1984.

Deng was constantly harassed by the opposition and rivalry of Chen Yun. Both were veterans who joined the CCP at the beginning, both were victims of the Cultural Revolution, both were intent on redressing the deep wounds inflicted upon society by Maoism, but they disagreed over the terrain of the reform package. Fortunately for China, Deng’s pragmatism carried the day – often by adjusting terminology: replacing “hired labour” (which sounded bad to Marxist ears) with “asked-to-help labour”!

From 1984 the debate about the future moved on beyond Marxist ideology to the practicalities of creating a market economy. Economic data replaced Marxist catechisms in arguments about market allocation of resources versus planning. Deng was the paramount leader of reform, while Chen served as the paramount critic of change. At the heart of the debate was the question of the proper relationship between state and market.

The Chen-hardliners wanted to reassert centralisation, stabilisation and mandatory planning because they feared chaotic dislocations and inflationary pressures – and loss of political control by the CCP.


Deng also feared the erosion of CCP-control, but wanted to reduce bureaucratic control by party secretaries and instead make enterprises responsive to market signals. The introduction of the “contract responsibility system” (echoing the “household responsibility system”) allowed state enterprises to keep earnings above a certain target. By December 1987, 80 percent of China’s large and medium-sized firms had adopted such a system.

The reforms were still inadequate to sufficiently reduce the inefficiency of state firms. They were losing out to the growing competition from new companies established by local villages and towns. It became clear that the most important missing element was property rights. Only ownership of assets and the rewards of labour could introduce responsibility into decision making and channel motivation. So the debate moved from Marx to Mao and ultimately to Hayek.

Deng was essentially interested in results, which for him meant increasing China’s wealth and power. But Deng was constantly compromised by Chen’s pressure to step on the brakes and to oppose the reformist incumbents of key positions. Deng was forced by Chen’s pressure to remove CCP general secretary Hu Yaobang, who was regarded as too liberal. Deng replaced him with Zhao Ziyang, who nevertheless promoted the idea of building up new industries geared to export, particularly in the coastal areas. The benefits of focusing on export industries were well illustrated by the success of neighbouring East Asian economies. It offered the solution to multiple problems: earning hard currency and absorbing surplus labour coming out of inland regions.

The central focus of the strategy would be Special Economic Zones (SEZs) to engage with the world economy. The original SEZs were created in 1980, some in Guangdong province, including Shenzhen (close to Hong Kong), and others in Fujian province, across from Taiwan. The whole orientation of export-processing zones was outward. They were also meant as magnets to draw in foreign investment. Beijing gave local authorities in the SEZs unprecedented autonomy in trade and investment decisions. From then on the Chinese economy was driven forward by the coastal cities.

Accelerating inflation by the end of 1988 forced general secretary Zhao and his allies to go on the defensive. A “Mao Zedong craze” was unleashed which forced the reformist leaders onto the back foot. They were accused of “capitalist-style crime and corruption” and of being materialistic drivers of bourgeois inequality and democracy. Although Deng remained reform’s prime cheerleader, widespread anticipation of price reform in 1988 ignited a run on the banks and panic buying of goods. Deng’s government was shaken and it adjusted course by turning the focus to stability.

The Tiananmen Square Clampdown

Thousands of students occupied Beijing’s Tiananmen Square in April 1989, mourning the death of Hu Yaobang, the deposed reformist CCP general secretary and expressing their displeasure with the suppression of the democracy movement. To the hardliners, it was an act of rebellion – the consequence of too much reform and too little control. To people like Deng, it challenged the sacred supremacy of the CCP which the old veterans considered as the bulwark against disorder and chaos. To Deng it also resembled militant mass action during the Cultural Revolution. He feared that the leadership core could be in danger. The events in Poland and other East European communist states convinced Deng that concessions could lead to more demands and that more concessions beget more chaos.

Tiananmen Square carried a lot of symbolism in the history of the CCP. In 1949, Mao declared victory and the establishment of the People’s Republic of China on Tiananmen Square, and thirty years before

that, on May 4th 1919, it had been the scene of the nationalist student demonstrations that paved the way for the birth of the Chinese Communist Party. Hence, in June 1989, the order was given to the military by Deng to crush the protest. About a thousand people are thought to have been killed in the ensuing struggle.

Zhao Ziyang, who was Party Secretary at the time, tried to peacefully disperse protesters in Tiananmen Square before the fatal confrontation with the military. Mr. Li Peng, who was Prime Minister at the time, forced Zhao to defend his actions before a disciplinary meeting of the CCP’s Central Committee. Zhao was put under house arrest, where he stayed for the rest of his life. His memoirs, dictated on tapes, were smuggled out of China and published in 2009 under the title Prisoner of the State: The Secret Journal of Zhao Ziyang (Simon & Schuster). In his memoirs, Zhao describes Deng as the enabler, not the architect, of detailed reform measures. Zhao credits himself as the real architect of China’s reforms in the 1980s, with Deng helping him to keep the hardliners at bay.

The collapse of communism in Eastern Europe and eventually in the Soviet Union in the period 1989- 1991, reinforced the position of the CCP hardliners to rein in reform. Economic growth slowed and dissent was stifled. Although Deng remained in power he was by then advanced in age and of frail health. Reform was in retreat and so was Deng’s influence. His old rival Chen Yun was in ascendancy again and “Chen Yun Thought” enjoyed comparable prestige to “Mao Zedong Thought”.

Deng’s Legacy

Despite the temporary pause in the reform momentum, the 14th Party Congress in 1992 affirmed a new commitment to reform. It was explicitly decided that China should shift from a “socialist planned economy” to a “socialist market economy”. At the ripe age of 88, Deng reaffirmed his position as paramount leader.

Deng indicated that Guangdong should be considered the engine of China’s growth and that China should overtake the four tigers – Korea, Taiwan, Singapore and Hong Kong – within 20 years. In reality China was already on its way. Between 1978 and 1995, under Deng’s rule, China’s economy grew at an average annual rate of 9.3 percent. Even in semi-retirement, he ruled China from his modest Beijing courtyard home. Leaders still vied for his attention, although his growing deafness made communication difficult.

Deng Xiaoping died early in 1997 at age 93. At his funeral Chinese President Jiang Zemin referred to Deng’s “three rises and three falls”. But Deng broke with conventions and paved the way for China’s ascendancy as a world power. When he came to power China was desperately poor, but he launched a growth path that enabled China’s foreign trade to increase from $36 billion in 1978 to $300 billion in 1995. Per capita income doubled between 1978 and 1987 and doubled again between 1987 and 1996 – a rate almost unheard of in modern history. Deng lifted upwards of 200 million people out of poverty in just two decades.
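The figures quoted above are mutually consistent under ordinary compound-growth arithmetic, as a quick check shows (a sketch only; the trade figures are nominal dollars as reported, and the per-capita doubling is approximated as one doubling per nine years):

```python
# Sanity check of the compound-growth arithmetic behind the quoted figures.

def annual_rate(start: float, end: float, years: int) -> float:
    """Average annual growth rate implied by a start value, an end value
    and the number of years in between."""
    return (end / start) ** (1 / years) - 1

# GDP growing at 9.3 percent a year over the 17 years 1978-1995
gdp_factor = 1.093 ** 17
print(f"GDP multiple 1978-1995: {gdp_factor:.1f}x")        # roughly a 4.5-fold expansion

# Foreign trade rising from $36 billion (1978) to $300 billion (1995)
trade_rate = annual_rate(36, 300, 17)
print(f"Implied trade growth: {trade_rate:.1%} per year")   # about 13% per year

# Per capita income doubling roughly every 9 years (1978-1987, 1987-1996)
income_rate = annual_rate(1, 2, 9)
print(f"Implied per-capita growth: {income_rate:.1%} per year")  # about 8% per year
```

The 8 percent per-capita figure sitting below the 9.3 percent GDP figure is what one would expect, since population growth absorbs part of the difference.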

At Deng’s funeral President Jiang declared that henceforth “Deng Xiaoping Theory” would be the “guiding ideology” of China and that, alongside Marxism-Leninism and Mao Zedong Thought, it would be the Party’s “guide to action”. The only way Mao’s theory could be made compatible with Deng’s is through a healthy dose of pragmatism. By lifting the banner of Deng Xiaoping high, the communist leadership enshrined his adherence to the principle of pragmatism.


After his death, a Shanghai newspaper reported a generally unknown fact about Deng’s career. While Deng was working and studying in Paris in the early 1920s, where he became a communist, he also opened a restaurant called “China Bean Curd Soup”. It was reported that the bean curd was good, the restaurant was a success and Deng expanded both his menu and his seating space. It was also reported that Deng was fond of quoting Confucian proverbs and teachings.

Hong Kong’s Crucial Role

Hong Kong was Chinese territory ceded to Britain in 1842 after the First Opium War. In 1997 it was returned to China and placed under the control of a Chief Executive appointed by Beijing. During British rule, thousands of refugees from China entered Hong Kong in 1911, 1937 and 1949. From 1941 to 1945 it was invaded and occupied by Japan.

Down the years Hong Kong offered a secure trading outlet as well as a safe haven for the assets of Chinese businessmen and industrialists. Over time it acquired a business community with advanced education, entrepreneurial skills and a network of connections with mainland China that were particularly advantageous to British interests; it also provided an important commercial outlet for China and an avenue for importing foreign investment and technology into China. The investments of displaced Chinese and the availability of cheap labour fostered a mushrooming of local assembly plants, textile workshops and factories for light manufactures. Economic life was freewheeling: there were no trade or exchange restrictions, there was no central bank, labour legislation was light and taxes were low.

In the 1980s Hong Kong was closely linked to Deng Xiaoping’s reforms on the mainland. It re-opened the door to travel, trade and investment across the border. By establishing the first SEZs near Hong Kong in Shenzhen, Deng facilitated investment into China’s vast pool of labour and resources. Labour-intensive production was shifted onto the mainland, turning the Pearl River delta into a megalopolis with Hong Kong and Guangzhou as its twin poles.

Hong Kong also emerged as one of the world’s pre-eminent financial centres in the 1980s. With all the major trading houses established in the city, the Hong Kong stock exchange became a major source of financing and investment expertise for mainland enterprises. Hong Kong also served as a clandestine conduit for funds coming from Taiwan and from the millions of overseas Chinese, whose networks of personal connections (known as “guanxi”) provided important trade and marketing channels for Chinese exporters.

Long before the Chinese take-over in 1997, China’s state-controlled firms invested heavily in Hong Kong real estate. The state-owned Bank of China built one of Hong Kong’s most distinctive harbour-front skyscrapers. Even before the handover, Hong Kong’s wealth – in per capita terms – was significantly greater than that of the UK. Deng Xiaoping left his successors with an easy pragmatic solution to the special status of Hong Kong with the guiding concept “one country, two systems”.

Today Hong Kong ranks as one of the most modern cities in the world. It has a population of over 7 million, safe deepwater harbour facilities and one of the most modern airports on the planet.

Trade Patterns

With a trade-to-GDP ratio of around 75 percent in 2005 and a large volume of foreign investment, China has become one of the world’s most open economies. This means that the sum of its total exports and imports of goods and services amounts to around 75 percent of China’s GDP. To focus only on China’s growing share of global output and exports is to miss half the story. China’s imports have risen at the same pace as its exports. Thus China is giving a big boost to both global supply and demand.

The “positive supply-side shock” given to the world economy by China’s entry into world trade patterns has increased the world’s potential growth rate, helped to hold down inflation and triggered changes in the relative prices of labour, capital, goods and assets. The entry of China’s vast army of cheap workers into the international system of production and trade has reduced the bargaining power of workers in developed economies and helped to keep a lid on wages.

Not only were the prices of the goods that China exported falling; the prices of the goods it imported were rising, notably oil and other raw materials. China became the world’s biggest importer of many commodities, such as aluminium, steel, copper, coal and iron ore, and the second biggest consumer of oil. The upward pressure that Chinese imports exerted on the prices of commodities and raw materials was offset by the downward pressure of Chinese manufactured exports. Hence China played a significant role in keeping a lid on inflation.

Much of the world’s low-cost manufacturing has shifted to the Chinese mainland. The Chinese manufacturing machine has sucked up vast quantities of raw materials and resources from many parts of the world. This gigantic manufacturing machine produces goods for the domestic market, but also vast quantities for its booming export market. China’s manufacturing machine has also absorbed vast quantities of parts and components for final assembly from other parts of Asia – Thailand, Malaysia, Singapore, the Philippines and Indonesia, as well as richer Taiwan and South Korea. China has been integrated into existing and highly sophisticated pan-Asian production networks.

All members of the pan-Asian network have benefited from China’s rise – even rich Japan, which in 2002-2003 was pulled out of a decade-and-a-half slump by Chinese demand for top-notch components and capital goods. Economic inter-dependence between the two countries has grown by leaps and bounds since 2003. China has not only lifted Japan out of stagnation; it became its biggest trading partner in 2004. Japan now imports finished goods such as office machines and computers from China. In 2005, the total volume of trade between the two countries was nearly $190 billion. Japan also accounted for 11 percent of foreign direct investment in China, making it the largest foreign investor in China.

South-East Asia has also received a major boost from its trade with China. Rich in resources, including rubber, crude oil, palm oil and natural gas, it is likely to profit from China’s appetite for raw materials for a long time to come. For Japan, South Korea, Taiwan, Hong Kong and Singapore, trade has turned away from the rich world and towards China, now their biggest trading partner. In China itself, the processing and assembly of imported parts and components now account for more than half of its exports. Much of China’s growing trade surplus can be explained by this assembly-based trade formula. Hence China has become a major economic power, not only on account of its fast-growing exports, but also increasingly from being a major buyer, investor and provider of aid. China’s huge imports are a major source of influence in many countries. They are the source of a new kind of power: intertwined economic and political power.

China’s presence as a commercial force is rapidly being felt around the world, through its growing investments overseas and through an apparently insatiable hunger for resources to fuel its own industrial revolution at home. Planeloads and shiploads of oil-drillers, pipe-layers and construction workers are sent from China to work on oil rigs or build ports, highways or railways in South-East Asia, Africa, Latin America or the Middle East. Chinese workers are also fanning out to neighbouring countries in less formal ways to work on farms, forest plantations and market gardens. Throughout the 19th century, thousands of indentured workers were lured by Chinese and Western recruiters to work on the guano deposits of Peru, the cane fields of the Caribbean islands or the goldfields of Australia and South Africa. Now Chinese workers are present in many parts of the world with Chinese capital behind them.

Demographic Patterns

A look at China’s GDP statistics reveals the complexity of making sense of the available data. Economists have long doubted the credibility of Chinese official statistics, particularly the over-statement of GDP growth. Dragonomics, a research firm in Beijing, estimates that GDP growth was 5 percent in 1980, -1 percent in 1990, 5 percent in 2000, 13 percent in 2005 and 8 percent in 2008.

The problem with government-massaged statistics is that politically embarrassing bad news is often understated or not published at all. The more eyes there are on China, and the more crucial its economic performance becomes for the rest of the world, the harder it becomes for Chinese officials to tamper with statistics. It should also be noted that GDP figures are significantly distorted by regional variation. National averages conceal regional inequalities, such as the poor Guizhou region compared with rich Guangdong. The cities of Shanghai and particularly Hong Kong have high per capita income levels. Besides the regional inequalities, there is a serious wealth gap between city and countryside. Where city dwellers have washing machines and colour TVs, the most widely owned consumer durable in farm households – found in 70 percent of them – is likely to be the sewing machine.

Helped by the one-child-per-couple policy introduced in the late 1970s, China has a large working-age population with a small number of dependents. But as the number of young workers starts declining, the “demographic bonus” will run out. According to the UN’s World Population Prospects (2004 revision), China’s 15-24 age category stood at 18 percent of the population in 1990 and is expected to decline to 12 percent in 2010 and to 10 percent in 2050.

China’s growing dependency ratio could be mitigated by the rising productivity of an increasingly well-educated workforce. It could also be counteracted by productivity gains as more people migrate from rural to urban areas, and by an easing of the one-child policy. China’s population control measures are said to have resulted in some 300 million fewer births in the last 30 years. But while such measures may have helped to ease pressure on scarce resources and reduce poverty, they are also aggravating demographic imbalances that could undermine these gains.

In the next two decades, the proportion of China’s population aged 65 and over will begin swelling rapidly while growth of the working-age population stalls. If current trends continue, the ratio of working-age persons to retirees will fall from six in 2004 to two in 2040. That will impose huge financial burdens in meeting pension commitments to the elderly. Urban China, in particular, is facing the “4-2-1 phenomenon”: four grandparents and two only-child parents supported by a single child.

The sex ratio is also becoming increasingly skewed. Cultural bias in favour of males produced an officially recorded ratio at birth of 118 boys to 100 girls in 2000; the normal ratio is about 105 to 100. But some births are not recorded in order to avoid reprisals by zealous family-planning officials. A further distortion is caused by sex-selective abortions.


The desire for large families has also been blunted by China’s transition in recent years to a market economy. Health care, education and housing, once provided virtually free, are now costly. Even in some rural areas where the authorities have experimented with allowing farmers to have two children unconditionally, parents have shown little inclination to enlarge their families. Still, there is a growing feeling that a two-child policy would be more suitable. Rich families are increasingly willing to pay the fines for extra children, or sometimes even buy in vitro fertilisation treatment. Some try to have a second child abroad, so that the child can get a foreign passport and not be counted by Chinese family-planning officials.

The Chinese government has introduced a new pension scheme whereby it makes investments on behalf of individual workers and then pays them pensions from their individual accounts. This scheme is highly dependent on the development of a mature bond and equity market for its success.

Centralised Government

China’s political system is said to be one of “sharp elbows and centrifugal forces”. But despite the tensions between the centre and the periphery, China’s political system has not disintegrated into a warring quagmire of fiefdoms as happened in Eastern Europe.

At the centre there are perhaps no more than 200 unelected, often elderly party veterans who have kept control of the country as a whole and of the reform process. They have succeeded in holding on to control by devolving responsibility for economic growth to the local and regional levels, by way of a highly disciplined totalitarian single-party system. Party discipline is enforced by keeping tight control over the criteria for party membership and especially over the hiring and firing of local and provincial officials.

The collection of tax revenues is centralised, while the granting of credit to localities, regions and state enterprises is carefully apportioned and rationed. The decision-making process is not subjected to the glare of competitive news media or the pressure of public opinion. Dissemination of information in the public domain is carefully scrutinised and regulated.

Civil Rights in a Totalitarian Dictatorship

The CCP’s grip on power to date has largely rested on its control of the military and the police, buttressed by the benefits of strong economic growth. But in the wake of economic growth normally come rising expectations on the part of citizens. On the part of external observers, the expectations are for a China wedded not just to capitalism but to the principles of open government, private initiative, sound corporate governance and the rule of law. Elsewhere in the world, rapid economic growth has often gone hand in hand with political change.

However, in China there are no clear indications of the anticipated rising expectations manifested in participatory civic life. Serious political reform does not yet appear on the CCP’s agenda. It appears that there are, as yet, no particular rallying causes active in China today. Sporadic public protests take place in the countryside as well as in towns and cities, but the central government still enjoys considerable support. Angry peasants from time to time direct their resentment at rural authorities rather than at the central leadership of the CCP. The traditional view of government is based on a positive and favourable image of the good emperor – if only you could get through to his office through all the layers of bad bureaucrats. Thousands of people go to Beijing every year to seek redress of local injustices. Their inevitable disappointment does not dim their belief that the local authorities, rather than the top leadership or the system itself, constitute the core of the problem.

182

Outspoken newspapers are summarily closed in order to stifle public criticism of government. Dissidents who have posted their views on the internet are jailed. Party officials are subjected to intensive indoctrination campaigns. Congresses are usually mere rubber-stamp events held every five years to confirm party policy and name new leaders. Inter-personal contests amongst leaders take place behind the scenes and outside the public glare. Only a few isolated broad strands of political movements can be identified amongst intellectuals and party officials.

Growing inequalities of wealth and access to public services have prompted strident criticism of economic “neo-liberalism”. Champions of a more caring, worker-friendly kind of capitalism are dubbed the “new left”. Wang Hui, editor of the outspoken literary journal “Dushu”, propagated the idea that workers should be allowed to have independent trade unions rather than impotent party-controlled unions. He also argued against the way state-owned enterprises are being privatised, fearing that an oligarchy of a wealthy elite controlling the country’s resources might be created, as happened in Russia. New-left intellectuals also opposed allowing domestic private investment in the state’s hitherto jealously guarded preserves such as energy, railways and telecommunications.

Chinese Law and Totalitarianism

China’s legal structure is loosely based on a civil law system, largely derived from Soviet-era civil code principles founded on the political philosophy of state totalitarianism, but also incorporating elements of long-established imperial legal codes. It is essentially controlled within the framework of the one-party rule of the Chinese Communist Party. This means the judiciary is not independent; judicial proceedings are usually not open to public scrutiny and hearings are mostly conducted in secrecy. The concept of state security overrides any competing legal claim, and the applicable rules are neither transparent nor subject to established legal principles.

In the Western World, the “rule of law” is understood in terms of established liberal constitutional democratic principles (involving the sovereignty of law conditioned by the separation of powers, the independence of the judiciary, the safeguarding of individual rights and the audi alteram partem rule). The “rule of law”, in this sense, is not applicable in China. The overriding characteristic is that the dictates and interests of the Chinese Communist Party government enjoy ipso facto pre-eminence and priority over the interests or claims of any individual person or company. It is a totalitarian one-party dictatorship.

The legal drama involving employees of the international mining company Rio Tinto in 2009/10 illustrates the opaque nature of Chinese law. There is a murky line in China between state and commercial interests or “secrets” in industries where government monopolies mean the big players are all state-owned. As a result it is not clear where the line lies between legitimate commercial information gathering and “criminal” action to obtain “state secrets”. The same opaqueness applies to the realm of civil rights: citizens’ rights are defined and determined by the totalitarian rule-making bodies of the Communist Party.

Oil, Coal and Pollution

China is the world’s second biggest oil importer behind the USA, with imports meeting some 40 percent of its demand. The government is committed to a car-led development path, which implies continued growth in oil consumption. Some 45,000 km of expressways have been built or are under construction. The government is also supporting a domestic car industry, which it sees as an engine of future growth. The number of cars in China leapt from just 4 million in 2000 to 19 million in 2005. This figure is predicted to reach 130 million by 2020.

China is the world’s biggest producer of coal, digging out 2.2 billion tonnes in 2005, when coal also accounted for 80 percent of China’s energy use. China is also breaking new ground in liquefying coal to make oil substitutes, which may in the long run help change its energy mix. But abundant use of coal also meant that China overtook the United States in 2009 as the world’s biggest producer of carbon emissions. In 2007 its share was 17 percent of the world’s total, against the USA’s 22 percent. Twenty of the world’s most polluted cities are in China.

Strategic Issues

The rapid spread of internet technology in China in recent years has provided new forums for citizens to air their views. Unfortunately China’s internet censors closely monitor debate on internal issues, so that broad public participation is severely constrained. This also limits the ability of the free world to penetrate the “bamboo curtain”.

To the disquiet of the Free World, China has followed a consistent path of cosying up to pariah governments around the world – Venezuela, Zimbabwe, Sudan, Iran and North Korea. China imports 11 percent of its oil requirements from Iran – despite efforts from the UN to impose sanctions on Iran to prevent it from developing nuclear weapons.

China’s growing military budget reflects its growing wealth and prestige, along with its desire to protect its rising shipments of oil and other commodities. But China has also test-fired rockets demonstrating that it is able to destroy satellites in space.

China is also building roads, ports and pipelines in Myanmar and Pakistan, connecting west and south-west China with the Bay of Bengal and the Indian Ocean. These links could serve as future supply routes for the Chinese navy. The extension of the Qinghai-Lhasa railway could facilitate the transport of military materiel to the Tibetan border – and, if necessary, put strategic pressure on India.

China’s People’s Liberation Army (PLA), at 2.5 million, may be the largest in the world and, together with North Korea’s 2 million, constitutes an impressive military force on paper. In reality it does not appear to be well trained or well equipped. Its officer class is rife with party and family nepotism. Many of its soldiers are semi-literate rural peasants. China’s military technology also has much catching up to do. It is nevertheless huge in terms of sheer numerical preponderance.

China has been systematically hiding from its population both its own history and conditions in the world outside. Much is done to constantly remind the population of the Japanese atrocities during the occupation of China in World War II. But Chinese people are not told that, under past rulers, Chinese regimes committed cruel atrocities on their own people. The civil war between the Communists and the Nationalist Kuomintang (KMT) claimed as many lives as the Japanese occupation. After the Communist victory in 1949, an estimated 2-3 million landowners were killed in the early 1950s and many intellectuals died during the anti-rightist movement of 1957. To cap it all, no fewer than 30 million fell victim to the famine that followed the “Great Leap Forward” (1958-61). New research also suggests that millions more than first thought died during the state-sponsored anarchy of the Cultural Revolution of 1966-76. By those standards the few hundred killed in the Tiananmen Square massacre in 1989 are considered insignificant by the Chinese Communist Party.


Although the power of Communist ideology is much reduced as the young generation immerses itself in the promise of the new prosperous era, the vacuum appears to be filled by a strong sense of chauvinistic nationalism. For the present that nationalism is focused on the reintegration of Taiwan. It appears that China might hold the preponderance of power to force Taiwan’s return to the fold. In the interim, both sides of the divide probably realise that an unleashed conflict is too costly to contemplate.

Reactionary nationalist sentiments have deep roots in the Chinese intellectual tradition. As the global economy sputtered there were many signs of the revival of an extreme fringe group that pines for Maoist egalitarianism, state ownership and anti-West action. A clutch of websites in China are actively spreading pro-communist rhetoric suffused with a sense of China as victim and yearning for revenge. Although reactionary Maoism is not likely to make a comeback soon, its nationalism has a broad appeal.

This simmering nationalism was also stimulated by China’s sporting triumph at the Olympic Games in Beijing in August 2008. The West presented a gratifying target for pent-up contempt. Even the normally cautious government felt tempted to flex its muscles on the world stage. (See The Economist, “China and the West”, March 21st, 2009, pp.29-31)

For most of the period since 1990, China has played a cautious game internationally. Deng Xiaoping set the tone with his concise guidelines: China should keep a low profile, not take the lead, watch developments patiently and keep its capabilities hidden. The global economic crisis and the West’s obvious weaknesses created new opportunities. In a speech at Cambridge University in February 2009, Wen Jiabao, China’s Prime Minister, stressed that China’s development was no threat to anyone because it is a “peaceful and co-operative great power”. Chinese leaders are careful not to fuel suspicions in the West that China is a threat. China would like to be number one, but would at this stage rather get there without making big enemies.

Prospects

China’s spectacular recovery from the stagnant depths of the Maoist era (1949-1976) is truly remarkable. A recent study by Goldman Sachs projects that China’s economy will be bigger than America’s by 2027, and nearly twice as large by 2050. Some futurologists predict that the world will by then live under a Pax Sinica, with the dollar replaced by the renminbi as the world’s reserve currency, and New York and London replaced by Shanghai as the centre of finance. Global citizens will use Mandarin as much as, if not more than, English, and the thoughts of Confucius will become as familiar as those of Plato. European countries will become quaint relics of a glorious past, like Athens and Rome today.

With the West in financial turmoil and its leaders seemingly desperate for cash-rich China to come to its rescue, Chinese leaders can see many strategic opportunities: to acquire assets at bargain prices and to exploit political vacuums in many international hot spots like the Middle East and Sub-Saharan Africa. It is now the Chinese who are doing the lecturing.

Simplistic political and economic extrapolations, however, do not take into account the many uncertainties and imponderables that can come into play. China, like many other countries, also faces the problems of an ageing population, of expectations rising faster than the capacity of the system to deliver, of the destructive power of corruption and nepotism, and of the abuse of political power. The mere logistics of governing this vast country with its huge population pose gigantic challenges to whatever system of organisation and management is brought to the task. China is also heavily dependent on its collaboration with the West: its technology, its markets, its natural resources and its investments.

Optimistic expectations of the emergence of a free, open constitutional democracy in China are wishful “thought bubbles”. China has never experienced an open pluralistic society. It has developed neither the mindset of civic consciousness nor the associational, community-based framework to serve as a foundation for a democratic infrastructure. Sporadic and isolated outbursts of discontent may occur, but a deep-rooted democratic transformation appears to be still decades, if not generations, away.

In 2012 Xi Jinping emerged as China’s new party chief and state president. He is a “princeling” whose father took part in the “Long March” of 1934/1935 in the company of Mao Zedong and Deng Xiaoping. A better pedigree in China is hardly imaginable. On several public occasions Xi Jinping has indicated that he considers current China at the cusp of momentous change and reform. Many foreign observers and commentators expect the third plenum of the 18th central committee of the Chinese Communist Party to be of seminal importance in determining the direction of China’s future development.

The main areas of reform expected to be in focus are, firstly, improving the management efficiency and accountability of the remaining part of the economy still under the control of state-owned enterprises (SOEs) and, secondly, land reform in the countryside to modernise the parts of the rural economy still under the collective control of local party bosses. It remains to be seen whether Mr Xi Jinping will go down in Chinese history as a worthy successor to the trailblazing work done by Deng Xiaoping in the period 1978-1994.

Bibliography

Goodman, S.G. (1994) Deng Xiaoping and the Chinese Revolution: A Political Biography, London: Routledge
Kaizuka, S. (1956) Confucius, London: Allen & Unwin
Lieberthal, K. (1995) Governing China: From Revolution Through Reform, New York: W.W. Norton
Redding, S.G. & Wong, G.Y.Y. (1986) “The psychology of Chinese organizational behaviour”, in M.H. Bond, Ed., The Psychology of the Chinese People, Hong Kong: Oxford University Press
The Economist (1991) “Where tigers breed: a survey of Asia’s emerging economies”, November 16th, pp.5-24
The Economist (2000) A Survey of China, April 8th, pp.3-23
The Economist (2002) A Survey of China, June 15th, pp.3-18
The Economist (2004) The Dragon and the Eagle, October 2nd, pp.3-24
The Economist (2006) A Survey of China, March 25th, pp.3-20
The Economist (2009) Chinese Business, February 21st, pp.61-64
The Economist (2009) China and the West, March 21st, pp.29-31
The Economist (2011) Rising Power, Anxious State, June 25th, pp.3-18
The Economist (2012) Pedalling Prosperity, May 26th, pp.3-20
The Economist (2013) “The moment for Xi Jinping to prove that he is a reformer”, November 2nd-8th, p.13; “Chinese land reform”, op.cit. pp.23-25; and “Changing the economy”, op.cit. pp.33-34


12. The Impressive Growth Potential of East Asia (November 2013)

A journey through parts of East Asia is a first-hand exposure to the kaleidoscope of life in the most dynamic region of today’s world. Everywhere masses of people are encountered, building projects abound, quaysides in harbours are packed with mountainous container stacks and the traffic of lorries, buses, cars, motorcycles and bicycles is a nightmare to negotiate. Even crossing a street is an adventure because traffic rules everywhere appear to be optional.

The fourteen countries of East Asia have a total population of around 2 billion, of which the vast majority are of Mongoloid origin, i.e. neither European (“Caucasian”) nor African (“Negroid”). But as any assembly of ASEAN leaders shows, each of these countries has its own distinct physical appearance, national cultural traditions and history. With the Chinese component so numerically preponderant, it is understandable that Chinese influence should predominate. Strategic pockets of Chinese have settled in Singapore, Malaysia, Thailand, Indonesia and the Indo-China countries. The “Overseas Chinese”, largely represented by internally owned and controlled family businesses, play an important role in most South-East Asian economies.

In contrast to the sclerotic economies of the Western world, East Asia continues to experience strong growth and expansion on every economic front, with some countries moving faster than others along the road to prosperity. Japan, South Korea, Taiwan, Hong Kong and Singapore are the front-runners, with colossal China rapidly catching up. The middle ground is held by the aspirants, Malaysia, Thailand and Indonesia, with Vietnam, the Philippines, Laos, Cambodia and Myanmar following in the rear.

The Growth Momentum of the Trendsetters

The variation in success rates between the various countries inevitably raises many questions. Why are some so much more successful than others? What hinders and what promotes economic success in societies? What are the economic and strategic implications of this transformation process for members of the Western world? What are the lessons to be drawn from the proven success stories of the “Asian tigers”? The factors that promoted or hindered development and growth in the various countries are largely revealed in the history of post-war and post-colonial reconstruction in the period since the end of the Second World War. A few key determinants merit special consideration:

- the pivotal role played by Japan in providing a template for others to follow;
- the inherent cultural strengths of these oriental communities based on non-disruptive labour relations and a strong work ethic, strong family ties, a family-rooted entrepreneurship culture, effective business networking and a high propensity to save for domestic business investment;
- export expansion policies and strategies that mobilised domestic productive capacity (labour productivity, investment capital, appropriate financial institutions and instruments, balancing market forces and economic planning, promoting an appropriately trained and educated workforce);
- relative political stability based on land reform, a modernising political elite with strong technocratic backing to ensure macro-economic stability and policy predictability; and,
- interaction with the Western world in the fields of technology, science and trade.


Japan’s Post-War Recovery Template

Japan clearly provided the template for the other countries in the region to follow in the era since the Second World War and still leads the field in terms of per capita income and living standards. After Japan had been governed for centuries by the shoguns as a reclusive feudal state, the American Commodore Matthew Perry steamed into Tokyo Bay in 1853 and opened the country to trade. Japan emerged as a rapacious colonial empire, powerful enough to conquer most of the Pacific Rim countries up to the Indian border, and then, as part of the Axis Powers, attacked the Americans at Pearl Harbour in 1941. When World War II ended with the American nuclear bombs on Hiroshima and Nagasaki, Japan was on its knees. Its economy was shattered and its infrastructure devastated. Reconstruction had to start from scratch.

The American occupation of Japan after 1945 brought in its wake direct exposure to the American way of doing things. After initial hardships brought about by dislocations, shortages and high inflation, the American administration under Gen. MacArthur introduced the “Dodge Plan”, which set the ball rolling for the Japanese economic recovery. In addition, Japan served as a supply base for American forces involved in the Korean War, which began in 1950 when Communist North Korean forces, backed by the Soviet Union and China, crossed the 38th parallel into South Korea. For Japan it marked the start of its post-war export boom.

By the mid-1950s Japan’s steep growth path took off. By 1964, when the Olympics came to Tokyo, Japan’s national income was approaching the West European level. Consumers were acquiring the “three sacred treasures” – televisions, washing machines and refrigerators. By 1970 they had graduated to the “three C’s” – car, colour television and air conditioning. By the 1980s the economy had moved into a rapid technological growth phase. The Tokyo Stock Exchange rivalled that of New York, and eight of the world’s ten biggest banks were Japanese. By 1990 Japan’s exports produced such high levels of foreign reserves that Japanese companies began a shopping spree, acquiring foreign assets such as New York skyscrapers, Hollywood film studios and French paintings.

Japan’s spectacular growth was based on several fundamentals: strong industrial-financial combinations (keiretsu), a large and educated workforce, low inflation and a high savings rate. It absorbed as much American and European technology as it could buy and copy (e.g. transistors). It furthermore nurtured the inherent strengths of Japanese culture: an incredible work ethic, an intense identification with the employer-firm, a strong sense of national identity, a keen desire for a better life – and a searing memory of the humiliation of the World War II defeat. Also central was Japan’s commitment to exporting its way to growth. International trade became the bedrock of Japanese economic progress. Japan moved up the product chain: from textiles and simple manufactures to ships and steel, and on to complex mechanical goods, electronics and high technology. Its growth pattern became the template for its East Asian neighbours.

Industrial Development and Export Promotion

In the early stages of the economic development of Japan, Taiwan and South Korea, their economies were characterised by limited natural resources, an over-supply of unskilled labour and a shortage of capital. The private sector was weak and the government played an active role in planning and encouraging industrial development. Many measures were taken and most were restrictive or protective in nature or took the form of a subsidy (such as tax reductions, tax exemptions and the provision of finance at lower interest rates) for the setting up of specific industries or the manufacture of specific products. The protective measures took the form of high tariff and non-tariff barriers to support the growth of infant industries, as well as the control of foreign exchange in order to make effective use of this scarce resource for importing the machinery, equipment and raw materials needed for industrial production.

Several policies were implemented and measures were taken to encourage the process of industrialisation: encouragement of investment; export processing zones; science-based industrial parks; infrastructure development; and, an appropriate education system.

A common feature of the economic growth of particularly Japan, Taiwan, Hong Kong, Singapore and South Korea was the export expansion strategies that unleashed potential productive resources – especially labour. After a period of import substitution based on tax relief, loans at low interest rates, and tariff and non-tariff barriers, these five economies pursued very active export expansion measures.

Encouraging Savings and Investments

In contrast to less successful countries following policies of low interest rates in spite of facing double-digit inflation, the East Asian countries encouraged their people to save more in order to generate domestic finance for sustained rapid growth. All these countries allowed their domestic interest rates to rise to a reasonable level in their effort to curb domestic inflation. Consequently, saving through domestic financial institutions was encouraged. At the same time, this sort of realistic high-interest-rate policy prevented the wasteful use and misallocation of scarce capital and thus ensured fair returns on investment projects.

The four “Tigers”, South Korea, Taiwan, Hong Kong and Singapore, took care to promote the development of appropriate financial institutions and financial markets to channel funds from savers and lenders to borrowers and investors by expediting the creation and trading of financial instruments: the foreign exchange market, the money market and the capital market.

High positive rates of return on savings deposits helped to mobilise savings into financial institutions, but also made capital markets less attractive. Tax measures were used to encourage savings and investments, e.g. exempting from personal income tax the interest income from savings and fixed-term deposits with maturity terms of two years or more, as well as exempting from corporate income tax the profits that were ploughed back into investment. These inducements resulted in an inflow of voluntary savings into the banking system, which provided much-needed non-inflationary financing for domestic investment. The investment activities made possible by these non-inflationary sources of finance brought about a rapid increase in productivity.

In Taiwan the rapid increase in real income also enhanced savings. In this way Taiwan was converted into a country whose people had a high propensity to save. In 1952 Taiwan saved 5.5 percent of its national income. By 1963 the percentage stood at 13.2 percent and by 1980 savings in Taiwan climbed to the high level of 35 percent compared with 22 percent in Japan and less than 9 percent in the UK and the USA during the same year.

Foreign investment has been encouraged as a national policy in all four newly industrialised economies. The contribution of foreign investment was not only financial but also extended to better technical know-how, more efficient management, the opportunity to import parts and appliances and excellent marketing contacts for export expansion.


Balancing Market Forces and Economic Planning

East Asian economies, at the start of their economic development, faced a series of obstacles: high unemployment, a lack of infrastructure, a high inflation rate, insufficient capital and a lack of entrepreneurial confidence. The private sector was characterised by small, traditional and even unsophisticated enterprises. The governments of these countries had to adopt measures to overcome their difficulties and took a leading role, giving direction to national economic development, while at the same time encouraging the free play of private enterprise.

The nature of economic planning in these countries should not be confused with the rigid systems associated with centrally planned economies. East Asian governments did not invoke the central authority to compel private enterprise to adhere to government guidelines or to meet the targets that the government set. Instead the various governments employed policy measures such as tax reductions or tax exemptions, as well as financial measures to induce private enterprises to develop industries with the potential for growth in terms of comparative advantage in the world market.

With the exception of China, which only opened the field for private investment in the 1980s, industries in all the successful economies of East Asia were privately owned rather than belonging to the government, except in naturally monopolistic industries such as public utilities. This choice turned out to have enormous economic significance because the owner-managers were motivated to make profits and hence prepared to put their own equity capital at risk. These owner-managers had the incentive to maximise profits and minimise losses and to pay careful attention to the consequences of what they did or did not do.

A further factor common to the successful economies was the extension of the rule of law to the economic sphere. This implied not only the enforcement of legal contracts but also the guarantee of due process and the reduction of discretionary administrative powers in economic matters. The rule of law enhanced predictability, the security of returns on investments and the curtailment of nepotism. It meant that the rewards of economic success went to the efficient and not to the merely politically powerful or well-connected, and certainly not to the government of the day.

This is not to say that the East Asian governments have not been interventionist to the extent that they could attain certain recognised public policy objectives. But on the whole these governments have not directed the activities of private firms. They have allowed owner-managers to run their own firms and retain their taxed profits.

An additional important factor was the presence of competition. Most governments have generally refrained from creating or supporting domestic monopolies. The East Asian governments did not compete with private firms. Moreover, the export orientation of these economies implied that most enterprises had to compete internationally and to comply with the discipline imposed by the competitive world market. Private enterprise provided the profit motive, but competition and the rule of law ensured the efficient allocation of resources.

In Japan and Taiwan the government has allowed companies to run their enterprises as they see fit within the bounds government policies have set. In sluggish India, on the other hand, the government tended to control everything and ended up serving only the vested interests of a handful of big businesses.


Upward Mobility and Political Stability

In Japan, South Korea, Taiwan and also China (under Deng Xiaoping), comprehensive land reform policies were introduced to redistribute land. Under Taiwan’s “land-to-the-tiller” policy, which was implemented in the 1950s, large tracts of land originally owned by a small number of landlords were expropriated with government compensation and transferred to the tillers. These tenant farmers were then placed in a position to manage their own farming operations after obtaining ownership of the land. The former landlords, in turn, invested their money in industries, thus creating job opportunities for the under-employed surplus labour force on the farms. The successful implementation of land reform solidified Taiwan’s agricultural development and helped stabilise social and political conditions. This achievement also provided a favourable climate for the development of the industrial sector.

An obvious benefit of the successful and trendsetting East Asian countries lies in their political stability. Over the period 1960-1990 there was not a single instance of the transfer of power to another political group in any of the relatively successful economies. China remained a communist dictatorship. Singapore, Indonesia and Malaysia have each been governed by the same authoritarian party since achieving independence. South Korea and Taiwan have begun to liberalise, but continued to be ruled by the same “old guard” elite structure. Hong Kong remained a colony of the United Kingdom until 1997. Indonesia’s President for almost two decades, Suharto, had military backing. In Thailand the generals and civilian politicians took turns in forming the government while a small group of technocrats appeared to be running the country. This elite group’s first loyalty was to the king, who gave stability to the nation. Although the political lessons learned from the emerging economic powers of East Asia may be unacceptable to persons with a strong liberal-democratic persuasion (such as the author of this essay), such examples do not necessarily mean that economic growth requires authoritarian government per se. Emerging Asia’s governments have been economically enlightened. They have not flinched from taking tough measures to maintain macro-economic stability and have ensured that economic policies are predictable and transparent.

An additional feature of the prevalent style of government in South-East Asia is the predominant roles played by small cliques of unelected technocrats. These bureaucratic elites are found in the feudal fiefdoms of the Japanese manufacturing industry (“samurai” and “keiretsu”); in the corridors of Japan’s Ministry of International Trade and Industry (MITI); the modernising Chinese Mandarin elite officials who are occupying key positions in Taiwan, Hong Kong and Singapore; the so-called “Berkeley Mafia”, a group of hand-picked technocrats who have won accolades for their role in assisting Indonesia’s growth in the 1980s.

Education, Training and Technology

The single biggest source of comparative advantage for the successful East Asian countries lies in their well-educated and well-trained workforce. Confucius wrote: “If you plan for a year, plant a seed. If you plan for ten years, plant a tree. If for a hundred years, teach the people.” This advice has not been forgotten in East Asia. Japan introduced compulsory elementary education in 1872 and became one of the world’s most education-conscious societies. This obsession rubbed off on Japan’s former colonies, Korea and Taiwan.

In recent years a Korean teenager is said to be more likely to go to university than his Japanese peer. Korea spent 5 percent of its GDP on research and development in 2000. Taiwan’s achievement is equally impressive. It has 42 universities and 75 polytechnics turning out 40,000 engineers and 140,000 technicians each year. In 1980 one in every four candidates for doctorates in electrical engineering at American universities came from Taiwan. Taiwanese companies were becoming leaders in many fields of technology.

From the experience of these countries it is clear that sound education (not education per se) improves the quality of the labour force and enables its members to accept advanced and sophisticated productive technology. It also induces innovation and invention which, in turn, enhance efficiency and productivity.

An Effective Entrepreneurship Culture

East Asia’s miracle economies have been driven by Asian enterprises which flourished long before the arrival of the Europeans, and had skilled craftsmanship, trade and commerce forming an integral part of traditional life. The growth impetus came from entrepreneurially driven enterprises: some large, many medium-sized and a multitude of smaller ones. Most were family owned.

By 1990, Japan had 6.5 million businesses in operation. Only 46,000 of these could loosely be described as large corporations; the rest were small to medium-sized firms. The vast majority of these – some 5.6 million firms in all – were active in the services and tertiary fields. The remaining 900,000 have traditionally been the loyal burden-sharers for Japanese manufacturing as component suppliers to big manufacturers. These suppliers usually have to absorb the unemployment costs when big firms take back their sub-contracted work and do it in-house during economic downturns. Subsequently, the proportion of small firms earning their living by sub-contracting declined to less than 50 percent of the total. Many became market producers in their own right, while others captured new markets – at home and abroad.

All the emerging countries of East Asia were found to show certain values and structural characteristics embedded in their societies which facilitated and enhanced entrepreneurship in both of its main features: the initiating as well as the co-ordinating side of business operations. East Asian society accords high status to businessmen and has developed systems which facilitate innovation and support such values as risk taking, achievement, wealth creation, co-operation, trust and professionalism. In addition, favourable “structural factors” were present such as a sound education system that provided training in professional and managerial skills.

The Chinese family business, dominated by the pater familias and internally owned and controlled, plays a prominent entrepreneurial role in most East Asian economies. The so-called “Overseas Chinese” form a highly significant economic group, particularly in Taiwan, Hong Kong and Singapore. Various research projects have found that the “Overseas Chinese” maintain value systems and kinship structures which facilitate the initiating facet of entrepreneurship. The heavy reliance on family as the principal unit of society means that survival and security are closely allied to the success of the family business. The capital resources required are provided by the savings of family members – or by private loan associations consisting of relatives and friends. Family members share all kinds of jobs – from manual labour to management. Their willingness to work hard has kept labour costs low and created a favourable environment for business enterprise. This environment has inevitably spawned a large number of small firms, further spurred on by values which support being an owner rather than an employee. On the negative side it was found that the family business context poses barriers to the higher levels of co-ordination necessary for the growth of large firms, as the forces of family control struggle with those of neutral professionalism.


As regards the other indigenous population groups in South-East Asia, it was found that prevailing socio-cultural values have been barriers to the crucial first entrepreneurial stage of initiating. Their values about going into business appeared to be less encouraging than in the Chinese case. Family survival is less sharply an issue in people’s perceptions. Risk-taking is less of a norm and conservative orientations are likely to dampen innovative tendencies. The structural factors, particularly the labyrinthine bureaucratic controls, but also misguided economic policies and financing systems, are less conducive to either initiation or co-ordination activities than they are in the cases of Japan, Taiwan, South Korea, Hong Kong and Singapore. These findings suggest a close association between the emergence of entrepreneurial activity and the economic milieu created by government policy and by the socio-cultural features of society.

Effective Business Networking

Another noteworthy characteristic of the successful East Asian economies is their highly effective system of business networking – the best example of which is found in Japan. Giant parent companies like Matsushita, Toshiba, Hitachi, Sony, Fujitsu and others stand at the apex of their vertically integrated business pyramids. Each manufacturing “family” of dozens of related companies includes some that are large and powerful in their own right, many of which are listed on the stock exchange. Some family members may be related in a “federal” loose-jointed structure, while others are more centralised and “unitary” in structure. Beneath the parent company’s directly associated corporate “family” are the trusted retainers – primary sub-contractors – beneath which lie further layers of sub-contractors. Rather than doing their own designing and manufacturing, the various parent companies, called “dai kigyo” or “keiretsu”, co-ordinate a complex design and manufacturing process that involves thousands of medium-sized and small companies.

Small to medium-sized companies called “chu-sho kigyo” make up the bulk of Japanese industry and form the real foundation of the Japanese economy. In 1990, it was estimated that over 75 percent of Japanese companies were capitalised below US$70,000 and only 1 percent of all companies in Japan were capitalised over US$700,000. Small and medium-sized enterprises employed over 80 percent of the national workforce.

But even the giant “keiretsu” manufacturing companies are integrated into large industrial groups. Some are part of the influential pre-war Big-Six “Zaibatsu” cliques, e.g. Mitsui (Nimoku club), Mitsubishi (Kinyo club), Sumitomo (Hakusui club), Fuji (Fuyo club), Sanwa (Sansui club) and Dai-Ichi Kangyo (Sanzen club). Others include non-zaibatsu groups formed around top manufacturing companies, banks and retailing conglomerates such as Toyota, Hitachi, Matsushita, Seibu and Tokyu. In step with the era of high technology, industrial groups are pursuing greater co-operation among member firms, organising joint research and development projects in the fields of new materials development, biotechnology, electronics and data communications.

Japan’s awe-inspiring trade structure hinges on large trading groups called “sogo shosha”. These general trading companies differ from other giant groups in that they are not necessarily involved in the manufacturing field. Instead, they tend to be orientated to both the supply and demand side and to function as problem solvers. Their investments are directly connected with trade and international business. The most prominent among them are the Mitsubishi Corporation, Mitsui & Co., Itoh & Co., the Sumitomo Corporation, the Marubeni Corporation, Nissho Iwai, Toyo Menka Kaisha, Kanematsu-Gosho and Nichimen. With the support of MITI they became the advance guard of Japan’s export drive. Since they played such a crucial role in exports and imports, they served as catalysts for Japan’s rapid economic growth. They have vast communication networks spanning the world, collecting and transmitting data on day-to-day commodity price fluctuations, markets, and areas of surplus and shortage. They are involved worldwide in virtually any kind of commercial transaction, reaching far beyond trade itself into resource development, finance and the organisation of industrial projects, and they offer clients a wide range of services: information, expertise, insurance, shipping, etc. Their total annual sales amount to over 40 percent of Japan’s GNP.

Japan’s business world is well organised to protect and further its interests in government, political circles, organised labour, the media and foreign countries. There are four core organisations that function as vehicles for this rather intangible business activity: the Federation of Economic Organisations (“Keidanren”), the Japan Committee for Economic Development (“Keizai Doyukai”), the Japan Federation of Employers’ Associations (“Nikkeiren”) and the Japan Chamber of Commerce and Industry (“Nissho”). Officials who serve in these organisations are top executives from leading corporations and many have posts in two or more organisations.

By far the most important and powerful is the “Keidanren”, a conglomeration of big businesses. Its chairman is often referred to as the “Prime Minister of Commerce”. The “Keizai Doyukai” is an assembly of individual business leaders and its influence is more of an ideological or theoretical nature. “Nikkeiren” is, broadly speaking, an employers’ bastion and its activities are primarily targeted at the affairs of labour. The Japan Chamber of Commerce and Industry (“Nissho”) is an organisation for small and medium-sized enterprises.

The “Keidanren” is a nationwide body with 117 associates and numerous corporate members. Its central organ of decision-making is the board of directors with 470 members. It performs the following roles:

- It consolidates and co-ordinates the views of the business community and conveys these views to government and political parties;
- It responds to official and unofficial requests for counselling and recommendations;
- It provides representation on advisory organs of government, enabling it to participate in the formulation of national economic policies and strategies;
- It maintains close day-to-day communications and contacts with officials of various government agencies involved in economic matters;
- It maintains bilateral contact with countries of particular economic importance to Japanese business.

Work Ethic and Non-Disruptive Labour Relations

In societies where work is regarded as a virtue and idleness as a vice, it is hardly surprising that the successful East and South-East Asian economies have managed to maintain non-disruptive labour relations and continuous economic expansion. The Confucian philosophy coupled with Chinese and Japanese tradition appears to have engendered the ability and willingness to work hard and for long hours. The general labour relations picture is characterised by the employee’s strong sense of affiliation to the enterprise, weak umbrella unions and the dual structure of labour-management relations.

The strong sense of affiliation to one’s employer apparently arose out of Japan’s feudal history and traditional social structures. The greatest assurance of economic security for a worker lay in gaining a permanent attachment to the employer company. This pattern led to the emergence of “enterprise” unions, consisting of workers in a given company or plant. These unions cross trade or skill barriers and usually include both blue-collar and white-collar employees. Any antagonism between

management and workers is perceived as a family or household dispute. A specific time each year is set aside for handling disputes between labour and management; this is known as “Shunto”. This annual bargaining for higher wages each spring began under the leadership of Sohyo (the General Council of Trade Unions of Japan) to support the position of “enterprise” unions and the largely non-unionised sector – Japan’s myriad small and medium-sized enterprises.

Post-war industrial peace in Japan as well as in other East Asian countries has been based on the realisation that the enterprise encompasses a community of people – both management and other employees – who are bound together by a common destiny. Employees in major Japanese corporations form the heart of the corporations in the same way as shareholders form the nexus in US/European corporations. In addition, the Japanese worker has become both producer and supervisor on the shop floor, taking a subjective interest in improving production methods in the certain knowledge that the company has to survive in Japan’s highly productive and competitive environment.

Recognising satisfactory performance is another way of linking the fortunes of the worker to those of the enterprise. Japan has a system of annual bonuses paid to all employees, depending on how the company has fared. This system is part of the catalyst that makes Japanese enterprises highly efficient in marshalling their employees’ energies.

Also in the case of the high-performance economies that share the Chinese culture, enterprises adhere to the Chinese tradition in labour relations which promises both flexibility and employment stability. Bonuses are paid to workers at major festivals and at the end of the year. Relations between employer and employee are more permanent than in the West, and employers are subject to moral pressure to take proper care of their workers.

Integration of Traditional and Modern Management Styles

The high-performance economies of East Asia followed in Japan’s wake in combining traditional Pacific values and styles with modernised management methods, styles and systems. For several decades the Japanese have been perceived as having perfected the art of taking apart other people’s products and working out how to make them almost as well and a great deal more cheaply. They have eventually shed the image of merely being good imitators and have entered a new era of creativity in management and industry.

For many years there was a heavy reliance in Japan on Western, particularly American, management approaches and methods. Of particular relevance were Frederick Taylor’s “scientific management”, Deming and Juran’s “quality control”, Larry Miles’ “value management”, Chester Barnard’s “managerial co-ordination” and Peter Drucker’s “management by objectives” (MBO). Although these approaches and systems have been superseded in Western business schools by other fashionable concepts and fads, the original theories not only thrive in East Asia but have been adapted and developed into company-wide systems. One such example is the “value engineering” (VE) methodology, which aims to improve product and service functions, reduce costs and improve work efficiency. The technique analyses product function improvements through creative techniques such as brainstorming and the analysis of alternatives. Other examples are Quality Assurance (QA), to assure product reliability, and Technology Promotion (TP), focussing on product promotion and on rewards for invention and creative ideas. Japan’s success in adapting and extending Western management techniques is akin to its successful exploitation of Western electronic technology.


Low Dependency Ratios

A demographic trend that underpinned much of East and South-East Asia’s economic success has been the favourable ratio of the region’s dependants – those aged 15 and under or 65 and over – to its working-age population. Since the 1970s this ratio has fallen from 80 percent (including many children) to 55 percent. For the whole region it is estimated to reach its lowest point, around 49 percent, in about 2015.
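The dependency-ratio arithmetic described above can be sketched in a few lines; the population figures below are purely illustrative and are not taken from the essay – only the 80 percent and 49 percent benchmarks are:

```python
def dependency_ratio(young, elderly, working_age):
    """Dependants (aged 15 and under, or 65 and over) per 100 working-age people."""
    return 100 * (young + elderly) / working_age

# Hypothetical populations (in millions) chosen to reproduce the essay's benchmarks:
# roughly the 1970s level of 80 percent ...
print(dependency_ratio(300, 100, 500))  # -> 80.0
# ... and the projected 2015 trough of 49 percent.
print(dependency_ratio(150, 95, 500))   # -> 49.0
```

A falling ratio means each worker supports fewer dependants, which is why the essay links it to rising discretionary spending.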

This demographic dividend (which excludes Japan) means that a swelling cohort of working-age persons – comparable to the USA’s post-World War II baby-boomers – is having a favourable effect on consumption patterns and on expectations of a better future, including more discretionary spending. It is also argued that the region’s unofficial or black-market economy is as much as 50 percent of the size of the official economy (Japan excluded). This part of the economy could act as a better engine of higher consumption because it is not taxed at source.

As high savings fuelled much of the initial export-driven growth of the region, the new generation of baby-boomers could lead to a virtuous cycle of higher domestic consumption. It would encourage a more balanced development pattern, less dependent upon exports to unreliable foreign markets and more dependent on local consumption.

Reconstruction Singapore Style

Singapore is an island city-state with a population of around 5 million: Chinese make up 77 percent, Malays 14 percent and Indians 8 percent. For over 140 years of British rule until 1959, it had a simple entrepot trade economy with little agriculture and no industry. Since 1959 it has striven towards inter-racial harmony by ensuring minority representation through group representation constituencies. These required the group of four candidates for each constituency to include at least one candidate from a minority racial group.

Fifty years ago Singapore suffered from political unrest, high unemployment and low economic growth. It faced communist insurrection with strikes, go-slows, arson, political assassinations and general disorder. When Mr Lee Kuan Yew took over as Prime Minister in 1959, the establishment of stability became his first priority. Thereafter came the restoration of social and work discipline. These objectives took several years to achieve, but they paved the way for economic growth through investments and trade.

Tripartite bodies of labour, management and government were established in a National Wages Council to set guidelines on what wages the economy could bear in each sector. Similarly a National Productivity Board was established to get workers and management to learn productivity techniques and to build up the spirit of co-operation so that productivity could improve and so increase profits and wages.

Once economic growth got going in earnest, the government launched a massive programme of public housing to give each worker a stake in the country’s progress. A compulsory savings scheme was built up, to which employers and employees each initially contributed 5 percent of wages. Today this contribution has increased to 20 percent of wages from each side. Workers can borrow from this fund to buy their own homes and still retain a retirement nest egg. Today nearly 90 percent of workers own their own homes, bought with these savings.
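The power of this compulsory scheme lies in compounding: 20 percent of wages from each side amounts to 40 percent of pay set aside every month. A minimal sketch of the arithmetic, with a hypothetical wage and a hypothetical interest rate (the actual rates credited to such accounts are not given in the text):

```python
# Illustrative compulsory-savings accumulation: employer and employee
# each contribute a share of monthly wages (5 percent each initially,
# 20 percent each today). Wage and interest figures are hypothetical.

def accumulated_savings(monthly_wage: float, years: int,
                        rate_each_side: float = 0.20,
                        annual_interest: float = 0.03) -> float:
    """Accumulate employer + employee contributions with yearly interest."""
    balance = 0.0
    yearly_contribution = 12 * monthly_wage * rate_each_side * 2  # both sides
    for _ in range(years):
        balance = balance * (1 + annual_interest) + yearly_contribution
    return balance

# A worker earning S$2,000 a month over a 30-year career:
print(f"S${accumulated_savings(2000, 30):,.0f}")
```

Even at a modest assumed interest rate, three decades of contributions build a sum large enough to fund both a home purchase and a retirement nest egg, which is the mechanism the paragraph above describes.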


Singapore’s key strengths in fostering growth have been meritocracy and an efficient and corruption-free administration. A person earns his place on merit; race, status, and the connections or influence of his parents or friends are conscientiously screened off or filtered out. This was not easy because, in the aftermath of victory in the post-independence elections, party stalwarts wanted to be rewarded for their loyalty and support – the only virtues they believed merited recognition. Meritocracy had little appeal to the previously disadvantaged, but the leaders had to moderate and temper expectations. They had to resolve the contradictions between the aspirations of their supporters and the realities of the economy, and they were faced with enormous pressures to redress the inequities of the past distribution of opportunities and wealth. According to Lee Kuan Yew, the art of government in such a situation is to redress grievances in a manner, and at a pace, that does not turn the rules of economics upside down.

The Experience of the Laggards

The post-war experiences of the economic laggards of East Asia provide ample evidence of the factors that block development: internecine strife; ideological authoritarianism; unaccountable military-based domination; chronic nepotism, corruption and cronyism driven by secretive patronage networks in the hands of a kleptocracy of oligarchs; and the concentration of power in the hands of megalomaniac leaders. There are several examples of the destruction of warfare and of utterly bad and corrupt systems that stood, or are still standing, in the way of development in East Asia:

- the rule of North Korea’s Communism under the leadership of three generations of megalomaniacs;
- the cruel legacy of Pol Pot’s Khmer Rouge, who killed close to a third of the population of Cambodia during the period 1975 to 1979;
- the obscurantist Communist government clique in Laos, working incessantly to isolate the country from any external influences except those seeping in from the Yunnan province in China;
- the destruction of the American-led war in Vietnam (1963-1974), which imposed a heavy price on the 90 million Vietnamese who are now trying to build a better future by opening up their devastated country to foreign trade and investment;
- the rebirth of a vibrant Buddhist society in Myanmar after 50 years of isolationist military rule;
- the struggle to mobilise a population of almost 100 million Filipinos, speaking 87 indigenous languages, into the modern age after many years of economic stagnation – first under Spanish and then American colonial government, Japanese occupation during the Second World War, and the corrupt post-war Marcos government, which was followed by a succession of bad governments led by Aquino, then Ramos, then Estrada, then Arroyo and then Aquino III – in a country that remains desperately poor, with GDP per capita at $2,570 per annum.

The Experience of the Aspirationals

On a slightly more promising level of progress are Malaysia (population 29 million), Indonesia (population 250 million) and Thailand (population 70 million).

The Malaysian success story involves a significant scale of economic development under Dr Mahathir while keeping its multi-racial and multi-religious community reasonably stable. It serves as a role model for many other Islamic countries where economic failure and government heavy-handedness are predominant. Friction between the majority Malay and minority Chinese and Indian communities has remained a source of tension for many decades. Political parties and community life are divided along racial lines, with the 24 percent Chinese population in control of large parts of business life. The New Economic Policy (NEP), introduced in 1971 to advance Bumiputra (Malay) commercial and industrial interests, involved an elaborate “positive discrimination” programme in favour of Malays and other minority groups. Government jobs and contracts were reserved for Malays. Special quotas in university admissions were reserved for Malays, and a certain proportion of housing and commercial property was discounted in favour of Malays. Publicly quoted companies were forced to ensure that at least 30 percent of their shares were held by Malays. The public school system was “Malayised”. These policies had a big impact in that Chinese business expansion was severely curtailed. On the Malay side, the NEP led to expensive misallocation of capital as well as to a heavy infusion of crony capitalism.

Despite the downside of the NEP, Chinese Malaysian businessmen considered themselves still better off than in neighbouring Thailand or Indonesia, where they were compelled to assimilate completely in order to survive. In Thailand they were forced to take Thai names and in Indonesia they experienced repeated pogroms. The Chinese Malaysians feel that the NEP made them more resilient, more competitive and tougher. Many found refuge in neighbouring Singapore.

After Asia’s economic crisis of 1997-1998, Malaysia’s Prime Minister, Dr Mahathir, moved swiftly to introduce reform measures. The non-performing loans of banks were restructured, “anchor” banks were set up, business management practices were modernised, and cronyism and corruption were curtailed. Capital and exchange controls were imposed and the value of the Malaysian ringgit was pegged to the US dollar. Malaysia became a favourite destination for foreign investment. Mahathir did much to foster racial harmony, but the NEP’s Bumiputra favouritism fostered a proclivity for rent-seeking and a culture of cronyism. His government was also accused of favouring former government officials in business deals and a coterie of Malay tycoons through lucrative government concessions. The Indian minority does not enjoy the same privileges as the Bumiputra, while too many Malays have become accustomed to leg-ups and hand-outs. Malaysia is nevertheless an economic success story – albeit not a shining example of how a democracy should function. It is clear that the system of racial preferences should be abolished altogether.

Indonesia comprises some 17,500 islands, and its huge population of 250 million is divided into around 300 ethnic groups speaking close to a similar number of languages. The predominant religion is Islam, which was brought to these islands by Indian traders. European influence began in the 16th century, when the Dutch East India Company established a major trading post in Java, and Dutch control lasted until the Japanese invasion in 1942. After years of dictatorship under Sukarno and more than three decades of iron rule by Suharto, Indonesia’s Reformasi began in 1998 with a newly elected national parliament. The first elected president, Wahid, was forced out of office and replaced by Megawati Sukarnoputri in 2001. She was in turn replaced by former general Yudhoyono, who won the 2004 election. The Indonesian armed forces still cast a long shadow over the country’s political life, with an effective veto over important decisions, a large budget and a command structure that parallels the civilian government down to the village level. With its experiment in constitutional democracy still in progress, Indonesia has some way to go to stamp out corruption, to deliver more employment opportunities and to get a handle on its internal terrorist threats. Its economic life appears to be dominated by its Indonesian-Chinese business community and by an over-dependence on its abundance of natural resources.

Thailand, formerly Siam, with a population of around 70 million (75 percent Thai, 14 percent Chinese, and the rest Khmer and Mon minorities, with Muslims in the south), is a Buddhist country that avoided colonial control by yielding Lao and Cambodian territory to France and Malay territories to Britain. Today it is governed as a ceremonial monarchy under a constitutional regime. After a period of post-war instability, the army seized control in 1947 and remained in effective control until the 1990s under a succession of generals. Subsequently the country was governed by a succession of coalitions orchestrated by army generals and business tycoons (such as Thaksin Shinawatra). This regime is characterised by nepotism, corruption and what the locals call “money politics”. Much turmoil is caused by pervasive drug smuggling, Muslim militancy on the border with Malaysia and allegations of corruption, particularly against the Shinawatra family’s Shin Corporation. Thaksin was forced to step aside as Prime Minister, banned from involvement in politics for five years and indicted on corruption charges; he was later succeeded by his sister. Close to 50 percent of GDP is produced by industry, with agriculture comprising around 10 percent and services (including tourism) 40 percent. About 30 percent of the land is arable and another 30 percent is covered by forests. The political turmoil seems to weigh heavily on the economic outlook despite the considerable potential of the economy.

The Financial Crisis of 1997-98

The financial crisis of 1997-98 had a severe impact on all East Asian countries. After Thailand devalued the baht on July 2nd, 1997, capital rushed out of the region’s banks and in rapid succession several economies collapsed. The resulting panic soon spread and for a while posed a serious threat to the world economy. The currencies of Thailand, South Korea, Malaysia and the Philippines were the hardest hit and fell by 40 to 60 percent. Stock markets lost as much as 75 percent of their value and real GDP shrank throughout the region. Millions lost their jobs.

The IMF, which had failed to foresee the crisis, tried to assist the recovery. It insisted, though controversially, on tight fiscal and monetary policies as well as a swift disposal of bad debts, and so rescued the financial system. Foreign debt was either repaid or rescheduled, and currencies were allowed to float. Long-standing governments fell in South Korea, Taiwan, Indonesia, Thailand and the Philippines. Gradually domestic consumption picked up significantly.

South Korea led the way to recovery. Its banks were recapitalised by the government and a public asset-management company was set up to buy up bad loans. This freed the banks to get on with business with fresh capital and healthier loan portfolios that were less subject to political pressures. The government retrieved its bail-out related debts by reselling the equity it held in the banks. Financial regulators laid down new guidelines for managing credit risks, including independent credit committees insulated from outside meddling. A new emphasis was laid on lending to small and medium-sized firms as well as on consumer lending. In essence, the financial system that triggered the crash was properly reformed.

After long-standing dictator Suharto was unseated, the Indonesian government took over most of the banking sector’s non-performing loans but also moved to nationalise many parts of the economy. As a result, foreign investment declined significantly.

On balance, it could be claimed that the IMF’s bail-out efforts helped to restore confidence and to reform and restructure the banking system. Gradually income per head returned to pre-crisis levels throughout the region.

Japan’s Regression Since 1995

For much of the post-war decades Japan served as a role model for its East Asian neighbours. Many leaders in other countries looked with envy at Japan’s spectacular post-war growth coupled with social wellbeing. The Japanese success formula seemed to encompass a strong export capacity based on low wages, high savings rates to finance investments, and strong government support for the private business sector.

Japan’s GDP increased at a rate of 10 percent per annum during the 1960-1970 period. By 1974 the growth rate had declined to 4 percent, and by 1995 it was only around 1 percent as the good times came to an end. The country was on the verge of financial and economic meltdown, with a political establishment apparently incapable of facing the crisis.

Prices started to fall in 1995, and in 1997 nominal GDP shrank by 6 percent. The Governor of the Bank of Japan officially confirmed that the country faced a problem of deflation for which macro-economic textbooks offered no solution. Interest rates had already been brought down to zero, rendering monetary policy impotent. The scope for fiscal easing was reduced because public sector debt was already dangerously high. The option of halting deflation by printing money was diminished by the fact that the BOJ was already creating heaps of money by buying large volumes of government bonds. And the monetary transmission mechanism was blocked because the banks, saddled with heaps of bad debts, could not lend more unless the banking system was properly fixed first.

Part of the bad debt problem was caused by the 80 percent fall in land prices in the decade following 1990. This meant that the banks had understated the portion of the loans on their books without collateral – i.e. the portion against which they needed to hold reserves. As a result the Financial Services Agency (FSA), the industry watchdog, found it difficult to monitor the true scope of problem loans, which remained as unrealised losses in the loan portfolios of the banks.
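The arithmetic behind this hidden exposure can be sketched simply: the uncollateralised portion of a loan is whatever the pledged land no longer covers, and an 80 percent fall in land values makes that portion balloon. The figures below are hypothetical, chosen only to illustrate the mechanism:

```python
# Illustrative collateral-shortfall calculation: the portion of a loan
# not covered by collateral is the part against which a bank must hold
# reserves. All loan and land values below are hypothetical.

def uncovered_portion(loan: float, collateral_value: float) -> float:
    """Portion of the loan not covered by collateral (never negative)."""
    return max(0.0, loan - collateral_value)

loan = 100.0                 # a loan of 100 (any currency unit)
land_1990 = 90.0             # collateral worth 90 at the 1990 peak
land_2000 = land_1990 * 0.2  # after an 80 percent fall in land prices

print(uncovered_portion(loan, land_1990))  # 10.0 uncovered at the peak
print(uncovered_portion(loan, land_2000))  # 82.0 uncovered a decade later
```

A loan that looked almost fully secured in 1990 is mostly unsecured by 2000, which is why unrealised losses could sit undetected in the banks’ loan portfolios.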

The Japanese economic model came in for heavy criticism for a variety of justifiable reasons. Many countries complained that Japan unfairly took what it could from the world economy – especially profits from export markets and the absorption of Western technology and “know-how” – without giving much in return. Japan had consistently resisted inward investment by other countries, while at the same time Japanese businesses were combing the rest of the world for lucrative investment opportunities. Japan also protected its own farm production behind a facade of protectionist phytosanitary requirements and other bureaucratic impediments to imports.

Japanese politicians and bureaucrats prevented any form of competition from driving change. In this way they protected themselves from competition, so that the dominant Liberal Democratic Party (LDP) remained in power for decades. This protective set-up was further buttressed by a socialised financial system. The tax code, the public works budget and hidden subsidies played a key role in maintaining the status quo: propping up moribund firms while the banks carried large volumes of non-performing loans on their books. The economic system functioned as an extension of the welfare system, with the bureaucrats operating as a protected “special interest” group. Retired bureaucrats usually end up working for the companies they used to regulate. Inevitably this practice led to conflicts of interest, nepotism and corruption.

In Japan inordinate power is vested in its much-revered conglomerates. These firms include unwieldy and inefficient giants such as Hitachi, Fujitsu and many others. Within their frameworks, lifetime employment is practised unofficially, with the result that the true level of unemployment is not reflected in official figures.

Prime Minister Koizumi came to office in 2001 and succeeded in forcing the banks and the conglomerates to clear up their piles of bad debts. By 2002 the economy started to grow again, but in 2006 Koizumi retired as a result of unbearable pressures. His successor, Shinzo Abe, also soon resigned under nervous and physical strain.

Japan’s economic prospects are darkened by demographics, as its population is greying faster than that of any other big economy. No less than 20 percent of Japanese are over 65, and it is estimated that by 2015 the proportion will grow to 25 percent – about 30 million people. The fertility rate, at 1.32, is below replacement level, and virtually no immigration is expected. The population of around 127 million is expected to drop below 100 million over the next half century. Today the Japanese economy is steadily ploughing ahead, again under the baton of Shinzo Abe. He has put Japan under a regime of “Abenomics”, which boils down to a mix of reflation, government spending and a growth strategy designed to jolt the economy out of the suspended animation that has gripped it for more than two decades. With his own health revived, Abe hopes to inject new life into the political establishment, the bureaucracy and the economy at large. Since he was elected, the stock market has risen by more than 50 percent, and consumer spending pushed up economic growth in the first quarter of 2013 to an annualised rate of 3.5 percent.

But pulling Japan out of its slump will prove to be a huge task. After two decades, the country’s nominal GDP is the same as in 1991. Japan’s shrinking workforce is burdened by the cost of its growing proportion of elderly. The new governor of the BOJ has promised to pump ever more money into the economy, and the government has already announced extra spending worth trillions of yen. But printing more money can have only limited results in view of the existing gross debt level of 240 percent of GDP. Although this huge debt pile is, fortunately, domestically held, Abe will have to find structural supply-side solutions. He has referred to his monetary, fiscal and growth strategies as his “three arrows”. With the strong political support he commands, Mr Abe has a real chance to reinvigorate the Japanese economy.

Concluding Remarks

In view of the success stories of several countries, it remains an open question why some countries have seriously lagged behind. A clue to the answer may be found in the earlier history of the successful countries: success came only after many disappointing years of wrong-headed policies, dysfunctional government structures and misdirected leadership.

Japan went through generations of misdirected, shogun-controlled military domination, which led to atrocious foreign escapades and ultimately to aggressive campaigns in the Pacific region – and to Japan’s destruction and defeat in the Second World War.

China suffered misdirected Communist Party government from 1949 to 1976, when Mao Zedong mercifully passed away. The Chinese people were first subjected to prolonged hardship by years of armed civil strife between Kuomintang and Communist forces, and then suffered unspeakable misery under Mao Zedong’s ill-conceived, fanciful policies and campaigns: the “Hundred Flowers Campaign” that decimated China’s leadership cadres, the “Great Leap Forward” that brought starvation to millions, and the “Cultural Revolution” that aimed at purging Chinese society of any form of independent thinking. Fortunately, common sense was restored under Deng Xiaoping.

South Korea was devastated by its war experience, with close to ten percent of its population killed and two-thirds of its industrial capacity destroyed. Then it endured many years of corrupt military government. Only after two former presidents were arrested – one jailed for 20 years and the other sentenced to death – did society manage to loosen the grip of its powerful chaebols and political cliques. Even today, its reform and restructuring process remains an ongoing challenge.

Taiwan, with its 20 million people, experienced a success story that surpassed those of Japan and South Korea. The Taiwan experience can be looked upon as a monument to “Confucian capitalism”. After a process of serious soul-searching, the Taiwanese Chinese under Chiang Kai-shek methodically identified the social pathologies they wanted to avoid: corruption, inequality, arbitrary government power, hyperinflation and obstacles to modern science and technology. On the positive side, they aimed at creating an anti-corruption ethos, an environment for entrepreneurs to flourish, a depoliticised market system, good infrastructure and the promotion of the export of manufactured goods. The government supported export industries through low-cost loans, enterprise zones and the aggressive sourcing of technology. They also relied heavily on overseas and foreign-educated Chinese, who acted as a “brain bank” and an effective network for technology transfer. Taiwan also had the good fortune of roping in the inputs of a few foreign-trained “super-technocrats” who acted like Confucian advisors over several decades. They adopted the Japanese policy formula of “competing out and protecting in”. They also insisted that Taiwan should continue its transition from authoritarianism to democratic rule and the broadening of the middle class. The Kuomintang established by Chiang Kai-shek kept a tight grip on power for several decades and defined Taiwan’s relationship with the People’s Republic on the mainland. Taiwan has remained by far the biggest investor in mainland China, despite the fact that the People’s Republic regards Taiwan as an errant province that needs to be brought back into the fold. The Taiwanese insist that their distinct status ought to be respected and that the gulf between the two would narrow as China’s brand of socialism takes on the character of “Confucian capitalism”.

The impressive growth experience of the East Asian “tigers” is indeed remarkable. Free marketeers point to their reliance on private enterprise, markets and trade liberalisation. Interventionists point with equal assertiveness to the non-market and centrally planned allocation of resources, the role of clever bureaucratic elites and regulated trade regimes. The truth of the matter is more complex and requires the careful balancing of a multitude of factors including the cultural traits of the societies involved. Regard must also be had to the interaction of these economies with the lucrative markets in the USA and Europe.

The wellsprings of growth and transformation in the East Asian region included several forces: sound macroeconomic policies in the sense of getting the fundamentals right, incentives for increasing productivity, openness to foreign ideas and technology, an export orientation and the development of human resources. A positive government role was crucial, buttressed by a leadership committed to broad-based development and an efficient bureaucracy. Government did not act as planner or owner of industrial enterprise, but as a guide and facilitator: developing infrastructure and a framework for effective policy implementation, encouraging the accumulation of physical and human capital and allocating it to productive activities. International competitiveness was recognised as the ultimate aim, and the private sector was relied upon as the engine of growth. The East Asian success stories demonstrated that targeted and controlled government activity can be beneficial to the common good – government involvement is not bad per se; it depends upon the quality of the interventions.

The East Asian achievement is all the more remarkable when one considers that several countries (e.g. Japan, Singapore and Hong Kong) were not endowed with rich natural resources, and several were faced with population pressures. In addition, the successful governments did not place development under any particular ideological umbrella. They were pragmatic in the sense of placing emphasis on what worked – evidence-based policymaking.

The spectacular success of Japan and its neighbours in East Asia illustrates that modernisation can proceed without fully-fledged cultural westernisation. Western countries put a high value on individualism, freedom of expression, competitive politics, loose-jointed societies and diverse lifestyles. East Asians, by contrast, prefer working in groups and are more disciplinarian, conformist, socially conservative and hierarchical, and they accept interventionist power. Certain cultural factors undoubtedly did play an important role: attitudes to work, discipline, family loyalty, thrift, an inherent commercial instinct and the high value placed on learning and education.

After the 1997-98 financial crisis, and more recently the Global Financial Crisis of 2008, many foreign observers were quick to pronounce the end of the East Asian “miracle”. But they were wrong. The East Asian economies bounced back, partly because of the markets in the West, but also because the tiger countries still had the inherent impetus to continue on their growth path: high savings to finance investment, high productivity and social stability. These are likely to help East Asia remain the fastest-growing region in the world for the foreseeable future. If a bigger share of those gains goes to workers and consumers, the next growth phase could bring additional benefits to its more than 2 billion people.

