Gen Z, the Fourth Industrial Revolution, and African Universities

By Paul Tiyambe Zeleza

Like many of you, I try to keep up with trends in higher education, which are of course firmly latched to wider transformations in the global political economy, in all its bewildering complexities and contradictions, and tethered to particular national and local contexts. Of late one cannot avoid the infectious hopes, hysteria, and hyperbole about the disruptive power of the 4th Industrial Revolution on every sector, including higher education. It was partly to make sense of the discourses and debates about this new revolution that I chose this topic.

But I was also inspired by numerous conversations with colleagues in my capacity as Chair of the Board of Trustees of the Kenya Education Network Trust (KENET), which provides Internet connectivity and related services to enhance education and research at the country’s educational and research institutions. Also, my university has ambitious plans to significantly expand its programmes in science, technology, engineering and mathematics (STEM), the health sciences, and the cinematic and creative arts, in which discussions about rapid technological changes and their impact on our educational enterprise feature prominently.

I begin by briefly underlining the divergent perspectives on the complex, contradictory and rapidly changing connections between the 4th Industrial Revolution and higher education. Then I seek to place it in the context of wider changes: first, in terms of the global politics and economy; second, with reference to the changing nature of work; and third, in the context of other key trends in higher education. Situating the 4th Industrial Revolution in these varied and intersecting changes and dynamics underscores a simple point: that it is part of a complex mosaic of profound transformations taking place in the contemporary world that precede and supersede it.

As a historian and social scientist, I’m only too aware that technology is always historically and socially embedded; it is socially constructed in so far as its creation, dissemination, and consumption are always socially marked. In short, technological changes, however momentous, produce and reproduce both old and new opportunity structures and trajectories that are simultaneously uneven and unequal because they are conditioned by the enduring social inscriptions of class, gender, race, nationality, ethnicity and other markers, as well as the stubborn geographies and hierarchies of the international division of labour.

The 4th Industrial Revolution

As with any major social phenomenon and process, the 4th Industrial Revolution has its detractors, cheerleaders, and fence-sitters. The term often refers to the emergence of quantum computing, artificial intelligence, the Internet of things, machine learning, data analytics, big data, robotics, biotechnology, nanotechnology, and the convergence of the digital, biological, and physical domains of life.

Critics dismiss the 4th Industrial Revolution as a myth, arguing that it is not a revolution as such in so far as many innovations associated with it represent extensions of previous innovations. Some even find the euphoric discourses about it elitist, masculinist, and racist. Some fear its destructive potential for jobs and livelihoods, and privacy and freedom as surveillance capitalism spreads its tentacles.

Those who espouse its radical impact say that the 4th Industrial Revolution will profoundly transform all spheres of economic, social, cultural, and political life. It is altering the interaction of humans with technology, leading to the emergence of what Yuval Noah Harari calls homo deus, who worships at the temple of dataism in the name of algorithms. More soberly, some welcome the 4th Industrial Revolution for its leapfrogging opportunities for developing countries and marginalised communities. But even the sceptics seek to hedge their bets on the promises and perils of the much-hyped revolution by engaging it.

In the education sector, universities are urged to help drive the 4th Industrial Revolution by pushing the boundaries of their triple mission of teaching and learning, research and scholarship, and public service and engagement. Much attention focuses on curricular reform, the need to develop what one author calls “future-readiness” curricula that prepare students holistically for the skills of both today and tomorrow – curricula that integrate the liberal arts and the sciences, digital literacy and intercultural literacy, and technical competencies and ethical values, and that foster self-directed and personalised learning. Because of the convergences of the 4th Industrial Revolution, universities are exhorted to promote interdisciplinary and transdisciplinary teaching, research and innovation, and to pursue new modes of internationalisation of knowledge production, collaboration, and consumption.

Changes in the global political economy

From Africa’s vantage point, I would argue there are three critical global forces that we need to pay special attention to. First, the world system is in the midst of a historic hegemonic shift. This is evident in the growing importance of Asia and the emerging economies, including Africa’s, and the impending closure of Euroamerica’s half a millennium of global dominance. Emblematic of this monumental transition is the mounting rivalry between a slumping United States and a rising China that is flexing its global muscles, not least through the Belt and Road Initiative.

The struggle between the two nations and their respective allies or spheres of influence marks the end of America’s supremacy as the sole post-Cold War superpower. The outbreak of the trade war between the two in 2018 represents the first skirmishes of a bitter hegemonic rivalry that will probably engulf at least the first half of the 21st century. The question we have to ask ourselves is: How should Africa manage and position itself in this global hegemonic shift?

This is the third such shift over the last two hundred years. The first occurred between 1870 and 1914, following the rise of Germany and its rivalry with the world’s first industrial power, Britain. For the world as a whole this led to the “New Imperialism” that culminated in World War I, and, for Africa and Asia, in colonisation.

The second hegemonic shift emerged out of the ashes of World War II with the rise of two superpowers, the former Soviet Union and the United States. For the world this led to the Cold War and, for Asia and Africa, to decolonisation.

Can Africa leverage the current shift to achieve its long-cherished but deferred dream of sustainable development?

As the highest concentrations of collective intellectual prowess, African universities and researchers have a responsibility to promote comprehensive understanding of the stakes for Africa, and to inform policy options on how best to navigate the emerging treacherous quagmire of the new superpower rivalries to maximise the possibilities and minimise the perils.

More broadly, in so far as China’s and Asia’s rise are as much economic as they are epistemic – as evident in the exponential ascent of Asian universities in global rankings – the challenge and opportunity for our universities and knowledge production systems is how best to pluralise their worldly engagements in ways that simultaneously curtail the Western stranglehold rooted in colonial and neocolonial histories of intellectual dependency without succumbing to the hegemonic ambitions of China and Asia.

Second, world demography is undergoing a major metamorphosis. On the one hand, this is evident in the aging populations of many countries in the global North. China is also on the same demographic treadmill, thanks to its ill-guided one-child policy imposed in 1979 that was only abolished in 2015. On the other hand, Africa is enjoying a population explosion. Currently, 60 per cent of the African population is below the age of 25. Africa is expected to have 1.7 billion people in 2030 (20 per cent of the world’s population), rising to 2.53 billion (26 per cent of the world’s population) in 2050, and 4.5 billion (40 per cent of the world’s population) in 2100.

What are the developmental implications of Africa’s demographic bulge, and Africa’s global position as it becomes the reservoir of the world’s largest labour force? The role of educational institutions in this demographic equation is clear. Whether Africa’s skyrocketing population is to be a demographic dividend or not will depend on the quality of education, skills, and employability of the youth. Hordes of hundreds of millions of ill-educated, unskilled, and unemployable youth will turn the youth population surge into a demographic disaster, a Malthusian nightmare for African economies, polities and societies.

The third major transformative force centres on the impact of the 4th Industrial Revolution. During the 1st Industrial Revolution of the mid-18th century, Africa paid a huge price through the slave trade that laid the foundations of the industrial economies of Euroamerica. Under the 2nd Industrial Revolution of the late 19th century, Africa was colonised. The 3rd Industrial Revolution that emerged in the second half of the 20th century coincided with the tightening clutches of neocolonialism for Africa. What is, and will be, the nature of Africa’s participation in the 4th Industrial Revolution? Will the continent be a player or a pawn, as it was in the previous three revolutions?

The future of work

There is a growing body of academic literature and consultancy reports about the future of work. An informative summary can be found in a short monograph published by The Chronicle of Higher Education. In “The Future of Work: How Colleges Can Prepare Students for the Jobs Ahead”, it is argued that the digitalisation of the economy and social life spawned by the 4th Industrial Revolution will continue transforming the nature of work as old industries are disrupted and new ones emerge. In the United States, it is projected that the fastest growing fields will be in science, technology, engineering, and healthcare, while employment in manufacturing will decline. This will enhance the importance of the soft skills of the liberal arts, such as oral and written communication, critical thinking and problem solving, teamwork and collaboration, and intercultural competency, combined with hard technical skills, such as coding.

In a world of rapidly changing occupations, the hybridisation of skills, competencies, and literacies together with lifelong learning will become assets. In a digitalised economy, routine tasks will be more prone to automation than highly skilled non-routine jobs. Successful universities will include those that impart academic and experiential learning to both traditional students and older students seeking retraining.

The need to strengthen interdisciplinary and experiential teaching and learning, career services centres, and retraining programmes for older students on college campuses is likely to grow. So will partnerships between universities and employers as both seek to enhance students’ employability skills and reduce the much-bemoaned mismatches between graduates and the labour market. The roles of career centres and services will need to expand in response to pressures for better integration of curricula programmes, co-curricula activities, community engagement, and career preparedness and placement. In short, while it is difficult to predict the future of work, more jobs will increasingly require graduates to “fully merge their training in hard skills with soft skills”. They will be trained in both the liberal arts and STEM, with skills for complex human interactions, and capacities for flexibility, adaptability, versatility, and resilience.

Some university leaders and faculty of course bristle at the vocationalisation of universities, insisting on the primacy of intellectual inquiry, learning for its own sake, and student personal development. But the fraught calculus between academe and return on investment cannot be wished away for many students and parents. For students from poorer backgrounds, intellectual development and career preparedness both matter, as university education may be their only shot at acquiring the social capital that richer students have other avenues to acquire.

Trends in higher education

Digital Disruptions

Clearly, digital disruptions constitute one of the four key interconnected trends in higher education that I seek to discuss. The other three include rising demands for public service and engagement, the unbundling of the degree, and escalating imperatives for lifelong learning.

More and more, digitalisation affects every aspect of higher education, including research, teaching, and institutional operations. Information technologies have impacted research in various ways, including expanding opportunities for “big science” and increasing capacities for international collaboration. The latter is evident in the exponential growth in international co-authorship.

Also, the explosion of information has altered the role of libraries from repositories of print and audio-visual materials into nerve centres for digitised information communication, which raises the need for information literacy. Moreover, academic publishing has been transformed by the acceleration and commercialisation of scholarly communication. The role of powerful academic publishing and database firms has been greatly strengthened. The open source movement is trying to counteract that.

Similarly far-reaching is the impact of information technology on teaching and learning. Opportunities for technology-mediated forms of teaching and learning encompassing blended learning, flipped classrooms, adaptive and active learning, and online education have grown. This has led to the emergence of a complex melange of teaching and learning models encompassing the face-to-face teaching model without ICT enhancement; the ICT-enhanced face-to-face teaching model; the ICT-enhanced distance teaching model; and the online teaching model.

Spurred by the student success movement arising out of growing public concerns about the quality of learning and the employability skills of graduates, “the black box of college”—teaching and learning—has been opened, argues another recent monograph by The Chronicle entitled, “The Future of Learning: How colleges can transform the educational experience”. The report notes, “Some innovative colleges are deploying big data and predictive analytics, along with intrusive advising and guided pathways, to try to engineer a more effective educational experience. Experiments in revamping gateway courses, better connecting academic and extracurricular work, and lowering textbook costs also hold promise to support more students through college.” For critics of surveillance capitalism, the arrival of Big Brother on university campuses is truly frightening in its Orwellian implications.

There are other teaching methods increasingly driven by artificial intelligence and technology that include immersive technology, gaming, and mobile learning, as well as massive open online courses (MOOCs), and the emergence of robot tutors. In some institutions, instructors who worship at the altar of innovation are also incorporating free, web-based content, online collaboration tools, simulation or educational games, lecture capture, e-books, in-class polling tools, student smartphones and tablets, and e-portfolios as teaching and learning tools.

Some of these instructional technologies make personalised learning for students increasingly possible. The Chronicle monograph argues that for these technologies and innovations, such as predictive analytics, to work, it is essential to use the right data and algorithms, cultivate buy-in from those who work most closely with students, pair analytics with appropriate interventions, and invest enough money. Managing these innovations entails confronting entrenched structural, financial, and cultural barriers, and “require[s] investments in training and personnel”.

For many under-resourced African universities with inadequate or dilapidated physical and electronic infrastructures, the digital revolution remains a pipe dream. But such is the spread of smartphones and tablets even among growing segments of African university students that they can no longer be effectively taught using the old pedagogical methods of the born-before-computers (BBC) generation. After spending the past two decades catering to millennials, universities now have to accommodate Gen Z, the first generation of truly digital natives.

Another study from The Chronicle entitled “The New Generation of Students: How colleges can recruit, teach, and serve Gen Z” argues that this “is a generation accustomed to learning by toggling between the real and virtual worlds…They favour a mix of learning environments and activities led by a professor but with options to create their own blend of independent and group work and experiential opportunities”.

For Gen Z knowledge is everywhere. “They are accustomed to finding answers instantaneously on Google while doing homework or sitting at dinner…They are used to customisation. And the instant communication of texting and status updates means they expect faster feedback from everyone, on everything.”

For such students, the instructor is no longer the sage on the stage from whom hapless students passively imbibe information through lectures, but a facilitator or coach who engages students in active and adaptive learning. Their ideal instructor makes class interesting and involving, is enthusiastic about teaching, communicates clearly, understands students’ challenges and issues and gives guidance, and challenges students to do better as students and as people, among other attributes.

Teaching faculty to teach the digital generation, and equipping faculty with digital competency, design thinking, and curriculum curation, is increasingly imperative. The deployment of digital technologies and tools in institutional operations is expected to grow as universities seek to improve efficiencies and data-driven decision-making. As noted earlier, the explosion of data about almost everything that happens in higher education is leading to data mining and analytics becoming more important than ever. Activities that readily lend themselves to IT interventions include enrollment, advising, and management of campus facilities. By the same token, institutions have to pay more attention to issues of data privacy and security.

Public Service Engagements

The second major trend centres on rising expectations for public engagement and service. This manifests itself in three ways. First, demands for mutually beneficial university-society relationships and the social impact of universities are increasing. As doubts grow about the value proposition of higher education, pressures will intensify for universities to demonstrate their contribution to the public good, to national development, and to competitiveness, notwithstanding the prevailing neoliberal conceptions of higher education as a private good.

On the other hand, universities’ concerns about the escalating demands of society are also likely to grow. The intensification of global challenges, from climate change to socio-economic and geopolitical security, will demand more research and policy interventions by higher education institutions. A harbinger of things to come is the launch in 2019 by Times Higher Education of a new global ranking system assessing the social and economic impact of universities’ innovation, policies and practices.

Second, the question of graduate employability will become more pressing for universities to address. As the commercialisation and commodification of learning persists, and maybe even intensifies, demands on universities to demonstrate that their academic programmes prepare students for employability in terms of being ready to get or create gainful employment can only be expected to grow. Pressure will increase on both universities and employers to close the widely bemoaned gap between college and jobs, between graduate qualifications and the needs of the labour market.

Third is the growth of public-private partnerships (PPPs). As financial and political pressures mount, and higher education institutions seek to focus on their core academic functions of teaching and learning, and generating research and scholarship, many universities have been outsourcing more and more of the financing, design, building and maintenance of facilities and services, including student housing, food services, and monetising parking and energy. Emerging partnerships encompass enrollment and academic programme management, such as online programme expansion, skills training, student mentoring and career counseling.

Another Chronicle monograph, “The Outsourced University: How public-private partnerships can benefit your campus”, traces the growth of PPPs, which take a variety of forms and durations. It is critical for institutions pursuing such partnerships to determine whether a “project should be handled through a P3,” to clearly “articulate your objectives, and measure your outputs,” to “be clear about the trade-offs,” to “bid competitively,” and to “be clear in the contract.”

The growth of PPPs will lead to greater mobility between the public and private sectors and the academy as pressures grow for continuous skilling of students, graduates, and employees in a world of rapidly changing jobs and occupations. This will be done through the growth of experiential learning, work-related learning, and secondments.

Unbundling of the Degree

The third major transformation that universities need to pay attention to centers on their core business as providers of degrees. This is the subject of another fascinating monograph in The Chronicle entitled “The Future of The Degree: How Colleges Can Survive the New Credential Economy”. The study shows how the university degree evolved over time in the 19th and 20th centuries to become a highly prized currency for the job market, a signal that one has acquired a certain level of education and skills. As economies undergo “transformative change, a degree based on a standard of time in a seat is no longer sufficient in an era where mastery is the key. As a result, we are living in a new period in the development of the degree, where different methods of measuring learning are materialising, and so too are diverse and efficient packages of credentials based on data.”

In a digitalized economy where continuous reskilling becomes a constant, the college degree as a one-off certification of competence, as a badge certifying the acquisition of desirable social and cultural capital, and as a convenient screening mechanism for employers, is less sustainable.

Clearly, as more employers focus on experience and skills in hiring, and as the mismatch between graduates and employability persists or even intensifies, traditional degrees will increasingly become less dominant as a signal of job readiness, and universities will lose their monopoly over certification as alternative credentialing systems emerge.

As experiential learning becomes more important, the degree will increasingly need to embody three key elements. First, it needs to “signify the duality of the learning experience, both inside and outside the classroom. Historically, credentials measured the learning that happened only inside the university, specifically seat time inside a classroom.”

Second, the “credential should convey an integrated experience…While students are unlikely to experience all of their learning for a credential on a single campus in the future, some entity will still need to help integrate and certify the entire package of courses, internships, and badges throughout a person’s .”

Third, credentials “must operate with some standard… For new credentials to matter in the future, institutions will need to create a common language of exchange” beyond the current singular currency of an institutional degree.

The rise of predictive hiring to evaluate job candidates and people analytics in the search for talent will further weaken the primacy of the degree signal. Also disruptive is the fact that human knowledge, which used to take hundreds of years, and later decades, to double, is now “doubling every 13 months, on average, and IBM predicts that in the next couple of years, with the expansion of the internet of things, information will double every 11 hours. That requires colleges and universities to broaden their definition of a degree and their credential offerings.”

All these likely developments have serious implications for the current business model of higher education. Universities need “to rethink what higher education needs to be — not a specific one-time experience but a lifelong opportunity for learners to acquire skills useful through multiple careers. In many ways, the journey to acquire higher education will never end. From the age of 18 on, adults will need to step in and out of a higher-education system that will give them the credentials for experiences that will carry currency in the job market.”

In short, as lifelong careers recede and people engage in multiple careers, not just jobs, the quest for higher education will become continuous, no longer confined to the youth in the 18-24 age range. “Rather than existing as a single document, credentials will be conveyed with portfolios of assets and data from learners demonstrating what they know.”

Increasing pressures for lifelong learning will lead to the unbundling of the degree into project-based degrees, hybrid baccalaureate and Master’s degrees, ‘microdegrees’, and badges. Students will increasingly stack their credentials of degrees and certificates “to create a mosaic of experiences that they hope will set them apart in the job market”.

As African educators we must ask ourselves: How prepared are our universities for the emergence and proliferation of new credentialing systems? How effectively are African universities integrating curricular and co-curricular forms of learning, in person and online? How prepared and responsive are African universities to multigenerational learners and to traditional and emerging degree configurations and certificates? What are the implications of the explosion of instructional information technologies for styles of teaching and learning, the pedagogical roles of instructors, and the dynamics of knowledge production, dissemination, and consumption?

Lifelong Learning

The imperatives of the digitalised economy and society for continuous reskilling and upskilling entail lifelong and lifewide learning. The curricula and teaching for lifelong learning must be inclusive, innovative, intersectional, and interdisciplinary. It entails identifying and developing the intersections of markets, places, people, and programmes; and helping illuminate the powerful intersections of learning, life, and work. Universities need to develop more agile admission systems by smarter segmentation of prospective student markets (e.g., flexible admission by age group and academic programme); some are exploring lifelong enrollment for students (e.g., National University of Singapore).

Lifelong learning involves developing and delivering personalised learning, not cohort learning; assessing competences, not seat time as most universities currently do. “Competency-based education allows students to move at their own pace, showcasing what they know instead of simply sitting in a classroom for a specific time period.”

Lifelong learning requires encouraging enterprise education and an entrepreneurial spirit among students, instilling resilience among them, providing supportive environments for learning and personal development, and placing greater emphasis on “learning to learn” rather than rote learning of specific content.

As leaders and practitioners in higher education, we need to ask ourselves some of the following questions: How are African universities preparing for and going to manage lifelong learning? How can universities effectively provide competency-based education? How can African universities encourage entrepreneurial education without becoming glorified vocational institutions, and maintain their role as sites of producing and disseminating critical scholarly knowledge for scientific progress and informed citizenship?

Conclusion

In conclusion, the 4th Industrial Revolution is only one of many forces driving transformations in higher education. As such, we should assess its challenges and opportunities with a healthy dose of intellectual sobriety, neither dismissing it with Luddite ideological fervour nor investing it with the omniscience beloved by techno-worshippers. In the end, the fate of technological change is not predetermined; it is always imbricated with human choices and agency.

At my university, the United States International University-Africa (USIU-Africa), we’ve long required all incoming students to take an information technology placement test as a way of promoting information literacy; we use an ICT instructional platform (Blackboard), embed ICT in all our institutional operations, and we are increasingly using data analytics in our decision-making processes. We also have a robust range of ICT degree programmes and are introducing new ones (a BSc in software engineering, in data science and analytics, and in AI and robotics, an MSc in cybersecurity, and a PhD in Information Science and Technology), as well as what we’re calling USIU-Online.

This article is the plenary address delivered by Paul Tiyambe Zeleza at the Universities South Africa First National Higher Education Conference, “Reinventing SA’s Universities for the Future”, CSIR ICC, Pretoria, October 4, 2019.

Published by the good folks at The Elephant.

The Elephant is a platform for engaging citizens to reflect, re-member and re-envision their society by interrogating the past, the present, to fashion a future.

Follow us on Twitter.

The Unapologetic Blackness of the Me Too Movement

Let me tell you a story.

I thought Rosa Parks was an old woman who refused to give up her seat on the bus because she was too tired to stand after working all day.

Me too.

I thought Anita Hill, the woman who accused Supreme Court Justice Clarence Thomas of sexually harassing her when he was her boss, was quite possibly just plain lying.

Me too.

I thought the present-day feminist revolution that is changing conversations around the world was started on Twitter by a white actress.

Me too.

Let me tell you a story about the truth behind all those fictions. It’s a story about how the world changed for all American women (and many women in other countries) because of the strength, courage, and integrity of three black women: Rosa Parks, Anita Hill, and Tarana Burke.

On October 15, 2017, Hollywood actress Alyssa Milano tweeted an invitation to her Twitter followers to respond to a suggestion from a friend of hers: “If all the women who have been sexually harassed or assaulted wrote ‘Me Too’ as a status, we might give people a sense of the magnitude of the problem.”

That tweet sparked a response which has indeed become a kind of online census of victimhood. This is the crucial thing to recognise about the moment when #MeToo met social media: that it inaugurated a census. It gave people who had experienced sexual predation a way to stand up publicly and be counted.

Sexual assault and harassment complaints are habitually dismissed when they are made against privileged men, and especially when they are made against powerful men. And the victims who make these complaints are disparaged as attention-seeking, opportunistic, and vengeful.

The accusation of opportunism, in particular, suggests that the accusers believe that going public about having been degraded sexually somehow confers glory upon them (a fiction belied by everything we know about the under-reporting of sexual violence in societies around the world). And the disparagement treats the consequences to the perpetrators (if the complainant is believed), the loss of prestige or reputation, as worse than the consequences of the assault or harassment itself, which include the trauma and post-traumatic stress that have for years been recognised as consequences of violence generally, and are now finally being acknowledged as consequences of sexual violence.

What #MeToo exposed was not a cabal of vengeful feminists but the ubiquity and normalisation of sexual predation, often by powerful and influential men who are, therefore, socially recognised as more credible than their victims. This choice of victims is not an accident; predators target those they believe will be considered unimportant precisely so that they will be able to discredit any complaints that might be made.

The credibility of complaints is attacked on the grounds of the complainant’s race, social status, national origin, and most especially—when the perpetrator is male and the victim is female—on the basis of gender. Even today, as the United Nations identifies gender equality as one of its significant Sustainable Development Goals, in most countries women’s voices are not accorded as much credibility as men’s voices in law courts, in police stations, and in public discourse.

Individuals who choose to behave in predatory ways are sheltered from the consequences of their behaviour by widespread beliefs that women lie about being victimised. In fact, because of social shaming around women’s sexuality, women are more likely to stay silent about things that did happen rather than to manufacture things that did not happen.

Predatory individuals are sheltered by the suspicion that allegations of this kind are likely to be false. In fact, the rate of false reporting of, for instance, rape allegations is about the same as the rate of false reporting for other felonies. In addition, predatory individuals are sheltered by demands for “objective proof” that are not demanded in other types of criminal accusations. In fact, a victim’s accusation of fraud, theft, or other forms of violence is sufficient to trigger an investigation, and multiple accusations are sufficient to establish a pattern of behaviour on the part of the perpetrator that is considered circumstantial evidence of their wrongdoing.

Victims of sexual predation know these differences; fear of not being believed is the primary factor explaining why only about 35 per cent of sexual assault cases in the United States are ever reported (which is actually relatively high when compared to countries where the reporting rate is estimated to be under 5 per cent). The #MeToo social media census was a space in which victims could self-identify without being invalidated.

But before there was Alyssa Milano’s call to stand up and be counted, there was more than a decade of activism and solidarity with sexually abused African-American girls that was being carried out by Tarana Burke, the civil rights activist and community organiser who coined the term. There was “Me Too” long before there was #MeToo. There was a black woman, this black woman, doing anti-sexual assault work and victim support long before there was any widespread public discussion by white liberal feminists of the problems of sexual entitlement and predation by wealthy and powerful men. This trail-blazing by black women is also not an accident.

The civil rights movement and the struggle for women’s rights

There is a long history in the United States of advocacy for women and struggles for women’s rights to control our own bodies. That history is grounded in the community organising that black women have done for and with each other, and it has gone largely unrecognised until quite recently.

In 2010, historian Danielle L. McGuire wrote a book about how the civil rights movement of the 1950s and 1960s that is now most closely associated in the popular imagination with Reverend Martin Luther King Jr. owes its existence to the tireless work of black women in the southern states against racialised sexual violence. McGuire’s book, At the Dark End of the Street, documents the campaigns and community organising of black women working in churches and with the venerable National Association for the Advancement of Colored People (NAACP) to demand equal justice under the law for black women who had been raped and sexually terrorised.

One such campaign, directed by the NAACP, was organised to demand the arrest and trial of the seven white men who were responsible for the 1944 gang rape of an Alabama woman named Recy Taylor. The newly-hired NAACP branch secretary who organised the campaign was Rosa Parks. Eleven years later, the advocacy alliance she helped to form, the Committee for Equal Justice for Mrs. Recy Taylor, would become the Montgomery Improvement Association, the support organisation for the 1955 Montgomery bus boycott that launched the civil rights movement.

Contrary to the mythology that constructs this society-changing coalition as Dr. King’s heroic challenge of white supremacy, it was a movement built by black women like Rosa Parks. She was no tired old woman the day she refused to give up her seat on the Montgomery bus; she was a trained and accomplished activist. And although Recy Taylor never did get justice for the sexual violence she endured, the principle Rosa Parks was fighting for—that sexual violence against black women should be treated as seriously under the law as sexual violence against white women—was finally upheld as a legal precedent in 1959 when the four white rapists of Betty Jean Owens in Tallahassee, Florida were convicted and sentenced for their crime against her.

Almost 50 years after black women mobilised communities across the South to petition for Recy Taylor’s right to face her attackers in court, a black woman named Anita Hill testified in front of an all-white, all-male panel of US Senators in the nation’s capital, Washington DC. The men were there to confirm conservative black judge Clarence Thomas to the Supreme Court seat that had been vacated by the retirement of civil rights icon Thurgood Marshall. The woman, a law professor, was there to inform them that when she had worked for Thomas a decade previously at the US Department of Education and the Equal Employment Opportunity Commission, he had engaged in sustained sexual harassment of her that called into question the good character which, due to his inexperience on the bench, had been cited as the primary evidence of his overall fitness to serve on the highest court in the country’s legal system.

The year was 1991. The term “sexual harassment” had been coined back in the 1960s but it was not a widely understood phenomenon in 1991, and there was, at the time, little appreciation for how pervasive it was in workplaces. As law professor and critical race theorist Kimberlé Crenshaw notes in a 2018 New York Times op-ed, it was Hill’s testimony of Thomas’s persistent pressure on her to date him, his discussion of explicit pornography he liked to watch, and comments about his own sexual prowess—all taking place in the offices in which she served as his assistant—that produced America’s “great awakening around sexual harassment”.

However, as Crenshaw also notes, the lessons Anita Hill’s testimony might have taught the country were inadequately learned: Thomas was confirmed to the Supreme Court where he serves to this day, alongside fellow alleged sexual predator Brett Kavanaugh. Hill, in her own 2019 New York Times op-ed, suggests the intriguing possibility that what we now know as the #MeToo movement could have started as far back as 1991, if only that Senate Judiciary Committee panel had listened seriously to her testimony (and that of the corroborating witnesses they never bothered to call).

In the wake of Anita Hill and in the tradition of Rosa Parks came the response of Tarana Burke to a 1997 conversation with a 13-year-old black girl who confided that she had been sexually abused. As “Me Too” broke into the American popular consciousness in October 2017, Burke recalled to a New York Times reporter that this confession had left her speechless and troubled. Having worked with and advocated for marginalised young women since she was a teenager, she belatedly realised that the most appropriate response she could have given the girl was quite simply “Me too”.

Almost a decade after that moment, in 2006, she created a movement to marshal resources for other young victims of sexual harassment and assault—resources she wished had been available to her and to the 13-year-old girl who had called her to this particular strand of her life-long activism—and began promoting the phrase “me too” as a way of raising awareness of the pervasiveness of sexual violence, and as a way of supporting survivors of that violence.

The black roots of “Me Too” are, I think, crucial to understanding what it is trying to achieve, and how. I spoke at the outset of “Me Too” as a census, and I believe that is a useful way to understand how it has functioned in its #MeToo Twitter incarnation. But a robust understanding of “Me Too” as a solidarity gesture has to acknowledge its contextual association with the call-and-response traditions of black music and black vernacular English. Me too is a call and a response: me too … you too?… yes, me too.

Emerging voices

The campaigns for justice and respect to which Rosa Parks, Anita Hill, and Tarana Burke have all contributed their efforts, campaigns to make a world in which black women are honoured members of their communities, have changed things. There is more bodily autonomy for all women in the United States today (challenged and under threat, to be sure, but present). There is more awareness of what it means to have to navigate a “hostile workplace”, and there is more support for the women and men, girls and boys, who have been harmed by sexual violence.

Even as these women are being acknowledged for their courage and dedication, however, and even as black feminist scholarship strives to make sure the contributions to American life of other black women—Fannie Lou Hamer, Ella Baker, Ida B. Wells, Anna Julia Cooper, to name only a few—are not forgotten, it is important to remember that the tradition of black activism in the US is a communal one. In an interview about her role in the Me Too movement, Tarana Burke dismisses the initial controversy about Alyssa Milano hijacking her movement and erasing her contribution by tweeting a “Me Too” invitation that did not credit Burke, saying that it is selfish to frame a movement around one person. Movements should be about amplifying the voices of the community, the survivors, she concluded. (And it should be noted that Milano, who had initially been unaware of the origin of the phrase, swiftly corrected her oversight and has subsequently been vocal in promoting Burke’s Me Too campaign.)

Similar sentiments about the plurality of sources for activist movements and the value of horizontal (non-hierarchical) organising structures are expressed by Alicia Garza in connection with the Black Lives Matter (BLM) movement in a 2016 New Yorker article. The article is a fascinating analysis of how Black Lives Matter emerged as a voice for racial justice from a self-consciously intersectional point of view. It reminds readers that we all engage the world through multiple identities (race, gender, age, sexual orientation), and makes space for Garza’s argument that effective activism to make American society less hostile towards black lives needs to foreground not just a commitment to “unapologetic blackness” but also to an “unapologetically queer” focus. (She notes, for instance, that of the 53 recorded murders of transgender people between 2013 and 2015, 39 were African-American.)

Author Jelani Cobb contrasts the history of black organising in the 1960s, with its emphasis on top-down leadership, with the more diffuse structures of BLM: the three black women often credited with creating it—Alicia Garza, Patrisse Cullors, and Opal Tometi—are carefully distinguished as architects of BLM’s online organisation, while credit for the movement that arose out of the protests over Mike Brown’s 2014 murder in Ferguson, Missouri, is given to DeRay Mckesson, Brittany Packnett, and Johnetta Elzie. But, as Garza notes, BLM is not about consolidating power in an identifiable leadership hierarchy; it works like traditional labour organising did, and like Ella Baker (the civil rights activist who served in leadership in both the Southern Christian Leadership Conference and the Student Nonviolent Coordinating Committee) did, it reaches out to the people “at the bottom”, tapping the creativity and energy of the whole community.

The story we need to be hearing and telling, then, in this age of “Me Too” is not just about black women’s leadership, but about the tradition of leadership deeply embedded in black women’s community activism. The power is in the people, and the people need to be heard. Call and response.

The civil rights movement is (mis)remembered as a movement of black men for racial equality; Black Lives Matter is perceived in the media as a redress movement organised exclusively or predominantly around black men murdered by a systemically racist policing structure. In both cases, men’s names are foregrounded in activist histories that have been built up out of women’s labour and include women’s experiences. (Sandra Bland, say her name.)

As we tell the story of Me Too, let’s not forget or overlook the centrality of black women’s struggles for control over their own bodies in the evolution of contemporary activism against sexual violence.

Published by the good folks at The Elephant.

The Elephant is a platform for engaging citizens to reflect, re-member and re-envision their society by interrogating the past, the present, to fashion a future.

Follow us on Twitter.

Panashe Chigumadzi’s long, digressive article, “Black Skins White Masks Revisited: Why I am No Longer Talking to Nigerians about Race”, on the necessity for Nigerians to engage with the question of race, is purposely provocative. It also serves to mislead and misinform. For someone who obviously considers herself eminently qualified to speak in defense of “a radical anti-racist politics”, it would be appropriate to dwell a bit on what precisely her credentials are.

Admittedly, she has confessed to being schooled in white establishments virtually throughout her life, up to her current base at Harvard University. In the essay where she makes this confession (“Of Coconuts, Consciousness and Cecil John Rhodes: Disillusionment and Disavowals of the Rainbow Nation”) she also admits to being a “coconut” (black on the outside, white on the inside, the perennial Fanonian quandary), what conscious African-Americans would call a “coon”, or, in earlier times, an “Uncle Tom”. And so, on the basis of this “impressive” set of accomplishments, she feels, still under the age of thirty, qualified to challenge a nation of almost 200 million souls to engage with the problem of race in globally explicit ways.

Her other accomplishments include her role at the height of the #FeesMustFall campaign, when she was invited to Rhodes University, South Africa, to launch her novel, Sweet Medicine, in 2016. There had been a schism within the black students’ movement between purveyors of radical black thought and “integrationists” of the coconut stripe. White liberals were in full support of the integrationists, who had been indoctrinated to misread and misapply the teachings of radical black theorists. Chigumadzi had appeared on the platform of the integrationists, obviously at her “coconutic” best.

It really does take some nerve to castigate an entire nation with such incredible blitheness and glibness. It is even more difficult to assimilate when one reviews her “lofty” credentials. Her “coon” education obviously did not prepare her to appreciate the canonical import of works such as Chinua Achebe’s majestic Things Fall Apart, whose setting is in faraway Nigeria and not nearby within the southern tip of Africa, as she points out in her characteristically digressive essay, “Rights of Conquest, Rights of Desire”, which casually glosses over perhaps the most powerful as well as the most insightful exploration of the colonial encounter in all of literature. Instead she smuggles unwanted black bodies into the midst of racist white angst as if that in itself constitutes a gesture of racial reconciliation. And just like a true coconut, she had to find a place for the swart gevaar (the black threat) by means of the most remarkable kind of Conradian literary inversion.

Wole Soyinka, the icon of African literary creativity and redoubtable social activism, is briskly dismissed in the following manner: “Soyinka […] had been so unimpressed and impatient with the Negritude movement spearheaded by the Francophone writers of African descent”. To bolster her point, she cites the now tired and lame quip, “A tiger does not proclaim its tigritude.”

After the usual interminable digressions, she makes a case for “redeeming Nigerian Tigritude” by concluding that Nigerians lack the qualities of empathy and humility to truly become the giants of Africa. You really must possess considerable reserves of patience to isolate her central arguments, namely, Soyinka’s, and by extension, all Nigerians’, appalling unfamiliarity with global race dynamics. Ultimately, this debilitating unawareness precludes Nigerians from being suitable to be at the forefront of African political struggles.

Curiously, she lists the impressive achievements of Nigeria in combating apartheid in South Africa through the national levies it imposed on school children, the numerous diplomatic initiatives it launched or participated in, and the net donation of 61 billion dollars to the anti-apartheid struggle, and yet she cannot seem to think this is a most empathetic contribution.

Again, strangely, she fails to reflect on the scourge of Afrophobia plaguing South Africa, in which the business enterprises and bodies of foreign nationals – particularly Somalis, Ethiopians, Pakistanis, Zimbabweans and Nigerians – are razed almost weekly in exuberant public bouts of xenophobic rage. Of course, it is almost impossible to forge any kind of alliance or solidarity amid such constant orgies of rage, violence and destruction aimed at hapless foreigners. Rather than expect more empathy from Nigerians, it would be more logical to expect more gratitude from the proponents and culprits of Afrophobia.

Let us examine the myth that Nigerians have not been able to formulate the kind of emancipatory race politics Chigumadzi approves. Here, Soyinka immediately comes to mind. When he was eighteen years of age at the then University College Ibadan, Soyinka formed the first campus confraternity along with the likes of the renowned Cambridge-trained physicist, Muyiwa Awe, and others, such as the broadcaster, Ralph Okpara. Their confraternity was established to serve as a bulwark against undue colonial indoctrination on their white-dominated campus. So rather than uncritically accepting the acquiescence and complicity of the coconut, Soyinka was already questioning and resisting racial oppression and injustice before he had attained full maturity.

Eventually, Soyinka attended Leeds University to complete his undergraduate course but whilst abroad, he was thinking of returning home once his studies were over. For further personal studies, he sought to recuperate orders of knowledge that had been demonised, suppressed and erased by the agents and machinations of colonialism. It was not long before he adopted Ogun, the Yoruba deity of war, iron and justice, as his special guardian spirit contrary to the Western education he had received and the Christian background of the home in which his parents had raised him.

Soyinka’s inquiry into his beloved ancient Yoruba cosmogony led him to forge lifelong links with Yoruba-affiliated communities of the African diaspora based in Brazil, Cuba, Trinidad and Tobago, other places in the Caribbean and, of course, the United States. Undoubtedly, when he visited those countries, he never failed to promote the tigritude of his Yoruba ancestry and cosmogony. Such was the case when he met Henry Louis Gates Jr., the founder and director of the African and African American Studies Center at Harvard, where Chigumadzi is currently a PhD student.

Gates has admitted in various instances that, at Cambridge, Soyinka led him on a continuing journey to discover the truths about Africa that had been occluded by racist prevarication and indoctrination. Indeed, since then, they have continued to enjoy close and productive collaborations in developing and strengthening the discipline of Africana studies. Gates would also go on to popularise the figure of Esu, the Yoruba deity of the crossroads, wit and intelligence, in his landmark work, The Signifying Monkey (1988). In this work, Gates explores the various appropriations and survivals of Esu within the context of African American culture and literature.

Soyinka’s transcontinental exertions did not end there. He has undertaken missions at his own personal expense to attempt to retrieve invaluable artworks looted from Africa by European colonialists. He was immensely active during FESTAC 1977, the global black festival that brought artists and intellectuals of all persuasions to Lagos to celebrate and promote black cultures the world over. Indeed, his efforts and initiatives at seeking and cementing Africana ethics and poetics of solidarity are too numerous to mention and cannot be over-emphasised. In a context in which the notion of black excellence is increasingly becoming trite and perhaps meaningless, he remains a lodestar upon which we can begin a proper conversation.

Fela Anikulapo-Kuti is another exemplary figure who contributed enormously to black pride, agency and resurgence in incomparable ways. Incidentally, Anikulapo-Kuti and Soyinka are cousins, so it is no surprise that they share and practise similar kinds of global black solidarity. Anikulapo-Kuti’s radicalism made him adversaries amongst the elite political classes of his native Nigeria, and he was eventually hounded out of his country on account of his vociferous activism and oppositional poetics. Due to his uncompromising radicalism, doors closed on Anikulapo-Kuti everywhere; the foreign-owned record companies at home and abroad shunned him, and the international cartels made it difficult for him to have significant breakthroughs. Radio stations would not feature his compositions because he refused to sing three-minute hits, favouring instead half-hour-long tunes of great complexity and ingenuity.

When established record labels refused to release and market his music, he set up his own channels and platforms. In the global era of disco and vacuous, feel-good entertainment, his compositions seemed out of time by virtue of his trenchant ideological vision and his strident critiques of racism, imperialism, colonialism, neocolonialism and the international finance capitalism that impoverished and immiserated more or less all of Africa and much of what was then called the Third World.

During his lifetime, all the wealth Anikulapo-Kuti made was showered on the ill, the needy and the homeless, and when he passed away in 1997, he had almost nothing to his name, except perhaps the evergreen radiance and energy of his astonishing compositions.

His work was not confined to the west coast of Africa and its multiple diasporas. When visited Lagos in the early 1970s seeking fresh sources of inspiration, Anikulapo-Kuti hooked him up with the inimitable Ghanaian back-up combo that propelled him to greater musical horizons. Miriam Makeba, Stevie Wonder, Kiki Gyan, Sandra Izidore and Randy Weston, among others, at various times sought his unparalleled musical artistry and guidance in advancing their own projects. And just like his cousin Soyinka, Anikulapo-Kuti vigorously re-established connections that had existed in Africa before the advent of colonialism.

After studying European classical music and compositional techniques in London during the 1950s, he returned to Nigeria to study the indigenous methods of his ancestral forebears, paying particular attention to their spiritual aspects and trance forms.

Anikulapo-Kuti had every opportunity to be a certified coconut. His mother, Olufunmilayo, widely regarded as Nigeria’s first modern feminist, visited the socialist countries of Eastern Europe and China on questions of mutual interest. She was also a friend and collaborator of Kwame Nkrumah, the great exemplar of Pan-Africanist epistemology and praxis, when he was the President of Ghana.

Anikulapo-Kuti could have led a comfortably sequestered existence filled with the cheap glories of being a coconut, but he chose to align himself with the lowly lot of economic and political outcasts, cultural renegades and oppositional figures of all stripes who naturally irritated the custodians of worldly power. Like a true Pan-Africanist fighter, he elected to remain a thorn in the flesh of decadent and corpulent power until his inevitably tragic end. He excoriated figures such as P.W. Botha, the Prime Minister of apartheid South Africa, Margaret Thatcher of Great Britain, Ronald Reagan of the United States and, not least of all, the successive military rulers of Nigeria.

Employing the Pan-Africanist visions of Soyinka and Anikulapo-Kuti, it would perhaps be most appropriate to complexify the very notion of “the Nigerian”. Many Nigerians, in their reflective moments, know that it is an unfortunate and almost unbearable fabrication of the self-serving colonial enterprise. It is, in other words, a geographical entity of tragicomic proportions that was meant to frustrate and undermine its hapless inhabitants.

True, the inhabitants of Nigeria had always interacted in the precolonial days, but the modalities of interaction had been independent of arbitrary colonial interference. The new modalities of co-existence and co-operation, on the other hand, were funneled through the misshapen and counter-productive channels of colonialism. Those channels were not intended for the sociopolitical success of postcolonial Nigerians, just as they were not for most of the colonised world.

And so the geographical entities of postcoloniality always pose questions regarding their ultimate viability as largely baseless colonial constructs. Chigumadzi, however, is unable to see the incongruity and innate discomfort in saying, as a Zimbabwean-born South African (or whatever identity she chooses to adopt), “I am able to castigate Nigerians for their perceived lack of empathy and ethics of solidarity.” Colonial African geographical constructs were simply not designed for that purpose.

Soyinka has variously denounced this untenable situation, with harsh words for the Organisation of African Unity (OAU), the precursor to the present African Union (AU), which uncritically sanctioned this gross and violent colonial misadventure, yet another deleterious scheme to violate and undermine African communities. This is why Nigerians and Ghanaians, for instance, can needlessly squabble over seemingly meaningless and counterproductive trivia without seeing that they had once enjoyed more humane and beneficial relations in abundance before the unwholesome truncation of colonialism. Chigumadzi’s rant is merely an extension of this ahistorical postcolonial mindset, or is it myopia: the inability to interrogate, negate and (re)negotiate colonial African geographical constructs, which are instead accepted as eternal givens.

If this radical questioning remains ignored, and these constructs are not approached with a healthy dose of scepticism, preposterous political scenarios and vast genocidal scenes of utter disarray are likely to abound, precisely because we have accepted to be the slavish “coconuts” of unsustainable postcolonial geographical dispensations.

The uncritical subscription to a colonialist project of identification in the wake of the devastation of colonialism that differentiates Zimbabweans, South Africans, Kenyans, Ghanaians or Nigerians as bearers of immutable forms of identity and subsequently pits them constantly against each other, undoubtedly bodes ill for any conception of mutuality, or indeed, solidarity.

But even if we were to subscribe to the colonial geographical markers of identity as Chigumadzi does, Nigerians have been in the forefront of practising Egyptian theorist Samir Amin’s concept of “delinking”. Employing this concept, Amin argues for the decoupling of peripheralised African economies from the invariably inequitable global monetary system that enforces a centre/periphery dichotomy that reduces Africans to suppliers of primary products while the West plays the dominant role of manufacturers as well as incubators of technological innovation and advancement.

Rather than mentioning counter-paradigmatic Nigerian social scientists such as Ola Oni, Sam Aluko, Adebayo Adedeji, Claude Ake, Bade Onimode, Omafume Onoge, Adebayo Olukoshi and a plethora of others who have offered the most devastating critiques of the Bretton Woods institutional order that, beginning in the 1970s, all but crippled the growth of African educational establishments through the toxic mantra of profits-before-people, deregulation and privatisation, Chigumadzi instead chooses to linger on the forgettable work of Chika Onyeani, a reactionary, self-nullifying, anti-black character and a darling of the white liberal press in South Africa, who simply does not register in the ever-vibrant discourse of Nigerian socio-economic theory.

If Chigumadzi is really concerned about pursuing a politics of global black emancipation – as she might perhaps imagine herself to be – she ought to be critiquing the bastions of white supremacy that have provided her the leeway from which to cast aspersions on Nigerians. Attacking Nigerians is indeed diversionary; she ought instead to embark on a quest for reparations for the descendants of the transatlantic slave trade, as the late Nigerian politician, businessman and philanthropist Moshood K.O. Abiola did with uncommon vigour, commitment and immense sacrifice before his death in 1998.


For Chigumadzi to claim Nigerians are unaware of the problem of race is tantamount to ascribing to them an ignorance of a slave trade that wreaked extreme devastation on their territories, and across the entire West African region along with the lands of Angola and the Congo. Ancestral blood from those various territories, in spite of all protestations to the contrary, was largely responsible for creating the wealth of Europe and the Americas as we know them today. An appropriate global politics of black emancipation and inclusivity would need to calibrate these historical realities rather than being cocooned within the safe enclaves of racist power and privilege and then finding easy discursive targets amongst millions of toiling black folk.



By Paul Tiyambe Zeleza

The “Ayyaantuu” are a body of persons within Ethiopia’s Oromo people whose life’s work is calculating time, using a complex system of numerology and astronomy to predict everything from weather patterns for use in agricultural planning to moments of societal upheaval.

It is slowly being discovered that they maintained, in antiquity, a series of astral observatories all along the length of the eastern Rift Valley, through which they mapped the visible universe, named stars and planets, and developed a calendar system that recycles itself every three hundred and sixty-five years.

Their other tools were a forked sighting staff, still carried by Oromo herdsmen today, and a string of lakes along the length of the valley floor that, curiously, lie in the pattern of a star system above them.

Perhaps the last of these observatories has finally been acknowledged as such at Namoratunga in northern Kenya, where most of the star-aligned stone pillars are still intact.

They had observed a comet, and calculated that it was set to return every seventy-five years.

In 1682, the astronomer Edmond Halley (1656-1742), using Newtonian laws of motion to compute the trajectory of the same comet even after it had departed, came to the same conclusion. The comet is now named after him, except in Oromo, where it is called “Gaalessa”.

Gems like this were part of a veritable avalanche of hitherto lesser-documented information that came flooding out during and after the thirty-third conference of the Oromo Studies Association (OSA). The gathering, held between 26th and 27th July, was historic in many ways. It was the first time the OSA had ever been able to hold a conference on Ethiopian soil.

Out of over 100 papers submitted, there were some fifty-six presentations covering topics ranging from ecological management and history to constitutionalism, culture and economics.

OSA was founded by a group of exiled activists in 1986, in response to a crackdown in which those campaigning for greater recognition of the Oromo people and their culture were murdered, tortured, jailed or driven out of the country. There is a long and a short background to this.

As a people, the Oromo number over thirty-five million, spread in all directions from Addis Ababa, which was itself Oromo territory before the founding of the modern Ethiopian state. They constitute a solid third of the country’s overall population.

Ethiopia has travelled its own uncolonized journey in the quest to build a modern, unified African country. Nevertheless, this quest has run into many of the same problems experienced by the rest of sub-Saharan Africa, namely, what to do with those sections of the population that still define themselves as other things, other nations even, predating the idea of the new state.


In post-European Africa, the story was quite straightforward: Africans argued that the continent must re-embrace its indigenous customs and institutions, and set aside the legacies derived from the long European colonial occupation.

The Ethiopian story allowed for the side-stepping of that question, for a while at least. The official argument has always been that the Ethiopian state is an independently-founded African institution, and that therefore those arguments do not apply.

The reigns of Emperor Haile Selassie (1930-1974) and Colonel Mengistu Hailemariam (1974-1991) saw fealty to the ideal firmly established by Selassie’s predecessor, Emperor Menelik II (1889-1911): that all of Ethiopia was to be assimilated into one Amharic-speaking Orthodox Christian culture.

The politics of the wars of resistance to Mengistu’s brutal Dergue rule led to the ascension of a government obliged to make specific statutory recognition of the country’s ethnic landscape, despite the numerous schemes by the new strongman, the late Meles Zenawi (1991-2012), to undermine this game-changing arrangement.

The April 2018 resignation of Meles’ successor, Prime Minister Hailemariam Desalegn, was a direct result of mass protests triggered by the government’s attempt to expand the boundaries of the already disputed city of Addis Ababa further into Oromo federal territory.

A reality now exists: a people mobilised in a political context where their previously hidden identities are now constitutionally recognised.

This is the political inheritance that Desalegn’s own successor, Prime Minister Abiy Ahmed, is currently grappling with.

From its founding, OSA has functioned as a de facto think tank, policy forum and perhaps virtual parliament for the aspired-for Oromiyya nation-state.

Finally, with this homecoming conference, the enforced diaspora was able to meet and encounter those who had never left home, and many in between.

The Oromo point of view is very straightforward: they say they are the largest colony in the Empire set in motion by Emperor Sahle Selassie in the 1840s, massively expanded militarily by Emperor Menelik II, and then consolidated through a series of recognition treaties with the European powers. Assimilation and cultural erasure were the particularly emphasized aspects of this process. The Oromo point to a long-standing need for effective decolonization. At the very least, they argue, this should mean the actual implementation of the full meaning of the 1995 Constitution that for the first time recognized Ethiopia’s separate nations. At the most, it could mean secession (an option also provided for in the same constitution).

Within Ethiopian political discourse (and even beyond), this stance provokes a whole spectrum of reactions, from the deeply considered, to the nakedly visceral. It has been the primary driver of the culture of political intolerance in Ethiopia.


Take the case of Ruda Kura, a Sayyoo clan elder who lived between 1870 and 1974. He endured monstrous deprivations, including being chained to a tree in a public square for three years and being publicly flogged, for his refusal to pay taxes to, or otherwise endorse, the imposed Menelik state structures.

Much of such history is not widely known, not just in wider Ethiopia, but even among the current younger generations of Oromos themselves. And where it is known, there are often numerous academicized and historicized apologia seeking to explain it away.

This is where OSA’s relevance came in.

The first goal was to set the historical record straight, whatever the potential outcomes. This included the possibility of a consensus being arrived at that, despite the long-standing historical injustices, perhaps Ethiopia should just struggle on as a unitary, monolingual state.

But it is simply not possible to have a productive discussion on a way forward if “half the story has never been told”, as Bob Marley aptly put it.

And it is simply not possible to tell that half of the story if it has never been documented, and if those carrying it in their hearts and memories are dismissed as unreliable, inauthentic sources because they do not speak the language of academia.

This was a mission to re-define knowledge, and have it recognized as such.

It is a story with which many other native populations would be familiar. However, in the Ethiopia/Oromo case there was also a very longstanding, vigilant and meticulous system of censorship and policing within academia to prevent this other knowledge being produced in the first place.

OSA was established to carry out an “engaged scholarship” aimed at telling the full Oromo story, recovering and conserving the embattled indigenous knowledge, and researching the continued effects of what they see as a sustained colonial occupation aimed at erasing them.

The significance of the conference revealed itself only slowly, in many public and private moments. The appointed interim President of the Oromo federal unit attended the opening, made a short speech, and listened to some of the early presentations. This was followed by the mayor of Addis Ababa attending the opening of the last day and giving his own speech. Neither had been on the programme, and never before had Oromo natives holding office spoken so freely to an independent Oromo native gathering critical of the Ethiopian state. It was also a homecoming for many members after four-decade separations, such as for the Jalata family, whose member, the activist Professor Asafa Jalata, had been exiled in the United States.

It was triply significant for the American researcher, activist and academic Bonnie Holcomb, author of the 1991 book The Invention of Ethiopia: The Making of a Dependent Colonial State in Northeast Africa, who had been arrested and eventually banned from the country altogether back in the 1970s for documenting the Oromo experience that informed the work.

She was able to finally return through this conference. In her time, she has seen the culture move from being essentially banned and demonised to nominally statutorily recognised, and the organisation she co-founded finally make its way home, to discover and connect with two generations of home-based activism.

A second major OSA goal was to generate reflection on what contemporary thinking on “Development” means for the Oromo people. This is partly because the Oromo areas of Ethiopia constitute the breadbasket of the country, and as such, any objections to further “development” projects (read: eviction and environmental destruction) were dismissed as the thoughts of a backward people. Many native peoples can learn from this.

A new approach is needed to get beyond the crisis that five hundred or more years of dominant Western thought has now imposed upon the planet. The planet has reached a point where it may no longer be able to sustain human life, and possibly other forms of life, especially mammalian. Western thought’s underlying Abrahamic exhortation to “…multiply…fill the earth and subdue it…” (Genesis 1:28) is about to kill us all.

Key to this new approach will be resetting humanity’s relationship with the rest of nature. For that to happen, humanity will have to reach deep into those areas of human knowledge hitherto marginalised and downgraded by the great White experiment, for answers. Only those peoples who, despite colonialisms and attempted genocides, still held on to their pre-Abrahamic knowledge systems or have the means of reconstructing them, can help.

The Oromo are a prime example of this.

Through their book, Sacred Knowledge Traditions of the Oromo of the Horn of Africa, researched over a period of some three decades, Dr Gemetchu Megerssa and Dr Aneesa Kassam have finally managed to capture the detailed outline of this thought system, aspects of which have been recognized by the United Nations Educational, Scientific and Cultural Organisation (UNESCO) as part of the intangible human cultural heritage.

Apart from astronomy and numerology, the Oromo offer much to learn regarding autonomous governance, democratic governance and the management of power (political authority is handed to a new age-set through elections every eight years), organic agriculture (the world-renowned Boran bull species is a product of the indigenous breeding knowledge of the Booran branch of the Oromo) and spiritual care.

This is a classic case of the re-definition of knowledge. The primary source for this great study was a series of initiation sessions into which Gemetchu was inducted as a young man, in search of a deeper understanding of the Oromo system. His key teacher was Bulee Gayyoo, who agreed to pass on the teaching upon establishing that Gemetchu was, in fact, Ruda Kura’s paternal grandson.


Among his people, Bulee Gayyoo was an ilmaan korma, a first son born when his own father was forty years old. This meant he was “born within time”, and aligned with the Oromo Gadaa time system, giving him special responsibilities as a custodian of its knowledge.

In Kenya, he lived first as a night watchman and then as a cattle-labourer in Kariobangi market, residing in the slums of Mathare Valley, where the teaching sessions took place. He passed on in 2003. Now he lives on in the form of a deeply researched book. How much of the knowledge held by people such as him never made this journey? How much has been lost to the vanities and strictures of Western-inspired academia?

But there is more: the recovered Oromo story also offers the foundation for a greater study of the black Kushite civilizational system that gave rise to the black civilization of Khemet, better known as Ancient Egypt.

With Oromo, OSA may have found the place where the proper historical reconstruction of the actual African story may begin.



By Paul Tiyambe Zeleza

Four hundred years ago, in late August 1619, a slave ship named the White Lion landed on the shores of Point Comfort, in what is today Hampton, Virginia. On board were more than 20 African women and men who had been seized from a Portuguese ship, the São João Bautista, on its way from Angola to Veracruz in Mexico. Virginia, the first English colony in North America, had been founded only twelve years earlier, in 1607.

Thus, the two original sins of the country that would become the United States of America, the forcible seizure of the lands of the indigenous people, and the deployment of forced labor from captive and later enslaved Africans, began almost simultaneously. The Africans were stolen people brought to build stolen lands, as I noted in the lead short story in my collection, The Joys of Exile, published in 1994.

I attended the First Landing Commemorative Weekend in Hampton, Virginia on August 23-24, partly for professional reasons, as a historian who has done extensive work on African diasporas, and partly in homage to my acquired diaspora affiliations and the diaspora identities of some key members of my immediate family, including my wife and daughter.

At the events in which I participated, I was enraptured by the stories and songs and performances of remembrance. And I was inspired by the powerful invocations of resilience, the unyielding demands for responsibility and reparations, and the yearnings for redemption and recovery from what some call the post-traumatic slave syndrome.

The emotions of the multitudinous, multiracial and multigenerational audiences swayed with anger, bitterness and bewilderment at the indescribable cruelties of slavery, segregation, and persistent marginalization for African Americans. But there was also rejoicing at the abundant contributions, creativity, and the sheer spirit of indomitability, survival and struggle over the generations. We still stand, one speaker proclaimed with pride and defiance, to which the audience beamed and chanted, “Yes, we do!”

The scholars brought their academic prowess as they methodically peeled the layers of falsehoods, distortions, and silences in the study of American history and society. They unraveled the legacies of slavery on every aspect of American life from the structure and destructive inequities of American capitalism to what one called the criminal injustice system rooted in the slave patrols of the plantations, as well as the history of struggles for democracy, freedom and equality that progressively realized America’s initially vacuous democratic ideals.

The artists and media practitioners assailed and celebrated the 400 years of pain and triumphs. They exhorted the power of African Americans telling and owning their stories. A renowned CNN pundit reminded the audience that there are four centers of power in the United States, namely, Washington (politics), Wall Street (finance), Silicon Valley (digital technology), and Hollywood (media), and that African American activists have to focus on all of them, not just the first.

The politicians implored the nation to confront the difficult truths of American history with honesty and commitment. The two former governors and the current governor of Virginia paid tribute to the centrality of African American history and to the role of African Americans in bridging the yawning contradiction between the claims of representative democracy and the heinous original sin and exclusions of slavery. They proceeded to promise various policy remediations. Black members of Congress bemoaned the incomplete progress made in the march to freedom and inclusion and denounced the resurgence of hate, racism and white supremacy. An eleven-year-old orator electrified the crowd with his passionate plea for fostering a community of care and kindness that would make the ancestors proud.

Two hundred and forty-one years after the arrival of the first Africans in Hampton, in the summer of 1860, the last ship to bring African captives to the shores of the United States landed north of Mobile, Alabama. The Clotilda brought 110 women, men, and children. The Senegalese historian Sylviane Diouf has told their story with her characteristic care, compassion and eloquence in her book, Dreams of Africa in Alabama.

The following year, in April 1861, the American Civil War broke out, primarily over the institution of slavery. The abolition of slavery finally came in 1865. By then, hundreds of ships had plied the Atlantic and brought nearly half a million African captives to the United States. They and their descendants endured 246 years of servitude and slavery, a century of Jim Crow segregation, and another half a century of an incomplete and contested civil rights settlement.

The African men and women who landed as captives in Hampton arrived out of two confluences of pillage: in Angola and in the Atlantic. They were pawns in the imperial rivalries and internecine wars engendered by the burgeoning slave-based Atlantic economy enveloping what became the insidious triangle of western Africa, western Europe, and the Americas that emerged from the early 1500s.

But even in their subjugation they were history makers. They became indispensable players in the construction of Atlantic economies and societies. In short, their history of servitude that before long calcified into slavery, is the history of the United States of America, of the making of the modern world in all its complexities and contradictions, tragedies and triumphs, perils and possibilities.

By the time the first captive Africans arrived in Virginia, more than half a million Africans had already crossed the horrendous Middle Passage to the incipient Portuguese, Spanish, and English colonies of South America and the Caribbean. In fact, they were preceded in several parts of North America itself by Africans who came with the conquistadors from the Iberian Peninsula both in servitude and freedom. For example, the first recorded person of African descent to reach Nova Scotia, Canada in 1604 was Mathieu Da Costa, a sailor and translator for French settlers from Portugal.

It is critical to remember that the Iberian Peninsula had been conquered in 711 by northwest Africans who ruled parts of the region for eight centuries (in Eurocentric textbooks they are often referred to as Moors, Muslims, or Arabs). Later, the descendants of Africans brought as captives to Spain from the 1440s, sometimes referred to as Afro-Iberians, plied the Atlantic world as sailors, conquistadors, and laborers in the conquest and colonization of the Americas. For the United States, it appears that in 1526 enslaved Africans rebelled against a Spanish expedition and settlement in what is today South Carolina.

This is to underscore the importance of placing the arrival of Africans in Virginia in 1619 in a broader historical context. Their horrendous journey, repeated by 36,000 slave ships over the centuries, was embedded in a much larger story. It was part of the emergence of the modern world system that has dominated global history for the last five hundred years, with its shifting hierarchies and hegemonies, but enduring structures and logics of capitalist greed, exploitation, and inequality. I found the broader trans-Atlantic and global contexts somewhat missing from the commemorations in Hampton.

The new world system that emerged out of the inhuman depredations of the Atlantic slave trade and slavery, and the economic revolutions it spawned, was defined by its capitalist modernity and barbarism. It involved multiple players comprising political and economic actors in Europe, Africa, and the expanding settler societies of the Americas. Scaffolding it was the ideology of racism, the stubborn original fake news of eternal African inferiority, undergirded by physiological myths about African bodies. Racism was often supplemented by other insidious constructs of difference over gender and sexuality, religion and culture.

Much of what I heard at the Commemorative Weekend and read in the American media, including the searing and sobering series of essays under “The 1619 Project” in The New York Times powerfully echoed the academic literature that I’m familiar with as a professional historian. Befitting the nation’s most prestigious paper, The 1619 Project is ambitious: “It aims to reframe the country’s history, understanding 1619 as our true founding, and placing the consequences of slavery and the contributions of black Americans at the very center of the story we tell ourselves about who we are.”

The essays paint a complex and disturbing picture of American history. One traces the shift from forced labor, which was common in the Old World, to the rise of commercialized, racialized, and inherited slavery in the Americas, and how this ruthless system generated enormous wealth and power for nation states in Europe and the colonies, institutions including the church, and individuals. As the plantation economy expanded, the codification of slavery intensified into a rigid system of unmitigated exploitation and oppression.

Another essay underscores how the back-breaking labor of the enslaved Africans built the foundations of the American economy, and how cotton became America’s most profitable commodity, accounting for more than half of the nation’s exports and of the world supply, and generating vast fortunes. Yet the enslaved Africans had no legal rights to marry or to justice in the courts; they could not own or inherit anything, not even their bodies or offspring, for they were chattel, property that could be sold, mortgaged, violated, raped, and even killed at will; and they had no rights to education and literacy.

One contributor to the series states categorically that “In order to understand the brutality of American capitalism, you have to start on the plantation.” Key institutions and models that have come to characterize the American economy were incubated on the plantation. They include the relentless pursuit of measurement and scientific accounting, workplace supervision, the development of the mortgage and collateralized debt obligations as financial instruments, and the creation of large corporations. Slavery made Wall Street, America’s financial capital. In short, slavery is at the heart of what one author calls the country’s low-road capitalism of ruthless accumulation and glaring inequalities.

But the contributions of African Americans went beyond the economic and material. Several essays discuss and applaud their cultural contributions. Music is particularly noteworthy. Much of the quintessential American music exported and consumed ravenously across the world is African American, from blues to rock and roll to gospel and beyond. Forged in bondage and racial oppression, it is a tribute to the creativity and creolization of diaspora cultures and communities, the soulful and exuberant soundtrack of an irrepressible people.

One could also mention the indelible imprints of African American cuisine, fashion, and even the aesthetics of cool. We also know now, through the work of African American historians, activist scholars and others, such as Craig Steven Wilder in his groundbreaking book, Ebony and Ivy: Race, Slavery, and the Troubled History of America’s Universities, that the growth of America’s leading universities, from Harvard to Yale to Georgetown, and some of the dominant intellectual traditions are inextricably linked to the proceeds and ideologies of slavery.

No less critical have been the massive contributions by African Americans to defining the very idea of freedom and expanding the cherished, but initially rhetorical and largely specious, ideals of American democracy. Juxtaposed against the barbarities of plantation economies was the heroism of slave resistance, including rebellions. It is the generations of African American struggles that turned the United States from a slavocracy (10 of the first 12 presidents were slave owners) into a democracy.

It is they who turned the ideal and lie of democracy into reality, paving the way for other struggles, including those for women’s, gay, immigrant, and disability rights, that engulfed 20th century America and still persist. The struggles were both overt and covert, militant and prosaic, episodic and quotidian. They started among the captives en route to the slaveholding dungeons on the coasts of western Africa, and continued through the Middle Passage, on the plantations, and in the mushrooming towns and cities of colonial America.

The African American struggles for human rights peaked during Reconstruction, as electoral offices opened to them and the 13th, 14th and 15th amendments were passed, outlawing slavery, guaranteeing birthright citizenship, and securing the right to vote, respectively. But these advances soon triggered a backlash that ushered in the racial terror of Jim Crow, which reinstated the caste system of American racism for nearly a century.

After the Second World War the country was convulsed by the long crusade for civil rights that resulted in the Civil Rights and Voting Rights Acts of 1964 and 1965, respectively. But as with every victory in America’s treacherous racial quagmire, a racist counteroffensive soon erupted, which intensified during and after the historic Obama presidency. And the struggle continues today in myriad ways and venues.

The Atlantic slave trade and slavery in the Americas have generated some of the most heated debates in the historiographies of modern Africa, the Americas, Europe, and the world at large. A trading and labor system in which the commodities and producers were enslaved human beings cannot but be highly emotive and raise troubling intellectual and moral questions.

The controversies centre on several issues, five of which stand out. There are, first, fierce debates about the total number of Africans exported; second, the demographic, economic and social impact of the slave trade on Africa; third, the impact of Africans and slavery on the development of economies, societies, cultures and polities in the Americas; fourth, the role of the Atlantic slave trade and slavery in the development of industrial capitalism in the western world generally; and finally, the contentious demands for reparations for the slave trade and slavery that have persisted since abolition.

In so far as the Atlantic slave trade remains the foundation of the modern world capitalist system and the ultimate moral measure of the relationship between Africa, Europe, and the Americas, between Africans and Europeans and their descendants in modern times, the amount of intellectual and ideological capital and heat the subject has engendered for the past half millennium should not be surprising. Predictably, also, all too often many scholars and ideologues hide their motives and biases behind methodological sophistry, rhetorical deflections, and outright lies.

Many of the contemporary disputes are as old as the Atlantic slave trade itself. Two approaches can be identified in the debates, although there are considerable overlaps. There are some, especially those of European descent, who tend to minimize the adverse impact that the slave trade had on Africa and Africans on the continent and on the enslaved Africans in the diaspora. Others, mostly of African descent, tend to emphasize the role of the slave trade in the underdevelopment of Africa, development of the Americas and Western Europe, and the marginalization and reconstruction of African diaspora cultures and communities in the Americas.

The Atlantic slave trade began slowly in the 15th century, then grew dramatically in the subsequent centuries, reaching a peak in the 18th and 19th centuries. The trade was dominated first by the Portuguese in the 15th and 16th centuries, then by the Dutch in the 17th century, the British in the 18th century, and the Europeans settled in the Americas (e.g., USA, Cuba, Brazil, etc.) in the 19th century.

The bulk of the enslaved Africans came from the western coast of Africa covering the vast regions of Senegambia, Upper Guinea Coast, Gold Coast, Bight of Benin, Bight of Biafra, Congo and Angola. In short, West and Central Africa were the two major streams of enslavement that flowed into the horrific Middle Passage to the Americas.

The Atlantic slave trade was triggered by the demand for cheap and productive labour in the Americas. Attempts to use the indigenous peoples floundered because they were familiar with the terrain and could escape, and they were increasingly decimated by exposure to strange new European diseases and the ruthless brutalities and terror of conquest. And it was not possible to bring laborers from Europe in the quantities required. In the 16th and 17th centuries Europe was still recovering from the Black Death of the mid-14th century that had wiped out between a third and half of its population.

And so attention was turned to western Africa. Why this region, not other parts of Africa or Asia for that matter, one may wonder. Western Africa was relatively close to the Americas. If geography dictated the positioning of western Africa in the evolving and heinous Atlantic slave trade, economics sealed its fate.

The African captives were highly skilled farmers, artisans, miners, and productive workers in other activities for which labor was in great demand in the Americas. Also, unlike the indigenous peoples of the Americas, they were more resistant to European diseases since the disease environments of the Old World of Europe, Africa and Asia overlapped.


Furthermore, the captives were stolen. Slavery entailed coerced, unpaid labor, which made both the acquisition of captives and the use of slave labor relatively cheap. The captives were acquired in several ways, predominantly through the use of force in the form of warfare, raids and kidnapping. Judicial and administrative corruption also played a role, sentencing into servitude, often capriciously, people accused of violating the rules of society or of witchcraft. Some were seized as a form of tribute and taxation.

Thus the process of enslavement essentially involved the violent robbery of human beings. The families of the captives who disappeared never saw them again. Unlike the families of voluntary European migrants to the Americas, or of contemporary migrants from Africa, the families of the captives never got anything for the loss of their relatives. There were no remittances.

And few ever saw Africa or the wider world again, except for the sailors who plied the Atlantic. The exceptions also include individuals like Olaudah Equiano, who left us his remarkable memoir, The Interesting Narrative of the Life of Olaudah Equiano. There are also the striking stories of return to Africa among some of those whose memoirs are recorded in Allan D Austin’s pioneering compendium, African Muslims in Antebellum America.

For their part, the slave dealers, from the local merchants and rulers in Africa to the European merchants at the hideous fortresses that dot the coasts of western Africa and the slave owners in the Americas, shared in the ill-gotten gains of captivity, servitude, and enslavement. One of the difficult truths we have to face is the role of Africans in the Atlantic slave trade, a subject that casts a pall over relations between continental Africans and the historic diaspora in the Americas.

African merchants and ruling elites were actively involved in the slave trade, not because their societies had surplus population or underutilized labour, as some historians have maintained, but for profit. They sought to benefit from trading a “commodity” they had not “produced,” except for its transport to the coast. The notion that they did not know what they were doing, that they were “bamboozled” by the European merchants, is just as untenable as the view that they generated, controlled, or monopolized the trade.

To assume that African merchants did not profit because their societies paid a heavy price is just as ahistorical as to equate their gains with those of their societies. In other words, African slave traders pursued narrow interests and short-term economic calculations to the long-term detriment of their societies. It can be argued that they had little way of knowing that their activities were under-populating and under-developing “Africa,” a configuration that hardly existed in their consciousness or entered into their reckoning.

However, Europe and European merchants bear ultimate responsibility for the Atlantic slave trade. It was the Europeans who controlled and organized the trade; African merchants and rulers did not march to Europe to ask for the enslavement of their people, and in fact some actively resisted it. It was the Europeans who came to buy the captives, transported them in their ships to the Americas, and sold them to European settlers who used them to work the mines and plantations, and to build the economic infrastructure of the so-called New World.

Clearly, the consequences of the Atlantic slave trade varied significantly for Africa on the one hand and Europe and the Americas on the other. While much of the historiography focuses on the economic underdevelopment of Africa and the economic development of the Americas and Europe, this needs to be prefaced by the uneven and unequal demographic impact.

As noted earlier, there is considerable debate on the numbers of captive and enslaved Africans. The late American historian Philip Curtin, in his 1969 book, The Atlantic Slave Trade: A Census, estimated that 9,566,100 African captives were imported into the Americas between 1451 and 1870. His followers proposed slight upward adjustments as more data became available. In much of the western media, including The New York Times’ 1619 Project, the figure quoted is 12.5 million.


In a series of articles and monographs, Joseph Inikori, the Nigerian economic historian, questioned the computation methods of Curtin and his followers and the quality of the data they employed, particularly the underestimation of the slave imports of Spanish, Portuguese and French America. He suggested a 40 per cent upward adjustment of Curtin’s figures which brings the Atlantic slave exports to a total of 15.4 million, of whom about 8.5 million were from West Africa and the rest from Central Africa.

The exact number of African captives exported to the Americas may never be known, for there may be extant sources not yet known to historians, as well as others that have been lost. Moreover, it is difficult to establish the number of captives who arrived through the clandestine or “illegal” trade, and of those who died between embarkation and arrival in the New World in both the “legitimate” and the clandestine trade. Even harder to discern is the number of captives who died during transit to, or while at, the coast awaiting embarkation, and of those who were killed during slave wars and raids.

As I argued in my 1993 book, A Modern Economic History of Africa, the “numbers game” is really less about statistical exactitude than about the degree of moral censure. It is as if by raising or lowering the numbers, the impact of the Atlantic slave trade on the societies from which the captives came, and on the enslaved people themselves, can be increased or decreased accordingly. There is a long tradition in Western scholarship of minimizing the demographic impact of the slave trade on Africa. It began with the pro-slavery propagandists during the time of the Atlantic slave trade itself.

There is now considerable literature showing that the Atlantic slave trade severely affected the demographic processes of mortality, fertility and migration in western Africa. The regions affected by the slave trade lost population directly through slave exports and deaths incurred during slave wars and raids. Indirectly, population losses were induced by epidemics caused by increased movements of people, and by famines brought about by the disruption of agricultural work and flight to safer but less fertile lands. All the available global estimates seem to agree that by 1900 Africa had a lower share of the world’s population than in 1500. Africans made up 8% of the world’s population in 1900, down from 13% in 1750. It took another 250 years for Africa’s population to return to its 1750 share; it reached 13.7% of the world’s population in 2004. Inikori has argued that there would have been 112 million additional people in Africa had there been no Atlantic slave trade.


This is because the slave trade also altered the age and gender structures of the remaining populations, and the patterns of marriage, all of which served to depress fertility rates. The people who were exported were largely between the ages of 16 and 30, that is, in the prime of their reproductive lives, so their forced migration depressed future population growth. Moreover, they were lost at an age when their parents could not easily replace them, owing to declining fertility.

The age structure of the population left behind became progressively older, further reinforcing the trend toward lower growth. Thus population losses could not easily be offset by natural increases, certainly not within a generation or two. The gender ratio of those exported was generally 60 per cent men to 40 per cent women. This affected marriage structures and fertility patterns. The proportion of polygynous marriages increased, which, since it may have meant less sexual contact for women than in monogamous marriages, probably served to depress fertility as well.

The fertility of the coastal areas was also adversely affected by the spread of venereal diseases and other diseases from Europe. The Mpongwe of Gabon, for instance, were ravaged by syphilis and smallpox, both brought by European slave traders. Smallpox epidemics killed many people, including those at the peak of their reproductive years, which, coupled with the disruption of local marriage customs and the expansion of polygyny, served to reduce fertility.

Thus, for Africa, the Atlantic slave trade led to depopulation, depleted the stock of skills, shrank the size of markets and weakened the pressures for technical innovation. At the same time, the violence associated with the trade devastated economic activities. It has been argued that the Atlantic slave trade aborted West Africa’s industrial take-off.

It was not just the demographic and economic structures that were distorted by the slave trade; social and political institutions and values were also affected, so that even after slavery in the Americas was abolished, the infrastructures developed to supply captives for enslavement remained, and were now used to expand local labour supplies to produce the commodities demanded by industrializing European economies. As the great radical Guyanese historian Walter Rodney argued in the late 1960s, the slave trade contributed to the expansion of slavery within Africa itself, rather than the other way round, as propagated by Eurocentric historians.

The sheer scale and longevity of the Atlantic slave trade generated cultures of violence and led to the collapse of many ancient African states and the rise of predatory slave states. Thus it has been argued that the slave trade was one of the main sources of corruption and political violence in modern Africa. The political economy of enslavement tore apart the moral economy of many African societies. Contemporary Africa’s crass and corrupt elites that mortgage their countries’ development prospects are the ignominious descendants of the slave-trading elites of the horrific days of the Atlantic slave trade.

In contrast to Africa, the Atlantic slave trade and slavery in the Americas became the basis of the Atlantic economy from the 16th until the mid-19th century. It was the world’s largest and most lucrative industry. The crops and minerals produced by the labor of enslaved Africans such as sugar, cotton, tobacco, gold and silver were individually and collectively more profitable than anything the world had ever seen. This laid the economic foundations of the Americas, and the economic development of Western Europe more broadly.

Inikori argues persuasively in his award-winning book, Africans and the Industrial Revolution in England, that Africans on the continent and in the diaspora were central to the growth of international trade in the Atlantic world between the 16th and 19th centuries, and to industrialization in Britain, the world’s first industrial nation and the leading slave-trading nation of the 18th century. As Europe became more industrialized, it acquired the physical capacity, as well as the insatiable economic appetite and the ideological armor of racism, to conquer Africa.

Thus, the colonial conquest of the late 19th century was a direct outcome of the Atlantic slave trade. Instead of exporting captive labor, the continent was now expected to produce the commodities in demand by industrializing Europe and serve as a market for European manufactures, and an investment outlet for its surplus capital.

There can be little doubt the Atlantic slave trade and enslaved Africans laid the economic, cultural, and demographic foundations of the Americas. It is often not well appreciated that it was only with the end of the slave trade that European immigrants, whose descendants now predominate in the populations of the Americas, came to outnumber forced African immigrants to the Americas.

For the United States, the median arrival date of African Americans (the date by which half had arrived and half were still to come) is remarkably early: about the 1780s. The comparable median date for European Americans is remarkably late: about the 1890s. In short, the average African American has far deeper roots in the United States than the average European American.

As Walter Rodney showed in his provocative 1972 classic, How Europe Underdeveloped Africa, which became the intellectual bible for my generation of undergraduates hungry to understand why Africa remained so desperately poor despite its proverbially abundant natural resources, slave labor built the economic infrastructure of the Americas, and trade in the produce of slave labor provided the basis for the rise of manufacturing, banking, shipping, and insurance companies, as well as the formation of the modern corporation and transformative developments in technology, including the manufacture of machinery.


The contributions of captive and enslaved Africans are greater still. African musics, dance, religious beliefs and many other aspects of culture became key ingredients of the new creole cultures of the Americas. This makes the notion of the Americas as an autogenic European construct devoid of African influences laughable. The renowned Ghanaian-American philosopher Kwame Anthony Appiah correctly urges us, in his book The Lies That Bind: Rethinking Identity, to give up the idea of the West and the attendant vacuous notions of western civilization and western culture, which are nothing but racially coded euphemisms for whiteness.

The Americas, including the United States, have never been, and will never be, an exclusive extension of white Europe, itself a historical fiction, notwithstanding the deranged fantasies of white supremacists. Brazil, the great power of South America, tried a whitening project following the belated abolition of slavery in 1888, importing millions of migrants from Europe, but failed miserably. Today, Afro-Brazilians are in the majority, although their evident demographic and cultural presence pales in comparison to their high levels of socioeconomic and political marginalization.

The Atlantic slave trade, the largest forced migration in world history, had another pernicious legacy that persists. It may not have created European racism against Africans, but it certainly entrenched it. As Orlando Patterson demonstrated in his magisterial 1982 study, Slavery and Social Death: A Comparative Study, slavery existed in many parts of the world before the Atlantic slave trade began and was not confined to Africans. Indeed, studies show that in 1500 Africans were a minority of the world's slaves.

The tragedy for Africa is that the enslavement of Africans expanded as the enslavement of other peoples was receding. By the 19th century slavery had become almost synonymous with Africans, so that the continent and its peoples carried the historical burden of, and the contempt accorded to, slaves and despised social castes and classes. In short, it is this very modernity of African slavery that left Africans in the global imaginary as the most despised people on the planet, relegated to the bottom of regional and local racial, ethnic, and color hierarchies.

This has left the scourge of superiority complexes among the peoples of Europe and Asia toward Africans, and of inferiority complexes among Africans and peoples of African descent in the diaspora. The latter sometimes manifests itself in an obsessive colorism that can degenerate into mutilations of the black body through skin lightening and other perverted aspirations for whiteness.

It is also evident in inter- and intra-group antagonisms in diaspora locations between the new and historic African diasporas, between recent continental African migrants and African Americans, so painfully and poignantly captured in Bound: Africans vs African Americans, a documentary film by the Kenyan-American filmmaker Peres Owino. The documentary attributes the antipathies, antagonisms, and anxieties that shape relations between the two groups to a lack of recognition of the collective traumas of each other's respective histories of slavery and colonialism.

The Atlantic slave trade and slavery left legacies of underdevelopment, marginalization, inequality, and trauma for Africans and African diasporas. This has engendered various demands for restitution and redemption. Demands for compensation to the descendants of enslaved Africans in the Americas and Europe have been made since the abolition of slavery in the Americas, captured in the United States in the prosaic claim for “forty acres and a mule.”

In the United States, Representative John Conyers started the reparations campaign in Congress in 1989. Every year he introduced a bill calling for the creation of a Commission to Study Reparation Proposals for African Americans. Not much had been achieved by the time he retired in 2017. But in the interim seven states proceeded to issue apologies for their involvement in slavery (Alabama, Delaware, Florida, Maryland, New Jersey, North Carolina, and Virginia). Some private institutions followed suit, such as JP Morgan Chase and Wachovia, as did a growing number of universities, such as Georgetown.

Claims for reparations found a powerful voice among some influential African American intellectuals and activists. One was Randall Robinson, founder of the lobbying organization TransAfrica, who made a compelling case in his book, The Debt: What America Owes to Blacks. In 2014, the incisive commentator Ta-Nehisi Coates reignited the national debate with a celebrated essay in The Atlantic magazine, “The Case for Reparations.”

In 2009, shortly after President Obama assumed office, the US Senate unanimously passed a resolution apologizing for slavery. The United Nations Working Group of Experts on People of African Descent encouraged the United States to look into the issue of reparations. But opposition to reparations remained the majority view among Americans; in a 2014 survey only 37% supported reparations.

In the charged political season of 2019, ahead of the presidential elections of 2020, reparations has risen up the national agenda as never before. Several leading Democratic Party presidential candidates (Cory Booker, Tulsi Gabbard, Bernie Sanders, and Beto O'Rourke, among others) have openly embraced the reparations cause. In the meantime, the reparations debate seems to be gathering momentum in private institutions, including universities, buoyed by the unveiling of some universities' links to slavery, the radicalizing energies of the Black Lives Matter movement, and mounting resistance to resurgent white supremacy.

The Caribbean region boasts one of the most vibrant reparations movements in the Americas. This can partly be explained by the fact that the demands are directed not at the national governments, as in the United States, but at Britain, the former leading slave-trading nation and later colonial power over some of the Caribbean islands. Also, the Caribbean enjoys a long tradition of Pan-African activism.

The call by Caribbean leaders for European countries to pay reparations became official in 2007 and was subsequently repeated by various heads of state in several forums, including the United Nations. Hilary Beckles became the leading figure of the Caribbean reparations movement (he is a former colleague of mine at the University of the West Indies, where we both joined the History Department in 1982 and where he currently serves as Vice Chancellor). In 2013, he published his influential book, Britain's Black Debt: Reparations for Caribbean Slavery and Native Genocide. That same year, the CARICOM (Caribbean Community) Reparations Commission was created.

In Europe, the reparations movement has been growing. Black British campaigns intensified and reached a climax in 2007, the 200th anniversary of the British abolition of the slave trade, when Prime Minister Tony Blair and London Mayor Ken Livingstone offered apologies for Britain's participation in the Atlantic slave trade.

In 2017, the Danish government followed suit and apologized to Ghana for the Atlantic slave trade. But apologies have not found favor in countries such as Portugal, Spain, and France, which participated actively in this monumental business of human trafficking. And even in Britain and Denmark, reparations themselves have made little headway.

African states have exhibited a conflicted attitude towards reparations. On the one hand, they have shown eagerness to call on the Atlantic slave-trading nations of Europe and the slave-holding societies of the Americas to pay reparations to Africa. The African World Reparations and Repatriation Truth Commission, established in 1999, put the figure at a staggering $777 trillion. At the global level, the issue of reparations was a major subject at the 2001 UN World Conference against Racism, Racial Discrimination, Xenophobia and Related Intolerance held in Durban, South Africa.

In 2010, the renowned Harvard scholar Henry Louis Gates published an essay in The New York Times in which he raised the thorny question of whether reparations should also be extracted from the Africans who were involved in the Atlantic slave trade. Few African leaders have been prepared to apologize for their societies' complicity in the slave trade. In 1999 the President of Benin was among the first to apologize to African Americans. Ghana followed suit with an apology to African Americans in 2006. In January 2019, Ghana's President Nana Akufo-Addo declared 2019 “The Year of Return” to mark the 400th anniversary of the arrival of the first captive Africans in Hampton, Virginia.

The responsibility for the Atlantic slave trade falls on the shoulders of many state and elite actors in Africa, Europe, and the Americas, although the major benefits of slavery accrued to the elites and states of the Americas and Europe. This suggests differentiated levels of responsibility for reparations and redemption. For their part, African governments in the regions involved in the Atlantic slave trade must seek the redemption of apology to the historic African diasporas in the Americas, through the regional economic communities and the African Union.

Only then can the process of healing and reconciliation for the sons and daughters of Africa on both sides of the Atlantic begin in earnest. Acknowledgement and mutual recognition between Africa and its diasporas should be sustained through the transformative power of education. Teaching the history of the Atlantic slave trade, slavery in the Americas, and the contributions of the historic African diasporas must be incorporated in the curriculum at every level across the continent.

Deliberate efforts must also be made by African governments and institutions to facilitate and promote multidimensional engagements with the historic diaspora. The designation of the diaspora by the African Union as Africa’s sixth region must be given teeth in terms of political, economic, social and cultural rights.

But the charge goes beyond governments. The private sectors and civil societies in African nations and the diaspora must also establish mutually beneficial and empowering modalities of engagement.

There are encouraging signs of new intellectual and artistic bridges being built by the new African diaspora, who straddle in their upbringing, identities, experiences, and sensibilities the sociocultural geographies and political ecologies of continental Africa and diaspora America. A few examples will suffice.

There's no better accounting of the divergent yet intimately connected histories of Africa and America from the 18th century to the present than Yaa Gyasi's sprawling and exquisite first novel, Homegoing. It tells the story of two sisters, one sold into slavery and the other who remained in West Africa, and the parallel lives of their descendants. Another skillful exploration of, and painful reckoning with, slavery can be found in Ayesha Harruna Attah's The Hundred Wells of Salaga, set in a town that hosted a bustling market of the Atlantic slave trade.

Recounting the travails of an enslaved African traversing the expanse of the black Atlantic is Esi Edugyan's soaring novel, Washington Black. Coming to contemporary African migrants, there is Imbolo Mbue's Behold the Dreamers, set in New York, which captures the aspirations, anxieties, agonies, and assaults experienced by the new diaspora, and its awakening to the routine hypocrisies, hardships, harassments, and opportunities of American life.

For me, the project of reconnecting Africa and its global diasporas in truly transformative and mutually beneficial ways has inspired the research on diaspora histories that I've been engaged in for the past two decades. This work led to the establishment of the Carnegie African Diaspora Fellowship Program, which facilitates the engagement of African-born academics in Canada and the United States with universities in six countries (Ghana, Nigeria, Kenya, Tanzania, Uganda, and South Africa). The program is being expanded into the Consortium of African Diaspora Scholars Programs, which seeks to promote flows of scholars from both the historic and new diasporas from anywhere in the world to anywhere in Africa.

As I left the Commemorative Weekend in Hampton to fly back to Kenya last night, I was filled with deep sadness at what our brothers and sisters have had to endure over the last 400 years of their sojourn in the United States, but also with immense pride in what they have been able to achieve against all odds. Let me put it graphically, as I did at a recent training seminar for African diplomats: in 2017, the 40-odd million African Americans had a purchasing power of $1.2 trillion, compared to $2.2 trillion for the 1.2 billion Africans on the continent. If African Americans were a country, they would be the 17th richest in the world, richer than Nigeria, South Africa and Egypt combined.

Surely, the continent with its abundant human and natural resources can do better, much better. Africa and the diaspora owe each other principled, not transactional, solidarity if we are to navigate the complex and unsettling demands and disruptions of the 21st century better than we fared during the last half millennium, characterized by the disabling histories of slavery, Jim Crow segregation, and white supremacist backlashes in the United States, and of colonialism, neocolonialism, and postcolonial authoritarianisms in Africa. To echo Kwame Nkrumah's mid-20th century dream, let's strive to make the 21st century truly ours!

For generations now in most of the world, cannabis has been a prohibited substance, one often vilified as a noxious bringer of addiction. Yet change is coming fast. Several states have already amended statute books to soften laws relating to cannabis (whether through allowing its medicinal use, decriminalising its possession and use, or full-blown legalisation), and many others are considering amendments. Age-old consensus on the substance has cracked, although many remain deeply opposed to any push to “free the weed”.

In Kenya, debate has grown strong too, driven by, among others, the late Ken Okoth, the MP for Kibra, who pushed for a bill legalising and regulating the substance before his sad passing. This article traces the history of this controversial substance and policy towards it, with a particular focus on Africa, and looks at the likely impact – good and bad – as a botanical outlaw is increasingly rehabilitated.

Beginnings

Cannabis, also known as marijuana, has long been used by humans as medicine, food (its seeds are highly nutritious, as is the oil derived from them), and importantly as fibre. Long before most Europeans were even aware of the psychoactive properties of this plant, cannabis was the major source of fibre used to make the rope and rigging that powered navies in the era of European imperial expansion.

Rather than bringing to mind this marine history, however, for most people around the world the name cannabis conjures up images of a haze of psychoactive smoke emanating from the mouths of such legendary “stoners” as Bob Marley and Fela Kuti. It also conjures up the characteristic leaves of the cannabis plant – odd-numbered combinations of serrated spears that have become symbolic not just of cannabis culture but of a much wider culture of defiance.

Even the taxonomy of the plant is controversial, as researcher Chris Duvall (author of a new book, The African Roots of Marijuana) has shown. An orthodox theory holds that there is one species – Cannabis sativa – that has been cultivated and used in different ways: for fibre, for food and for its psychoactivity. Such a theory has suggested a racialised view of cannabis usage – that industrious Europeans built great seafaring empires out of hemp, while other people used it to get high.

However, a two-species theory – that there is Cannabis sativa more suited to producing fibre, and Cannabis indica more capable of psychoactivity – gives a more accurate botanical view of why cannabis is valued in different regions for different purposes: sativa and indica varieties simply grew in different climates, the latter more at home in warmer regions.

Whatever the taxonomic truth, cannabis originated in Eurasia, and palaeobotanical evidence suggests that people were already making use of cannabis in East Asia 12,000 years ago, though in what ways is now impossible to discern. It seems likely that cannabis was being farmed in East Asia 6,000 years ago, while Koreans appear to have been making fabric from it around 5,000 years ago.

But people have also long been aware of the psychoactive qualities of cannabis, and a 2,700-year-old burial site in northwestern China has preserved a large cache of potent cannabis, possibly for ceremonial or shamanic use. In South Asia, there is also a long history of cannabis usage for fabric and for intoxication, a distinction emerging in Sanskrit between sana and bhanga, the former a source of plant fibre, the latter a source of intoxication and medicine. Bhang, of course, is now a widely dispersed term (in East Africa too) for intoxicating cannabis.

This plant and its usage then took many different routes around the world, routes that owed much to the maritime and overland trade networks that transported cannabis and its cultures of use. Around 5,000 years ago, cannabis was carried westwards as far as Egypt through overland trade linking to Mesopotamia and beyond, while Indian Ocean trade networks brought cannabis to East Africa's coastline, where it has had a presence for at least a thousand years. From there it spread inland and into many different African cultures of consumption; the use of the term bhang in much of the region is suggestive of its Indian Ocean origins, although many local terms suggest multiple possible routes of entry.

The Atlantic slave trade was another vector of its spread: slaves departing from the Angolan coast sometimes carried cannabis seeds, which led to its spread in Brazil. War has been yet another vector, the plant's popularity in West Africa owing much to the return of soldiers who had been exposed to its consumption while fighting in Asia during World War II. In Europe, the use of cannabis for intoxication was initially an elite pursuit of Bohemians in the nineteenth century, the likes of Baudelaire popularising experimentation with the drug in an age of intense European intellectual interest in “exotic” mind-altering substances that also included opium.

While cannabis has many different cultures of consumption, there has been something of a globalisation of its appeal over the twentieth century, especially through its link to various types of music. Long associated with jazz in the US, cannabis' popularity was also boosted by musicians such as Bob Marley and Fela Kuti. For Fela Kuti, cannabis carried much symbolism as a mark of defiance against authority, and this has long been a core part of the herb's appeal for many consumers within various countercultures.

Much of this aura of defiant cool derives from the fact that for over a century cannabis has itself been an outlaw, as many jurisdictions, both internationally and nationally, have prohibited the production, trade and use of this controversial plant. Yet these prohibitions are now under threat as never before: even countries that have long fought and promoted the “war on drugs”, such as the United States, are experimenting with various forms of decriminalisation and legalisation, while other countries still try to hold firm against calls for legislative change.

Regulating the herb

For as long as humans have used mind-altering substances, there have likely been attempts to regulate their use. Whether alcohol, opium or cannabis, the psychoactive qualities of such substances mean that they are usually viewed with great ambivalence – substances that can ease worries and bring pleasure, yet also bring harm and danger. Such ambivalence has spurred efforts to restrict access to those seen as able to use them responsibly, or to forbid their use completely.

The widespread claim that historically East African societies restricted access to alcoholic beverages and khat to elders reflects concerns over youthful drinking and chewing. It also suggests that similar types of restrictions and regulations might have been in place for cannabis in East Africa and elsewhere.

However, the formal prohibition of cannabis is mostly a twentieth-century story, albeit with a number of precursors, including the Merina king Andrianampoinimerina prohibiting it in the late eighteenth century in Madagascar on the grounds that it made his subjects “half-witted”. Its prohibition story is linked to that of opium, and to the international calls for regulation and prohibition that grew strong after the nineteenth-century Opium Wars, in which the British compelled China, through force, to accept imports of opium from India in the interests of their imperial economy.

Unease with the free trade in opiates led to the International Opium Commission conference in Shanghai in 1909, and later to the International Opium Convention, signed in 1912, which called for controls and restrictions on the trade in opiates and cocaine. This marked the start of the internationalisation of drug control. Cannabis was not covered until 1925 when, at the request of Egypt, it was added to the conventions and its export restricted. Subsequent conventions (including the 1961 Single Convention on Narcotic Drugs) further globalised attempts to suppress a growing range of psychoactive substances, including cannabis.

This international story of drug conventions and cannabis prohibition played out differently in various countries, the US history of marijuana prohibition and its link to characters such as Harry Anslinger of the Federal Bureau of Narcotics being the most familiar. Historians such as Isaac Campos and Jim Mills have also analysed the equally fascinating history of cannabis policy in Mexico, India and the UK.

In African countries, most state laws and policies proscribing the use, trade and production of cannabis, opiates and cocaine first emerged during the colonial period, particularly in the 1920s, though in some colonial states these laws were put on the statute books even earlier. The major mind-altering substances of interest to African and colonial officials before then had been alcoholic drinks, as well as kola nuts and khat. The lucrative kola trade had been regulated and taxed since the end of the eighteenth century by states administering foreign trade, such as the Asante Kingdom in today's Ghana. Alcohol use had long been prohibited in many of Africa's Muslim societies and became the subject of intense international debates and domestic control at the end of the nineteenth century, when the trade and production of distilled spirits in particular became the target of state regulation.

African control efforts on cannabis, opiates and cocaine generally commenced only after the national and international debates on distilled spirits had quietened. In 1927 the first Nigerian Dangerous Drugs Ordinance restricted the use of and trade in cannabis, opium and coca products to medical and scientific purposes and put them under the supervisory powers of the chief medical officer of the colony. The law made the unlicensed use of and trade in these drugs a crime.

In Kenya there is an earlier history. An Opium Regulations Ordinance was put in place in 1902, intended to restrict the import and production of opiates to permit holders, with sales left to the discretion of medical officers. “Opium” included a wider range of substances, including “bhang”, the main term used in East Africa then and now for cannabis. This ordinance had little bite, and pressure grew from colonial officers in western Kenya (where much cannabis was grown and consumed) for possession to be outlawed too and for harsher penalties for those producing or trading such substances without permits. This pressure in part led to the Abuse of Opiate Ordinance of 1913, which attempted to eradicate illicit consumption not just of opium but of a range of opiates, as well as cocaine and cannabis.

The Kenya colony and its opiate ordinances apart, drug ordinances did not usually grow out of colonial anxieties about these drugs' threats to health, or out of a paternalistic concern to “protect Africans” from foreign substances, as had been the case with distilled spirits. In South Africa, debates on the use and control of opium were closely tied to the growing gold mining industry in the Transvaal, as the drug was feared to decrease the productivity of South Africa's workforce. In 1923 the South African government even urged the League of Nations to classify cannabis as a dangerous substance requiring international control.

In effect, most African drug laws were based on colonial blueprints, such as the Hong Kong Drug Ordinance, which was circulated among British colonial governments in the 1920s. These laws often preceded local concern with cannabis, opiates and cocaine and served more to satisfy the legal obligations of governments under new international laws, such as the 1925 and 1931 Geneva Opium Conventions. In the course of the first half of the twentieth century, most African colonies were therefore signed up to a range of international treaties on drug control without there being much local concern or debate about the laws transposed into domestic legal codes, Kenya and South Africa excepted.

This situation changed somewhat by the late 1950s and early 1960s, when most African countries gained political independence. This period coincided with wider use of and growing public concern about cannabis, and saw the first effective government policies on the drug. In West Africa, concern was driven by medical professionals who encountered cannabis-smoking ex-soldiers among their patients. Doctors such as Thomas Adeoye Lambo, Africa's first Western-trained psychiatrist, started exploring Africa's new drug and addiction problems in their research and public speeches.

Cannabis addiction also became a key discussion point at the newly founded Pan-African Psychiatric Congress and its African Journal on Psychiatry (Lambo 1965; Lewis 1975). This new medical and media interest in cannabis led to important policy changes in some countries, such as Ghana and Nigeria. In the latter, a coup d'état brought to power a group of reform-minded soldiers who aimed to address cannabis use with the draconian Indian Hemp Decree of 1966, shortly before the country slid into civil war.

Cannabis prohibition thus became firmly embedded in the statute books of most African nations. However, this legal uniformity belied continuing ambivalence towards the substance. Legality or illegality, of course, rarely perfectly matches societal attitudes, and many continued to view the substance positively in various ways, including as a traditional medicine and as a recreational substance associated with popular figures such as Bob Marley and Fela Kuti. Furthermore, its illegality only increased its reputation as a symbol of defiance against authority. For many, cannabis law has little legitimacy – or power, given the lack of state capacity to police it effectively – and the plant has grown to be a vital part of the rural and urban economy in much of Africa. On the other hand, many, for social, cultural or religious reasons, have bought into the idea of cannabis as socially and medically harmful and something that should be restricted.

African debates

In such a cultural climate, legalisation or decriminalisation campaigns were unlikely to take root beyond the margins. Indeed, in an earlier book (2012) we suggested that debate on drug policy had yet to take off in most African countries. Yet things appear to be changing, as policy change even in parts of the USA – long the leader in the “War on Drugs” – has global repercussions.

On a more regional level, the activities of organisations like the West African Drugs Commission have also expanded the narrative away from a simple focus on repressive supply-side policy in relation to drugs of all types. In East Africa too there are moves towards alternative “harm reduction” policies, especially in regard to heroin use in cities like Dar es Salaam and Mombasa, and more recently in other cities in the region. In Africa, as elsewhere, the international consensus around drug policy is fracturing, especially in regard to cannabis.

Since 2011, an annual cannabis march has been held in Cape Town, increasing markedly in popularity and symbolising the seismic changes occurring in cannabis legislation in South Africa, perhaps the African country with the strongest drug counter-culture. As in parts of the USA, permitting medical use of cannabis appears to be the first step in this process, and South Africa is developing provision in this regard. In addition, a recent court case in the Western Cape has raised hopes further that legalisation is around the corner. Several activists (including those from the “Dagga Party”, dagga being the common South African term for cannabis) brought a case “seeking a declaration that the legislative provision against the use of cannabis and the possession, purchase and cultivation of cannabis for personal or communal consumption is invalid”.

In March 2017, the court ruled that there should be a stay of prosecutions for possession of small quantities of cannabis and for use of cannabis in private settings, and gave the government 24 months to amend the law accordingly. On 18 September 2018, South Africa's Constitutional Court confirmed this judgement, making the growing and use of cannabis for private purposes legal with immediate effect, although the exact implementation of the decision is as yet unclear. While there are no doubt many more hurdles for the campaigners to overcome (the most prominent of whom are a white couple known as the Dagga Couple), many are already eyeing a potential legal market for cannabis in South Africa, leading some to fear the predation of corporate interests.

Elsewhere too, there are increasing signs of shifting policy. Linked to the change in South Africa, Lesotho, a major supplier of illegal cannabis to the South African market, has recently granted a licence to a South African firm to cultivate medical cannabis. Malawi, another major cultivator of illegal cannabis, is also moving towards a legal hemp industry. While hemp consists of non-psychoactive varieties of the cannabis plant, even this move required overcoming resistance in Malawi's National Assembly to an initiative based around so infamous a plant. Ghana, ranked the country with the highest rate of cannabis consumption in Africa, is also seeing rising debate on cannabis policy, and even calls for a cannabis industry to be established to take advantage of legal opportunities around the world. Debate seems more muted in Nigeria, a country with some of Africa's harshest drug laws, although it is gaining ground there too.

In East Africa, debate is also increasingly conspicuous in news reports and in the wider media, especially in Kenya. There, calls for full legalisation have recently been made, including by Ken Okoth, and by political analyst Gwada Ogot, who took a petition for legalisation to the Kenyan senate. Okoth argued for Kenya to benefit economically from an export market for cannabis, suggesting that the “government should stop wasting money on sugarcane farming and legalise marijuana instead”. Ogot focused more strongly on the medicinal benefits of cannabis, and sought in his petition to have cannabis removed from the list of scheduled substances, and for the establishment of a regulatory body to oversee a legal market. He argued that: “The plant is God’s gift to mankind just as the many minerals he has put in store for Kenyans. The banning was purely for commercial interests with pharmaceutical firms seeking to control the medical industry during the first and second world wars.”

This petition was debated in the Senate, Kenya's upper house of Parliament, in February 2017, where it garnered much interest in the media. While the debate in the Kenyan Senate was somewhat inconclusive, and decriminalisation is unlikely, at least in the near future, that such a petition was heard at all marks a shift. Debating the issue confers at least some legitimacy on a topic that until recently many Kenyans would have found either shocking or comical.

What all these debates and apparent policy shifts suggest is that the issue is a live one in African countries. However, it seems likely that the debate will gain more traction in some countries than in others, and we should be cautious in generalising across such a diverse continent. In many countries there are so many more pressing issues than cannabis that it is unlikely to garner sufficient attention, and pushing through legislative change will require much energy and many resources. For this reason, some might see legalisation as fine for rich countries like the USA, with their greater capacity to cope with the consequences, but hardly sensible in countries with so many other challenges.

As we have seen, economic reasoning appears to underlie some of the push towards liberalised policy, with some eyeing lucrative futures based around a cannabis industry. Economic interests have, of course, long been important in policy debates around psychoactive substances, with governments often balancing tax and other forms of revenue against medical and social harms with substances like tobacco and alcohol. And historians and anthropologists alike have emphasised the importance of analysing drugs as commodities. This has certainly been true in the case of alcohol, but also in the case of khat. Like cannabis, khat's harm potential is ambiguous, allowing governments to justify both restricting it and developing a market for it. Producer countries like Ethiopia and Kenya have long resisted making khat illegal, even if their governments have been suspicious of the substance. In relation to cannabis, we can see how in countries like Lesotho and Malawi, where the cannabis industry forms a major proportion of the national economy, the temptation to make the crop legitimate and boost national coffers might be attractive. A country like Kenya, on the other hand, cultivates cannabis, but not on the same scale; it forms only a minor part of the economy, and is unlikely to garner a strong export market anyway.

It seems possible that where economic logic is not an especially pressing factor, the political will to change cannabis policy is less likely to materialise. In countries like Kenya, concern with the harmful effects of cannabis, as well as the cultural conservatism of many in government and in the general population, will form a substantial roadblock in the way of reform. In fact, Kenya has recently banned shisha smoking on health grounds, suggesting that in terms of policy the predominant logic is still one of restriction rather than liberalisation. Yet change in relation to cannabis law is coming thick and fast, and given growing support amongst the political class, it cannot be ruled out in countries like Kenya. Indeed, the sad passing of Okoth has encouraged others to follow his example in calling for such change.

That change to cannabis law in Kenya is now being considered might prove a strong legacy to the memory of Okoth. Stronger yet would be taking seriously too his calls for a properly regulated market, one that would offer protections to the vulnerable, and ideally protect against predation from corporate interests.

Cannabis has provided many smallholder farmers with livelihoods – albeit illicit ones – throughout much of Africa and elsewhere in the world. As Chris Duvall argues, African cannabis farmers have innovated much in its cultivation, even under the cover of illegality. It would be a shame if legality meant that powerful interests moved in to seize the fruits of this innovation.


Corruption is a political vernacular in Kenya today, as Keguro Macharia once described it. In public discourse, corruption is described as something both pervasive and cunning. You get a sense of this in some of the metaphors that we use to describe it: “cartels fight back” or corruption is a “virus” or a “cancer” that has taken over the country’s body politic. The metaphor of cancer is a particularly intriguing one, as it simultaneously renders corruption as invisible, powerful and biological.

The effect of this metaphorising of corruption is that it makes the phenomenon difficult to conceptualise concretely. It has suffered an unfortunate definitional flattening, a slippage which this article will attempt to address. We need the definition of corruption put back into sharp focus so that its contours, its peaks and troughs, are clearly accentuated. Only then can we begin to figure out a way through this moral morass.

First, the misconceptions. What is corruption? Is it that young girl in Machakos praying that her brother, a police constable, will finally be deployed to a route rich enough in opportunities for extortion that she can finish her course at the University of Nairobi? She has already lost a year. Is this girl, or her brother, corrupt?

Is corruption the kickbacks that a state employee extracts from suppliers to supplement his income, which has been stretched to breaking point by a clan of dependents? What about paying extra to bump your loved one up a surgery waiting list at a public hospital?

Or is corruption something more formally executed, using laws, imprests and tenders? Is it the transfer of taxes to private use to fund a legislator’s trip to get cancer treatment abroad — over and above his taxpayer-funded premium health insurance scheme? Or is it the transfer of strategic national resources, like oil blocks, mining licences and public utilities, into private hands?

What makes one describe a traffic police officer taking a Sh50 bribe from a matatu driver as corruption, but the collusion by state officers to incur public debt at the scale of an entire country's GDP as macro-economic management? And what makes the transfer of government trustee land into foreign private hands “Foreign Direct Investment”, while the same transaction by a native is termed a “land grab”, i.e. corruption? Is it the scale of the land in question, such that acquiring 100,000 hectares is an “investment” but ten hectares is a “land grab”? Or does it have something to do with power, where the passport of a (former) imperial power paves the way for its holder, signalling to local elites that this is the master's son and that they had better make room for him or else prepare for violent expropriation and/or occupation?

Where is the line? What are the criteria? Who decides? Better yet, who should decide?

Let us turn to some linguistic definitions.

Corruption is archaically and simply defined by the Oxford dictionary as “a state of decay or putrefaction”. But today's usage has redefined the term and given it a completely different meaning, just like other modern-day terms that are used primarily in their conceptual rather than their linguistic sense, including Terrorism, Extremist, Human Rights, Black, White, etc.

So, what is corruption in its politically loaded sense? Yasmin Dawood, in her classic paper “Classifying Corruption”, captures it as follows:

“Scholars have categorized various kinds of corruption. Thomas Burke has distinguished three kinds of corruption: quid pro quo, monetary influence, and distortion. Zephyr Teachout has identified five categories: criminal bribery, inequality, drowned voices, a dispirited public, and a lack of integrity. Deborah Hellman has described three principal kinds of corruption: corruption as the deformation of judgement, corruption as the distortion of influence, and corruption as the sale of favours.”

Clearly, corruption is a loaded term that can be unpacked extensively. But the single and most broadly used definition of corruption is articulated by Jakob Svensson as follows: “The misuse of public office for private gain.”

Most of the literature available on corruption engages the problem at the level of its outcomes – in other words, its results, i.e. “private gain”. But there seems to be little effort to analyse the root causes.

Given the widespread nature of corruption, and given that it has no definitive social differentiators like class, income, education level or geographical region, it follows that the possible causes are either intrinsic (related to the nature of man or woman) or systemic (attributable to the prevailing socio-political and economic environment).

The intrinsic motivation can be explained in a general sense by human beings’ deeply instinctive proclivity to possess. In short: greed. That is simple enough.

But looking at all the scholarly definitions of corruption, what we witness in Africa defies any of those classifications completely.

Nigeria’s Sani Abacha did not benignly “misuse public office for private gain”; he swept the treasury coffers clean. He decimated the public office. Kenya’s president from 1978 to 2002, Daniel arap Moi, and his coterie of ministers did not “quid pro quo, influence or distort”; they ingested entire institutions.

The same goes for the successive administrations of Mwai Kibaki and Uhuru Kenyatta. Their voracious troops of army ants have left no tree standing, literally. No tree in the forest was considered too sacred to spare. Their appetite is unlimited. Nothing was, or is, too big or too small: not the strategic grain reserve, not the sports kitties (one of the few remaining routes out of poverty), not even programmes like the Youth Fund, the National Youth Service or Kazi kwa Vijana. (As a resident of Eastlands I can attest that these programmes had a direct impact on the ground, with all the associated positive results of wealth redistribution, e.g. drastic drops in crime, increased economic activity, and rising optimism.)

None of this is corruption, not by a mile. It is institutional cannibalism.

Given that it is our own, our best and brightest sons and daughters, who are cannibalising our own institutions, I will go further and describe it as a socio-pathological condition that I am calling autosarcophagy – a coinage from Greek roots meaning “eating one's own flesh”.

This makes it a sociological condition. Why have we failed to evolve from a primitive agrarian society? It is obvious from observation that the trappings of civilization were quickly pasted upon us, and this is perhaps why, irrespective of our level of education, we struggle to perceive institutions. A native administrator perceives institutional resources through a primitive instinctive lens rather than through an evolved intellectual lens. Because the focal point is instinctive, the reaction becomes an irresistible urge to consume or possess this honeycomb, as instinct cannot perceive a discarnate boundary such as that of person, institution and/or system.

Therefore, as these officers sit across from the institution’s treasury, in the same way they sit across from the season’s grain harvest in the granary on their farm, they cannot help but raid it. If their signature can transfer funds from the institution’s bank account in the same way their signature facilitates transfer from their personal accounts, then what possibly makes the institution’s bank account different i.e. not a personal bank account?

Granted, there is scholarly consensus that Homo Economicus (the latest and arguably the most wicked of all the evolutionary stages of mankind) is a sociopath, whose folly was presciently captured by Jonathan Swift in his 1729 satirical essay suggesting that the impoverished Irish might ease their economic troubles by selling their children as food for rich gentlemen and ladies.

Still, the existence of this disorder in the West should be no consolation because Homo Economicus in Western society commits his genocide and fetal cannibalism in the villages of the people of the East and Global South. But African Homo Economicus commits filial cannibalism. How else would we term indenturing our own children as chattel by taking out high interest loans and sovereign bonds on their futures and then diverting those funds into private accounts, or accepting payments to dump nuclear and toxic waste in our backyards?

As for the second systemic cause? Capitalism.

The unstoppable march toward the absolute implementation of this pernicious socio-political and economic order portends nothing but misery for humanity. The uncritical application of its core doctrinal pillar, which calls for individual freedom of ownership, ultimately results in what Marx called “primitive accumulation”, more recently reformulated by Prof. David Harvey as “accumulation by dispossession”.

In our context, this manifests as the transfer of collective (public) wealth into the hands of private, well-connected owners through privatisation of public resources like water, energy, minerals and public infrastructure, including healthcare and utilities, and the transfer of individual citizens’ wealth into the same private hands through usury, taxes and inflation.

Essentially, this creates a society where the only way one can get bread and water to feed one's child is by taking the bread and water of someone else's child. This may be the reason why there is so little investment in analysing the root cause of the problem of corruption — it would reveal the witch's cloven hoof. It is the nature of the beast. It is capitalism.

And as the population increases and the walls around the public’s resources continue to be erected and extended, the masses’ incomes and wealth continue to be systematically harvested through quantitative easing (printing fiat), bonds (usury) and taxes, and then consolidated and transferred to the top through the global banking and financial system. It results in a fight at the bottom that grows ever more vicious, manifesting itself in perpetual war, slavery, dehumanising poverty and misery.

The only way to halt this accelerating downward spiral is to reject Man as Sovereign, end capitalism, and establish a system that will nurture the (collective) Man using divine provision (natural resources), and that will protect private property and wealth creation from expropriation through taxation, usury and inflation.

Simply, change the system.

While most Kenyans associate formal education with institutional schooling, a significant number of their compatriots have opted for homeschooling. Homeschooling is not a specific curriculum, but rather the implementation of a curriculum by the parents themselves and/or their own directly chosen delegates. With the dominance of institutional schooling, many now view homeschooling as part of alternative education.

Several Kenyan families have homeschooled their children from the early 1990s using a variety of curricula, including 8-4-4, I.G.C.S.E., and Accelerated Christian Education. A number of Kenyan children have completed their high school education through homeschooling and have been admitted to universities inside and outside Kenya, and several are already employed, while others have ventured into entrepreneurship.

The Constitution of Kenya recognises the right of the child to education. Article 43 (1) (f) lists education as one of the fundamental rights of every person. Furthermore, Article 53 (1) (b) states that every child has the right to free and compulsory basic education. Nevertheless, neither of the articles limits education to the school environment.

However, homeschooling Kenyan parents have expressed concern over provisions in the Basic Education Act 2013 that presume education can only be attained through institutionalised schools. For example, Article 28 of the Act, titled “Right of Child to Free and Compulsory Education”, states that “The Cabinet Secretary shall implement the right of every child to free and compulsory basic education” (Article 28(1)), but the tenor of the Act is that such education can only happen in the context of an institutional school.

The homeschooling community in Kenya is already feeling the effects of the Basic Education Act 2013 limiting education to the school environment. The Daily Nation carried a story on 18th February this year about the arrest of Silas Shikwekwe Were in Malaba; he was later arraigned in a Butali court in Kakamega County for allegedly abdicating his duty to enroll his children in school. Mr. Were and Mr. Onesmus Mboya Orinda filed Constitutional Petition No. 236/19 at the High Court, Milimani Law Courts, asking the court to recognise homeschooling as a legal and viable alternative method of according children in Kenya their right to education. They argue in their petition that the provisions of the Basic Education Act 2013 requiring a parent to enroll a child in an institution of learning limit the scope of what education is. They aver that sections of the Basic Education Act 2013 infringe on the rights of parents to determine the forum and manner in which their children will be educated. During the first mention of the Constitutional Petition on 25th June 2019, the courtroom was packed with homeschooling parents.

A number of considerations have led some Kenyan parents to choose homeschooling over institutional schooling.

A short history of schools

Schools have been a part of human societies for thousands of years. Among some of the peoples of Africa, the age-group system was used to pass on knowledge, skills and attitudes to young adults. It entailed a degree of deliberate, formal passing on of knowledge, skills and attitudes in a manner reminiscent of a contemporary school. There were schools in the ancient societies of Egypt, India, China, Greece, and Rome. The Byzantine Empire had an established school system until the fall of the Empire in 1453 C.E. In Muslim societies, mosques combined both religious observances and learning activities, but by the 9th century the madrassa arose as an institution separate from the mosque. In Western Europe, a number of cathedral schools were founded during the Early Middle Ages in order to teach future clergy and administrators. Mandatory school attendance became common in parts of Europe during the 18th century, with the aim of increasing the literacy of the masses.

Formal schools became widespread only during the past two centuries. With the advent of the Western Scientific Revolution, certain fields of knowledge became highly specialised, making it significantly more difficult for parents to help their children to master them. The rise of factories during the Industrial Revolution led to the need for mass formal schooling to inculcate requisite habits in the workforce – punctuality and adherence to instructions, among others.

Education is the primary responsibility of parents, not schools

The word “education” comes from the Latin word ēducātiō, which literally means “breeding”, “bringing up”, or “rearing”, all of which are primarily associated with parents rather than with schools. Indeed, theorists of education frequently define education as the deliberate, planned equipping of the young with knowledge, skills and attitudes that enable them to participate effectively in the life of society. Again, such equipping is primarily the responsibility of parents, not schools. For most of human history, parents have been in charge of their children's education. Their homes served as spaces for imparting social values, etiquette, and particular trades. Families were known for certain trades; English surnames such as Tailor, Cook, Baker, and Smith partly reflect this practice.

Despite the rise of universal compulsory education through schools, the responsibility of providing education primarily rests with parents as part of their wider responsibility to provide for their children. Parents who take their children to school are delegating rather than abdicating this responsibility, and this is evident in the practice of schools regularly meeting parents to brief them on their children’s progress. As such, parents who choose homeschooling are simply choosing to discharge their responsibility directly rather than delegating it to the schools.

Direct and consistent parental involvement in moulding character

In Philosophy and Education in Africa, R.J. Njoroge and G.A. Bennaars point out four dimensions of education: the cognitive dimension entails the acquisition of knowledge; the normative dimension has to do with the inculcation of values; the procedural/creative dimension involves the approach or methodology through which the knowledge and values are acquired; and the social/dialogical dimension reflects the fact that education is an interactive process within human groups rather than a solitary one.

Regrettably, in our day, many think that formal education is exclusively geared to equipping students with knowledge (the cognitive dimension). It is no wonder we have so many highly skilled people whose ethical orientation is grievously wanting. Many parents who choose homeschooling seek to be directly and consistently involved in moulding their children's character throughout their formal education, on the basis of the conviction that with good moral and mental habits, high academic achievement and success in a career are almost guaranteed.

There is consensus among theorists and practitioners of education that the ideal model of education is one in which the child gets maximum personalised attention in order to take care of his or her uniqueness. The Harvard educational psychologist Prof. Howard Gardner pointed out that human beings have multiple intelligences (“learning styles”), and that each of us uses one or two of them to learn most effectively. Following Gardner's approach, the US-based Institute of Learning Styles Research has identified seven learning styles, highlighting the various ways in which different people learn most effectively using their five senses.

The seven learning styles are print (looking at printed or written text), aural (listening), visual (looking at depictions such as pictures and graphs), haptic (touch or grasp), interactive (verbalisation), kinesthetic (whole-body movement), and olfactory (smell and taste). Schools typically focus on the three competencies referred to in the Western tradition as “the 3Rs” – reading, writing and reckoning (calculating) – and mainly approach learning from a verbal and logical perspective, thereby largely neglecting people whose learning styles cannot cope with this approach.

By the very nature of the size of a typical family, a homeschooled child gets much better personalised attention than a child in a typical Kenyan public school, where one teacher attends to tens of pupils in one lesson. When Kenya's National Rainbow Coalition (NARC) government introduced Free Primary Education in early 2003, the number of pupils rose dramatically, but the number of teachers, classrooms and other facilities by and large remained unchanged. The quality of learning was significantly compromised, and some short-staffed schools had to run shifts to accommodate the pupils. By and large, the school system moves pupils from class to class regardless of how much they have actually learnt, and the few who are required to repeat a year for extremely poor performance suffer the humiliation of doing so among their peers.

The dire implications of a grossly unhealthy teacher-to-pupil ratio quickly showed. From 2009, the Uwezo initiative implemented large-scale household surveys to assess the actual basic literacy and numeracy competencies of school-aged children across Kenya, Tanzania and Uganda, culminating in annual reports. A July 2013 newspaper headline on one of those reports declared: “Over 50 Per Cent of Class 8 Pupils can Barely Read – Report”. The article stated: “The report by Uwezo Kenya also reveals that over 50 out of 100 children in Classes Four and Five can’t comprehend stories written for class two pupils.” In its Sixth Annual Report, covering the year 2015, Uwezo observed: “Assessments across Kenya, Tanzania and Uganda have highlighted the learning crisis since 2010. The key observation has been that budgets and other inputs to learning have been increasing steadily, but learning outcomes have remained essentially stagnant.”

Personalised attention is critical for exceptionally gifted children and for children with disabilities. Exceptionally gifted children who master concepts and skills quickly grow bored when subjected to the average pace of learning: they might be able to complete the tasks assigned for one year in three months. Requiring them to sit in school and learn at the pace of the average child is torturous mass production, not education.

Children with disabilities often need special, even specialised, attention to learn effectively. The Kenyan public school system is grossly ill-equipped to provide education for children with autism, so parents of autistic children have to equip themselves adequately for the task of schooling them. The class size at Thika School for the Blind, where I went to school, was fifteen rather than the forty prescribed for a typical public school. In subjects such as maths, geography and biology, where teachers rely heavily on chalkboards and other visual teaching aids, children with visual disabilities would be left behind unless there was a resource teacher to offer extra support. At Thika, the teacher spent considerable time with each student, helping them to appreciate maps, diagrams, graphs and maths formulae. Parents of children with different disabilities ought to have the liberty to home-school them if they are able and willing to do so. Indeed, such liberty would affirm the right of children with disabilities to high-quality education, in line with Articles 27 and 54 of the Constitution.

History offers a number of cases of exceptionally gifted children who performed very poorly in school because the school system could not cater for their learning disabilities. People with dyslexia (reading and writing difficulties) or dyscalculia (difficulties with maths) are cases in point. The English scientist Michael Faraday is often described as having had dyslexia, and yet through personal study he made numerous ground-breaking discoveries and inventions, including the electromagnetic rotary devices that were fundamental to the development of the electric motor and the generation of electrical power. Albert Einstein is likewise said to have had difficulty in school due to dyslexia, and his achievements can be attributed to his ability to teach himself. Other famous Western scientists and inventors described as having had dyslexia include Alexander Graham Bell, Galileo Galilei and Thomas Edison.

A friend of mine confessed that he could not read at all by Standard 3; his first attempt at the then Certificate of Primary Education (CPE) exam, at a primary school in Nairobi’s Eastlands, yielded dismal results. His mother then took him to a high-cost primary school to re-do the CPE and he excelled: today he is a university lecturer in pure maths – the most abstract branch of mathematics.

I know of several parents with university degrees who have chosen to stay at home to provide their children with quality education. I am acquainted with a parent who holds a Master’s degree in educational psychology and who has chosen to home-school her children instead of pursuing a career in schools or colleges. Some parents give up their careers to home-school their children so as to shield them from the dangers associated with institutional day and boarding schools. It is evident that the home-schooled children of such parents enjoy certain advantages over their counterparts in institutional schools.

The commute to school presents several challenges of its own. Children in day schools are exposed to undesirable social elements and dangers on their daily commute. Parents determined to find quality schooling for their children beyond the limits of the neighbourhood get up as early as 4:30 a.m. to prepare children to board buses at 5:30 a.m. to schools on the other side of town, in a continuous dawn-to-dusk routine. Boarding schools come with their own set of social challenges, such as unhealthy competition and bullying, that have lasting effects on many young lives.

Home-schools as private schools

The culture of private schools is entrenched in Kenya. The children of prominent public officials attend private schools rather than public ones. Private schools have better educational facilities and better teacher-to-pupil ratios, leading to better performance in public examinations.

Economic realities and high tuition fees put private schools beyond the reach of many parents, yet the option of providing privately tutored, high-quality schooling at home is rarely considered. Home-schooled children register as private candidates for public exams, and some of them have done exceptionally well.

There is really no essential difference between private schooling and homeschooling: both models of learning are a move away from public schools. Denying a section of society the right to educate children at home is therefore discriminatory and contrary to Article 27 of the Constitution of Kenya.

Parents with highly mobile careers that require them to relocate frequently find that changing schools is almost always a traumatic experience for children, who must make new friends and adapt to a new physical environment and new teachers. The challenge is even greater where children are moved from one cultural context and education system to another. If one of the parents in a family going through this kind of experience is available to home-school the children, homeschooling provides a point of stability that significantly mitigates the trauma.

Parents of home-schooled children form networks that facilitate regular joint activities among their children: they visit places such as museums, factories, universities, go boat-rowing, and attend music concerts. They enroll their children in various activities outside their homes such as football, swimming, and music lessons. In addition, there are joint annual events for various homeschooling communities.

Contrary to the belief that the school environment is the best place for learning social skills, it often inculcates unhealthy competition rather than co-operation. The idea that all children progress intellectually at roughly the same pace is ingrained in the thinking of the school system, and yet it works against the need to cater for children’s individual differences. Pupils are often content to come top of their class, despite the fact that every class will have a top student however low the quality of learning; an emphasis on coming top of the class thus easily encourages contentment with mediocrity rather than the pursuit of excellence. Schools function in a specific social context, and are a reflection of that context.

Thus, with the increasing erosion of social values, schools are now places where children learn grossly anti-social habits such as violence and substance abuse.

Homeschooling and social class

Members of the Kenyan middle class are more likely to be inclined towards homeschooling, mainly because they are more likely to appreciate educational theory and practice, and because they can more readily afford to have one parent devoted to homeschooling the children. Parents in low-income brackets can rarely live off the salary of one spouse. Nevertheless, homeschooling is not the exclusive province of the middle class and the wealthy.

Most homeschooling parents use officially sanctioned curricula. Some of the curricula used in home-schools require parents to get training before using them. Homeschooling parents also benefit from the resources of homeschooling organisations outside Kenya, including the Global Home Education Exchange, the Home School Legal Defense Association, and the National Home Education Research Institute.

Kenyan universities typically assess students for admission using the results of the Kenya Certificate of Secondary Education (KCSE), whether a candidate sits the exam in a public school, in a private school, or as a privately registered candidate – as would typically be the case for home-schooled candidates. For home-schooled students following other curricula, the Kenya National Examinations Council has a system of interpreting results to indicate their 8-4-4 equivalents, thereby enabling universities to make informed decisions about admissions.

For overseas education, various clusters of universities use a variety of entry tests to assess students who have gone through different education systems and to determine whether or not they have the requisite skills (such as language proficiency, comprehension, reasoning, and basic maths) to handle university work. American universities rely on tests such as the SAT, set by the College Board, to assess applicants for undergraduate courses, and the Graduate Record Examinations (GRE), administered by the Educational Testing Service (ETS), for applicants to post-graduate courses.

There is no evidence that children who have gone through homeschooling are disadvantaged in comparison with those who have attended institutional schools. According to the US-based National Home Education Research Institute, the home-educated score 15 to 30 percentile points above public-school students on standardised academic achievement tests (the public school average is the 50th percentile; scores range from 1 to 99). Home-schooled students also typically score above average on standardised tests such as the SAT. The measured conclusion of the Institute is: “It is possible that homeschooling causes the positive traits reported above. However, the research designs to date do not conclusively ‘prove’ that homeschooling causes these things. At the same time, there is no empirical evidence that homeschooling causes negative things compared to institutional schooling.”

I first met a home-schooled child in the mid-1990s. She was the daughter of friends of mine who were also neighbours in Nairobi’s Buruburu Estate. At the age of five, she was already confident and articulate, and meeting her began to dispel my doubts about homeschooling. About ten years ago, I was asked to look at some research papers written by home-schooled high school finalists following a different curriculum from 8-4-4. I was pleasantly surprised to find that, unlike their counterparts in the 8-4-4 system, they were remarkably well acquainted with library research and writing: they intelligently cited various books and articles using footnotes, and meticulously laid out their lists of references in a manner reminiscent of what is expected of first-year university students in Kenya! I have also had an opportunity to facilitate a “Thinking Skills” course for five home-schooled high school students, and was impressed by their confidence, clarity of thought and expression, and keenness to learn.

In view of my reflections, parents who are willing and able to provide personalised education for their children at home, often at great sacrifice to themselves, ought not to be denied the right to do so. The homeschooling community is not lobbying for the abolition of schools: it appreciates that not everyone can home-school, because some are ill-equipped for the task while others are obligated to work to provide for their families. Nonetheless, society is enriched by diversity, including diversity in approaches to formal education. What we must all ensure is that, whether through institutional schooling or homeschooling, the child gets the knowledge, skills and attitudes that enable him or her to contribute positively to the holistic development of both himself or herself and society.

Gen Z, the Fourth Industrial Revolution, and African Universities

By Paul Tiyambe Zeleza

The second annual CyFyAfrica 2019, the Conference on Technology, Innovation, and Society, took place in Tangier, Morocco, on 7–9 June 2019. It was a vibrant, diverse and dynamic gathering attended by policy makers, United Nations delegates, ministers, governments, diplomats, the media, tech corporations, and academics from over 65 nations, mostly African and Asian countries.

The conference unapologetically stated that its central aim was to bring the continent’s voices to the table in the global discourse. The president’s opening message emphasised that Africa’s youth need to be put at the front and centre of African concerns as the continent increasingly relies on technology for its social, educational, health, economic and financial needs. At the heart of the conference was the need to provide a platform for the voices of young people across the continent. And this was rightly so. It needs no argument that Africans across the continent need to play a central role in determining crucial technological questions and answers, not only for their continent but also far beyond.

In the technological race to advance the continent, there are numerous cautionary tales that Africans need to learn from, tales that those involved in the designing, implementing, importing, regulating, and communicating of technology need to be aware of.

The continent stands to benefit from various technological and artificial intelligence (AI) developments. Ethiopian farmers, for example, can benefit from crowdsourced data to forecast conditions and improve crop yields. The use of data can help improve services within the healthcare and education sectors. The huge gender inequalities that plague every social, political and economic sphere can be brought to the fore through data. Data that exposes gender disparities in key positions, for example, makes plain the need to change societal structures so that women can serve in such positions; it also brings general awareness of inequalities, which is central to progressive change.

Having said that, there already exist countless die-hard technology worshippers, some only too happy to blindly adopt anything data-driven and “AI” without a second thought about the possible unintended consequences, both within and outside the continent. Wherever the topic of technological innovation comes up, what we constantly find are advocates of technology and of attempts to digitise every aspect of life, often at any cost.

At the CyFyAfrica 2019 conference, there were plenty of such tech evangelists – people blindly accepting ethically suspect and dangerous practices and applications under the banner of the “innovative”, the “disruptive” and the “game-changing”, with little, if any, criticism or scepticism. Given that we have enough tech-worshippers holding the technological future of the continent in their hands, it is important to point out the precautions that need to be taken and the lessons that need to be learned from other parts of the world as the continent races forward technologically.

Reinforcing biases and stereotypes

Just like their Silicon Valley counterparts, African tech start-ups and innovations can be found in every corner of the continent, from Addis Ababa to Nairobi and Cape Town. These innovations span areas such as banking, finance, healthcare and education, and even include “AI for good” initiatives, run by companies and individuals both within and outside the continent. Understandably, companies, individuals and initiatives want to solve society’s problems, and data and AI seem to provide quick solutions. As a result, there is a temptation to fix complex social problems with technology – and this is exactly where problems arise.

In the race over which start-up will build the next smart home system or state-of-the-art mobile banking application, we lose sight of the people behind each data point. The emphasis is on “data” as something that is up for grabs, something that uncontestedly belongs to tech companies, governments and industry, completely erasing the individual people behind each data point. This erasure makes it easy to “manipulate behaviour” or “nudge” users, often towards outcomes that are profitable for the companies. The rights of the individual, the long-term social impacts of AI systems and the unintended consequences for the most vulnerable are pushed aside, if they ever enter the discussion at all. Whether for small start-ups or for more established companies that design and implement AI tools, at the top of the agenda is the collection of more data and the building of more efficient AI systems, not the welfare of individual people or communities.

Rather, whether explicitly laid out or not, the central aim is to analyse, infer, and deduce users’ weaknesses and deficiencies for the benefit of commercial firms. Products, ads, and other commodities can then be pushed to individual “users”, as if each existed as an object to be manipulated and nudged towards behaviours deemed “correct” or “good” by these companies and developers.

The result is AI systems that alter the social fabric, reinforce societal stereotypes and further disadvantage those already at the bottom of the social hierarchy. UN delegates addressing online terrorism and counterterrorism measures, for instance, exclusively discussed Islamic terrorist groups, despite the fact that white supremacist terrorist groups have carried out more attacks than any other group in recent years. This is one example of how socially held stereotypes are reinforced and wielded in the AI tools that are being developed.

Although it is hardly ever made explicit, many of the ethical principles underlying AI rest firmly within utilitarian thinking. Even when the unfairness and discrimination that algorithmic decision-making inflicts on certain groups and individuals are brought to the fore, solutions that benefit the majority are sought. For instance, women have been systematically excluded from entering the tech industry, minorities have been forced into inhumane treatment, and systematic biases have been embedded in predictive policing systems, to mention but a few examples. Yet although society’s most vulnerable are disproportionately impacted by the digitisation of various services, proposed solutions to mitigate unfairness hardly ever consider such groups as crucial parts of the solution.

A question of ethics

Machine bias and unfairness are issues that the rest of the tech world is grappling with. As technological solutions are increasingly devised and applied to social, economic and political issues, the problems that arise from the digitisation and automation of everyday life also become increasingly evident. Current attempts to develop “ethical AI” and ethical guidelines, both within the Western tech industry and in academia, illustrate this awareness and the attempts to mitigate these problems. Key global players in technology – Microsoft and Google’s DeepMind from the industry sector, and Harvard and MIT from the academic sphere – are primary examples of this recognition of the potentially catastrophic consequences of AI for society. As a result, ethics boards and curricula on ethics and AI are being developed.

These approaches to developing, implementing and teaching responsible and ethical AI take multiple forms, perspectives and directions, and emphasise various aspects. This multiplicity of views and perspectives is not a weakness but rather a desirable strength, necessary for accommodating healthy, context-dependent remedies. Insisting on a single framework for the various ethical, social and economic issues that arise in different contexts and cultures with the integration of AI is not only unattainable; it amounts to advocating a one-size-fits-all dictatorship.

Nonetheless, given the countless technology-related disasters and cautionary tales that the global tech community is waking up to, there are crucial lessons that African developers, start-ups and policy makers can learn. The African continent need not go through its own disastrous cautionary tales to discover the dark side of the digitisation and technologisation of every aspect of life.

AI over-hype

AI is not magic; it is a buzzword thrown around so carelessly that it has become increasingly vacuous. What AI refers to is notoriously contested, and the term is impossible to define conclusively – and it will remain that way given the different ways in which different disciplines define and use it. Artificial intelligence can refer to anything from highly overhyped and deceitful robots, to Facebook’s machine learning algorithms that dictate what you see on your News Feed, to your “smart” fridge and everything in between. “Smart”, like AI, has increasingly come to mean devices that are connected to other devices and servers, with little to no attention paid to how such hyperconnectivity simultaneously creates surveillance systems that deprive individuals of their privacy.

The over-hyped and exaggerated representation of the current state of the field poses a major challenge, and both researchers within the field and the media contribute to it. The public is often made to believe that we have reached AGI (Artificial General Intelligence), or that we are at risk of killer robots taking over the world, or that Facebook’s algorithms created their own language, forcing Facebook to shut down the project – when none of this is in fact correct.

The robot known as Sophia is another example of AI over-hype and misrepresentation. This robot, best described as a machine with some face-recognition capabilities and a rudimentary chatbot engine, is falsely described as semi-sentient by its maker. Saudi Arabia, a nation where women are treated as second-class citizens, has granted this machine citizenship, thereby treating the “female” machine better than its own female citizens. Similarly, neither the Ethiopian government nor the media paused to reflect on how the robot’s stay in Addis Ababa should be covered; instead, the over-hype and deception were amplified as the robot was treated as some God-like entity.

Leading scholars in the field, such as Melanie Mitchell, emphasise that we are far from “superintelligence”. The current state of AI is marked by crucial limitations, such as the lack of common sense, which is a crucial element of human understanding. Similarly, Jeffrey Bigham emphasises that in most of the discussion of “autonomous” systems (be they robots or speech recognition algorithms), a heavy load of the work is done by humans, often cheap labour – a fact that is set aside because it doesn’t sit well with the AI over-hype narrative.

Over-hype does not only paint an unrealistic image of the field; it also distracts attention from the real danger of AI, which is much more invisible, nuanced and gradual than “killer robots”. The simplification and extraction of human experience for capitalist ends, presented as behaviour-based “personalisation”, is a practice that looks banal on the surface but demands far more attention and scrutiny. Similarly, algorithmic predictive models that infer habits, behaviours and emotions should concern us, since most of these inferences reflect strongly held biases and unfairness rather than getting at any in-depth causes or explanations. The continent would do well to adopt a dose of critical appraisal when presenting, developing and reporting on AI. This requires challenging the mindset that portrays AI as having God-like power, and seeing AI instead as a tool that we create, control and are responsible for – not as something that exists and develops independently of those who create it. Like any other tool, AI embeds and reflects our inconsistencies, limitations, biases, and political and emotional desires.

Technology is never neutral or objective – it is a mirror that reflects societal bias, unfairness and injustice. Yet AI tools deployed in various spheres are often presented as objective and value-free; some automated systems in domains such as hiring and policing are even put forward with the explicit claim that they eliminate human bias, since automated systems, after all, apply the same rules to everybody. That claim is in fact one of the single most erroneous and harmful misconceptions about automated systems.

As the Harvard-trained mathematician Cathy O’Neil explains, “Algorithms are opinions embedded in code.” The widespread misconception of algorithmic objectivity prevents individuals from asking questions and demanding explanations. How we see the world and how we choose to represent it are reflected in the algorithmic models of the world that we build. The tools we build necessarily embed, reflect and perpetuate socially and culturally held stereotypes and unquestioned assumptions. Any classification, clustering or discrimination of human behaviours and characteristics that our AI systems produce reflects those stereotypes, not an objective truth.
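
O’Neil’s point can be made concrete with a deliberately tiny sketch. The data, groups and thresholds below are all invented for illustration; the point is only that a “model” trained on biased historical hiring decisions will faithfully apply yesterday’s discrimination as today’s seemingly neutral rule.

```python
# Invented toy data: (qualification_score, group, hired_by_past_humans).
# Past human decisions favoured group "A" over group "B" at equal scores.
history = [
    (8, "A", True), (8, "B", False),
    (6, "A", True), (6, "B", False),
    (9, "A", True), (9, "B", True),
    (4, "A", False), (4, "B", False),
]

# "Training": learn, per group, the lowest score at which anyone was hired.
threshold = {}
for score, group, hired in history:
    if hired:
        threshold[group] = min(threshold.get(group, score), score)

print(threshold)  # {'A': 6, 'B': 9} -- the learned rule is stricter for group B

# The model now applies "the same rule" to everybody, but the rule itself
# encodes the old discrimination as a decision boundary.
for score, group in [(7, "A"), (7, "B")]:
    print(group, "hired" if score >= threshold[group] else "rejected")
    # A hired, B rejected -- despite identical qualification scores
```

Nothing in this code mentions prejudice; the opinion is smuggled in through the training data, which is precisely the sense in which an algorithm is an opinion embedded in code.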

UN delegates working on online counterterrorism measures yet focusing explicitly on Islamic groups – despite over 60 per cent of mass shootings in the USA in 2019 having been carried out by white nationalist extremists – offer a worrying example of how stereotypically held views drive both what we perceive as a problem and the type of technology we develop.

Algorithmic decision-making

A robust body of research, as well as countless reports of individual personal experience, shows that various applications of algorithmic decision-making result in biased and discriminatory outcomes. These discriminatory outcomes affect individuals and groups that are already on society’s margins – those viewed as deviants and outliers, people who refuse to conform to the status quo. Given that the most vulnerable are affected by technology the most, it is important that their voices are central to the design and implementation of any technology that is used on or around them. Their voices need to be prioritised at every step of the way, including in the designing, developing and implementing of any technology, as well as in policy-making.

As Africa grapples with catching up on the latest technological developments while protecting its people from the consequential harm that technology causes, policy makers, governments and firms that develop and apply various technologies to the social sphere need to think long and hard about what kind of society we want and what kind of society technology drives. Protecting and respecting the rights, freedoms and privacy of the very youth that the continent’s leaders want to put at the front and centre should be prioritised. This can only happen if guidelines and safeguards for individual rights and freedoms are put in place.

Mining people for data

Invasion of privacy is an increasingly pressing issue in every sphere of life, including insurance, banking, health and education services. Start-ups are emerging from all corners of the continent at an exponential rate to develop the next “cutting-edge” app, tool or system – to collect as much data as possible and then infer and deduce “users’” various behaviours and habits.

However, little if any attention is paid to the fact that the digitisation and automation of these spheres necessarily bring their own, often not immediately visible, problems. In the race to come up with the next “nudge” mechanism that could be used in insurance or banking, the competition to mine the most data seems to be the central agenda. These firms take it for granted that such data is out there for the taking and automatically belongs to them. The discourse around “data mining” and the “data-rich continent” shows the extent to which the individual behind each data point remains non-existent. This erasure of the individual – an individual with fears, emotions, dreams and hopes – behind each data set is symptomatic of how little attention is given to privacy concerns. The discourse of “mining” people for data is reminiscent of the coloniser attitude that declares humans raw material free for the taking.

Data is necessarily always about something and never about an abstract entity. The collection, analysis and manipulation of data possibly entails monitoring, tracking and surveilling people. This necessarily impacts them directly or indirectly, whether it is change in their insurance premiums or refusal of services.

AI technologies that are aiding decision-making in the social sphere are developed and implemented by private companies and various start-ups for the most part, whose primary aim is to maximise profits. Protecting individual privacy rights and cultivating a fair society is, therefore, the least of their priorities, especially if such practice gets in the way of “mining”, freely manipulating behaviour and pushing products onto customers. This means that as we hand over decision-making regarding social issues to automated systems developed by profit-driven corporates, not only are we allowing our social concerns to be dictated by corporate incentives (profit), but we are also handing over moral and ethical questions to the corporate world.

“Digital nudges” – behaviour modifications developed to suit commercial interests – are a prime example. As “nudging” mechanisms become the norm for “correcting” an individual’s behaviour, eating habits or exercise routine, the corporates, private companies and engineers developing automated systems are bestowed with the power to decide what the “correct” behaviour, eating habit or exercise routine is. Questions such as who decides what “correct” behaviour is, and for what purpose, are often completely ignored. In the process, individuals who do not fit the stereotypical image of a “fit body”, “good health” or a “good eating habit” end up being punished, ostracised and pushed further to the margins.
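
The value judgment at the heart of a digital nudge is often nothing more than a developer-chosen constant. The sketch below is entirely hypothetical – the app, the target and the message are invented – but the widely repeated 10,000-step goal it uses is commonly traced back to a 1960s pedometer marketing campaign rather than to any medical consensus:

```python
from typing import Optional

# Hypothetical fitness-app nudge. The constant below *is* the answer to
# "who decides what correct behaviour is": everyone who falls short of the
# developer's chosen target gets prodded towards the "correct" routine.
DAILY_STEP_TARGET = 10_000
NUDGE_MESSAGE = "You're falling behind! Get moving."

def maybe_nudge(steps_today: int) -> Optional[str]:
    """Return a nudge for anyone below the developer's notion of 'enough'."""
    if steps_today < DAILY_STEP_TARGET:
        return NUDGE_MESSAGE
    return None

print(maybe_nudge(6_500))   # nudged
print(maybe_nudge(12_000))  # None: left alone
```

A single line of configuration thus quietly decides whose daily life counts as deficient.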

The use of technology within the social sphere often, intentionally or accidentally, focuses on punitive practices, whether it is to predict who will commit the next crime or who will fail to pay their mortgage. Constructive and rehabilitative questions, such as why people commit crimes in the first place or what can be done to rehabilitate and support those who have come out of prison, are almost never asked. Technological developments built and applied with the aim of bringing security and order necessarily bring cruel, discriminatory and inhumane practices to some. The cruel treatment of the Uighurs in China and the unfair disadvantaging of the poor are examples in this regard.

The question of the technologisation and digitalisation of the continent is also a question of what kind of society we want to live in. African youth solving their own problems means deciding what we want to amplify and show the rest of the world. It also means not importing the latest state-of-the-art machine learning systems, or any other AI tools, without questioning their underlying purpose, who benefits, and who might be disadvantaged by their application.

Moreover, African youth working in the AI field need to create programmes and databases that serve various local communities, rather than blindly importing Western AI systems founded upon individualistic and capitalist drives. In a continent where so much of the narrative is hindered by negative images of migration, drought and poverty, using AI to solve our problems ourselves means using it on our own terms, to express who we are and how we want to be understood and perceived: a continent where community values triumph and nobody is left behind.

Gen Z, the Fourth Industrial Revolution, and African Universities

By Paul Tiyambe Zeleza

Africa is in a steep democratic recession. According to the Freedom House think tank, just 11 per cent of the continent is politically “free”, and the average level of democracy (understood as respect for political rights and civil liberties) has fallen in each of the last 14 years. The Ibrahim Index of African Governance shows that democratic progress lags far behind citizens’ expectations. The vast majority of Africans want to live in a democracy, but the proportion of those who believe they actually do falls almost every year. The future of African freedoms is in peril.

As for the “independence project” that birthed the current African states, it has been cannibalised by the political class which—apart from engaging in nefarious activities to consolidate power, gobbling up resources and terrorising the citizenry—has proven to lack the imagination to curate a vision for the continent. For now, we do not know what to do, nor do we know where and how to find the answers to address this socio-political crisis.

Moreover, liberal democracy—characterised by the enjoyment of legally guaranteed freedoms and rights by individuals—has wobbled over the past two decades. Today we are witnessing an upsurge in fascism, parochialism and narrow nationalisms as a backlash to a neoliberalism gone wild. All over Europe and in other parts of the world, a new kind of nationalism is in vogue.

In Africa, where this model is a relatively recent import, its symptoms, including deepening inequality and the alienation and exclusion of entire sections of the population, form the most compelling economic trend of the era. Add to this, writes John Githongo, the growing currency of identity politics—which is extremely comforting in this era of existential uncertainties—and the symptoms of the malaise are manifesting themselves more quickly and causing more intense social and political dislocations than ever before. Ultimately, the economic logic of the market, and those who participate in it, is irrational; it does not typically self-correct and social/political distress intensifies the power of identity politics (religion, gender, tribe, clan, sect, etc.) and hollow populism. The convergence of political and economic interests in society has led to a corruption of democracy as it has come to be owned by oligarchies with the power to buy elections at worst and, at best, to purchase policy even in so-called mature democracies. As a result, democracy is threatened by a new wave of disaster capitalism which, at its core, is thriving on the subversion of the state for the extraction of resources.

Underlying all this is Western indifference and, sometimes, hostility. Today, even Francis Fukuyama, one of the most ardent proponents of the liberal democratic model, has acknowledged the erosion of political power and the decline in political trust in public affairs generally. Indeed, with the imminent collapse of the neoliberal model, the “end of history” mantra no longer holds any meaning.

In the case of Africa, the neoliberal ideological assault has already devastated the social fabric and, as spaces for progressive discourse and debate, our knowledge production centres have already been destroyed. For instance, notes Professor Issa Shivji, university structures have been corporatised. Courses have lost their integrity as they have been semesterised and modularised. Short courses proliferate. Basic research has been undermined as policy consultancy overwhelms faculty. Knowledge production has been substituted by online information gathering.

As a consequence, the recent rise of “new nationalisms” has caught intellectuals in the global South by surprise: they did not anticipate it, nor do they know how to react to it. Moreover, the fourth industrial revolution, which began at the turn of the century and builds on the digital revolution, is characterised by machine learning and artificial intelligence, and has fundamentally changed the arena of contestation for local and global narrative dominance. Past models of civic engagement are proving barren as traditional institutions (media, civil society and academia) struggle to find a footing in this new dispensation. The place of the intellectual in this digital Dark Age shall prove instrumental in helping society to make sense of itself.

The failed independence project

While the independence struggle delivered freedom and self-rule (at least in theory), the political freedoms envisaged and attained without a corresponding economic sovereignty to anchor and totalise them left black populations vulnerable to imperial influences and their cronies.

The effect of this is the collapse of the ‘independence project’, which has effectively failed to deliver on the aspirations of the anti-colonial movements that birthed it. Fifty-plus years on, the African state is in a worse situation than it was at independence. Independence and all that it portends is now over. Crony capitalism is entrenched, and the vast majority of the population has become disillusioned with the state. Evidently, the palace coups, civil unrest and regime changes happening across Africa are symptomatic of a political class that has been devoured by its own contradictions.

This state of affairs, observes Kalundi Serumaga, presents our desperate, venal governing class with opportunities for greater venality. Having long exhausted whatever political legitimacy the “attainment of independence” gave them, they have continued looking for new means of obtaining some form of legitimacy even as they continue to plunder.

Moreover, the new opportunities for plunder are now blinding our leaders to the very real dangers of unprincipled relationships that could leave our descendants at best in perennial debt bondage and at worst in a morbid new form of slavery. This is the worst possible kind of group to have in charge of making the key decisions at this very critical point in African history.

Trade is war and international firms and tycoons understand this. The modern frameworks of international business decision-making are rooted in racism, predatory systems and opaque structures designed to rip off African resources using unmitigated and rigged international laws and concessions.

There are a number of ways in which neocolonialism and capitalism, individually as well as collectively, disinherit the African continent and rob it of critical resources meant for its people.

Seven of the ten largest firms in Kenya are British, and the top 100 firms are heavily skewed towards foreign ownership; this pattern is replicated across the continent. Private capital from racist and predatory Wall Street-listed firms generates undue pressure on hapless local leaders, who either cave in to kickbacks or are voted out through the purchased political influence of rival powers. These private capital tentacles have sunk deep into African society, exerting incredible pressure on the direction and nature of the legislation that is passed and implemented across Africa.

Modern barbarians

The wobbling Euro-American edifice, the culmination of the 2,000-year-old Greco-Roman-Hebrew Caucasian civilisational instinct, portends a return to a new Dark Age. While there exist never-ending contestations about when a historical period starts and when it ends, historians often structure civilisations as having gone through nine socio-political stages lasting about 250 years; a civilisation accommodates two to three empires and lasts roughly 500 years. The much-hyped decline of the United States therefore marks not just the decline of an empire but, by extension, the eventual decay and decline of the Euro-American superstructure.

The prophecies of historians like Jim Nelson Black and Charles Colson largely point to the return of barbarian instincts dominated by modern barbarians – not the Huns, the Visigoths, the Ostrogoths and the Vandals of the 400s AD, but bearers of a new form of barbarism. A casual foray into the politics of identity reveals a bizarre strain of unchecked instincts, going as far as seeking to legalise paedophilia as part of minority politics. The barbarian of the new Dark Age is therefore said to be well attuned to the social finesse of modernity while still harbouring the dark primitivism of unfettered tastes and desires. He is able to justify the most grotesque of beliefs with the finest eloquence of language and fluency of ideas.

Africa could dominate the next century

Meanwhile, Africa’s rediscovery of its ancient heritage is founded on a cultural production largely aided by a soaring interest in the realities of ancient African civilisations, a re-forging of African identities, and a democratisation of knowledge production and dissemination through digital media and other alternative platforms. The African imaginary, in the main, thus far seems largely secular, quite reactionary, and predicated on the import of identity politics from the West. The truth is that the current global shift occasioned by the rise of new empires such as China and India is precipitating a fluidity of ideas in the international marketplace, such that if Africa manoeuvres strategically in that marketplace, it could dominate the next century.

In the cycle of human civilisation, with its periods of growth and downturns spanning centuries, Africa has also inevitably occupied a dominant position by waging war against Rome and other empires. In total, of the 200 empires chronicled to have dominated the last 6,000 years, at least 37 were either African or extended to Africa, bringing with them civilisational goodies from across the Mediterranean and the Red Sea.

As is the norm with imperial dominance, each African empire infused human existence with certain sensibilities on the zigzagging path from ancient history to modernity. From law and politics to philosophy, art, social courtesies, moral codes and military prowess, each empire possessed a dominant ethic which aided its ascent and which it bequeathed to the world.

Even supposing the absence of a clear export to the wider human race, at the very least Africans can take pride in the mere existence and sophistication of these empires and ancient cities. Axum, for example, was among the first empires to fully endorse an official religion around the same time as Constantine issued his edict in the 300s AD. Although one may argue against the nationalisation of religion—more so Christianity, given the hegemonic undertones of such endorsement—such a move unifies the metaphysics of an empire, providing its citizens with a commonality of ethics and moral codes.

And so, for an African renaissance to flourish, a line has to be drawn in the sands of history, reconnecting the broken and disjointed retellings of African history so that the end product is a wholesome narration of the path the African soul has trodden from the medieval world into modernity: in the arts, a string from Timbuktu and Alexandria to modern studies of Africa; in military strategy, a link from the great Hannibal Barca of North Africa to modern military strategies.

The recent uprisings in Algeria and Sudan have ignited revolutionary fervour across sub-Saharan Africa, rekindling a hope and a desire for change whose final outcome isn’t yet clear. Political revolutions disconnected from a clear pedagogy can easily precipitate unintended chaos on a scale often far more anarchic than the organised repression of the toppled regime. Revolutions devoid of a guiding ideology and a critical pool of enlightened individuals generate a crusading fervour that is a recipe for ever greater barbarism.

A coalescing of historical forces, renewed knowledge production, and an Africa teeming with continental artists, intellectuals, writers and entertainers, and with local conglomerates – from media houses and record labels to nightclubs, manufacturing plants, civic organisations, religious movements and theatres – can help fuel a thriving African renaissance.

Currently, the 54 states that lie within the colonial African boundaries have succumbed to the lightning speed of technology and finance in such a way that the utility of the nation-state as the critical form of organisation must give way to cross-border cultural liaisons and imports. Communitarianism revives the age-old desire for new forms of human organisation unencumbered by the ever-expanding bureaucracy of statecraft and its burdensome tentacles.

By its very nature, a renaissance carries a level of in-built cultural awakening that can infuse a vibrant consciousness in the masses. Contrasted with revolutions, where the drastic takedown of symbolic leaders within the old structures creates an illusion of change, a renaissance instigates the production of new knowledge, identity and consciousness, with a far longer-lasting impact on group and self-identity. Writing on the Harlem Renaissance, the philosopher and critic Alain Locke referred to that renaissance as “a spiritual coming of age” in which blacks were clasping their “first chances for group expression and self-determination”.

The resurgence of Nigerian literature, Tanzanian ethno-musicality, and greater sub-Saharan ownership of historical art anchored in gradual identity formation are hopeful manifestations of African renaissance in literature, business, stage performance, music, and the arts.

Digitisation and the attendant democratisation of cultural production and knowledge exchange amplify critical yet marginalised voices in ways that upstage the age-old elitist models of knowledge production. Renaissance, therefore, isn’t so much the creation of newer forms of cultural and artistic expression as the retrieval of the anthropological knowledge of our ancient origins and developments. It is the drawing of a link to our unbroken African histories – grounded in a renewed interest in social production – which for now are sadly domiciled in imperial vaults across the oceans.

A demographic that is increasingly young and black, numbering some 2.5 billion, will dominate the global landscape by around 2050 and tilt global racial numerical dominance towards the global South, with massive implications. Demographic explosions, when coupled with distributive policies and expansionary goals, translate numerical advantage into demographic dividends whose payoff lasts for decades. Conversely, when saddled with decaying nation-states led by kleptocratic and unimaginative elites within vassal states – such as Kenya and South Africa – sharp increases in population translate into a demographic burden.

Africa does not have much time left. We face environmental collapse, ethnic cleansing and debt bondage. Decades of cultural propaganda have desensitised many of the youth to the dangers inherent in losing cultural sovereignty. This, coupled with the cynical and inept example set by the older generation in power, has created societies that are very vulnerable to any passing idea that could lead to a takeover. The urgency of reigniting African consciousness, given the rapid shift of the current global paradigms away from the Euro-American centre, places the burden of restitutive demands on African intellectuals, as the current producers of knowledge. And demography isn’t always destiny: if not well managed, such a population explosion – and the rising pressure on nature and urban systems – could actually precipitate widespread ecological destruction.

Africa’s primary hope in many ways isn’t domiciled in the hare-brained ideas and visions peddled by middle-aged white men colluding in the plunder of African resources or the hegemonic gaze, whether facing East or West. The crucible of African renewal will be a deliberate decision by Africans to construct a narrative of a robust, generative, diverse identity born of the African experience.

Gen Z, the Fourth Industrial Revolution, and African Universities

By Paul Tiyambe Zeleza

Over the past few weeks, the trade war between the United States and China rapidly escalated when President Trump’s administration took an extreme and unprecedented step against Huawei, the Chinese telecommunications giant. The US government declared a national emergency and issued an executive order banning American companies from using any technology that could pose a threat of espionage. The first foreign company included in this blacklist was Huawei, which was accused of acting on behalf of the Chinese government to undermine US national security.

Shortly after that, the American tech behemoth Google announced its decision to withhold its Android software from Huawei, citing the need to prevent the Chinese company from exploiting vulnerabilities that could expose customers to serious cybersecurity and privacy risks. Beyond directly damaging Huawei’s smartphone business, this move set in motion a technology Cold War that is quickly ramping up.

How is this global quarrel going to affect Africa?

Huawei is the largest cellphone provider in many African countries, such as South Africa, and has built at least 50 per cent of Africa’s 4G network, in addition to being a critical partner in many “smart city” projects. On the one hand, it is in the best interest of African governments to maintain a good relationship with Google, since the company is investing huge capital in developing Africa’s future artificial intelligence (AI) technologies. On the other hand, African countries’ dependence on China goes far beyond telecommunications technology – today nearly 20 per cent of African governments’ external debt is owed to China, making it the continent’s largest single creditor nation.

What does the future hold for Africa? Will this ongoing tech war force Africa to choose between the United States and China, or might it present an opportunity for the continent to play a relevant role on the global political scene?

Trump’s trade war and the “national security threat” posed by Huawei

Why is Donald Trump barring US companies from engaging in telecommunications trade with Huawei and other foreign companies accused of jeopardising national security? And why did a global digital giant such as Google follow up by taking an even stronger position? There are many reasons why the American president made such a bold and risky move, including curbing China’s apparently unstoppable technological advancement (especially in the AI field).

As Western societies, and America in particular, are facing a seemingly unstoppable cultural and political decline, it comes as no surprise that the global power balance is shifting in favour of the Russia/China axis. Many of the promises made by President Trump hold no substance so far, and many European forces see him as a threat to democracy and planetary stability. Sadiq Khan, the mayor of London, even went as far as comparing Trump’s language to that of “the fascists of the 20th century” just before the US president’s state visit to the United Kingdom. As the American giant is slowly crumbling under its own weight, Trump’s need for a new (real or perceived) enemy came in the form of a trade war against China, the only superpower that now threatens the US position as a global hegemon.

It all started in early 2018, when a team of government hackers from the Australian Signals Directorate was tasked with evaluating the harmful potential of 5G, a powerful technology that allows users to move data up to 100 times faster than on current networks. 5G is a cornerstone of the coming Web 3.0 evolution that will integrate different devices, such as smart home appliances, driverless cars, and augmented reality (AR) devices.

However, all this new interconnectivity comes at a price: as the number of entry points into the network increases, it becomes easier for a malicious group to gain access and conduct cyber warfare. Armed with all the offensive cyber tools they needed, the Australian cybersecurity forces set out to test the damage they could inflict on a hypothetical target nation if they had access to malware and tools installed in the 5G network. The result was scary and unsurprising at the same time: exploiting 5G could expose the entire infrastructure of a country, providing a potential cybercriminal with countless opportunities for sabotage and espionage.
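
The underlying arithmetic of this warning is simple. As a back-of-the-envelope illustration (the numbers are invented, and treating entry points as independent is a simplifying assumption), if each of n entry points can be compromised with a small probability p, the chance that at least one is compromised is 1 − (1 − p)^n, which grows rapidly with n:

```python
# Probability that at least one of n entry points is compromised, assuming
# each is independently compromised with probability p (illustrative only).
def p_at_least_one_breach(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (10, 1_000, 100_000):
    print(f"{n:>7} entry points -> {p_at_least_one_breach(1e-4, n):.4f}")
# Output:
#      10 entry points -> 0.0010
#    1000 entry points -> 0.0952
#  100000 entry points -> 1.0000 (to four decimal places)
```

A vastly denser 5G network multiplies n, which is why the Australian team’s result was “unsurprising”.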

The former Australian Prime Minister Malcolm Turnbull shared this intelligence, as well as his worries about the vulnerabilities of the 5G network, with his country’s Five Eyes partners – New Zealand, the UK, Canada and, obviously, the US. Among the members of this group, only the US took the warning seriously enough to restrict the access that the Chinese company Huawei, a world leader in 5G tech, had to its mobile networks. President Trump then took additional steps: at first, his administration threatened to withhold intelligence from any allied nation that allowed Huawei in; later, on 15 May, the US Department of Commerce blacklisted the Chinese telecommunications giant along with other international firms. American companies now need official permission before engaging in trade with them.

The consequences of the embargo and the beginning of the tech Cold War

After the sanctions were announced, Google responded by stopping all business activities with Huawei that involved the transfer of proprietary software, hardware, and services. The American digital company blocked Huawei’s access to Android, its Play Store, and other functions such as Maps, Search, and Gmail. Then Intel, which provides processors for Huawei laptops, and Qualcomm, which manufactures the wireless modems used in its smartphones, announced that they could no longer supply the Chinese company. Finally, it was the turn of ARM, the British chip designer whose designs underpin 95 per cent of the processors used in mobile devices worldwide, which had to adhere to the embargo since it has many subsidiary companies based in the US.

It’s easy to understand how all these actions look and feel like a boycott targeting Huawei’s smartphone and laptop businesses directly rather than the alleged “national security risk”, since 5G is completely irrelevant here. What’s the real game then? Global commercial dominance may be the reason behind these moves, since Huawei is one of the few global companies with the numbers to compete in, and even win, the technology race against the American hegemons Apple, Amazon, and Microsoft. Boycotting it now may serve a simple purpose: to preserve the planetary dominance of the US hyperpower by crushing its sole rival before it grows too strong. One way or another, Google-less Huawei smartphones now represent the symbol of the new technology war between the two world giants – America and China.

Huawei replied that “restricting Huawei from doing business in the U.S. will not make the U.S. more secure or stronger”, explaining that the move is only forcing “customers in the U.S. to [purchase] inferior and more expensive alternatives.” The Chinese company had anticipated that it could become the target of an American backlash and built up massive stocks of components. It also plans to launch a new operating system to replace Android before spring 2020 – the (allegedly) faster and more efficient Hongmeng. The new system will be fully compatible with all Android apps and functions; Google’s own services were in any case already banned in China.

Meanwhile, it is a known fact that the Chinese 5G giant is backed by Beijing, and Trump’s ban will not come without consequences. When Australia enforced a similar ban last year, its coal exports to China experienced all kinds of disruption, including unnecessary delays at Chinese customs. And right now the situation is especially delicate, as the trade war caused by the increased tariffs imposed by Trump on Chinese imports keeps escalating. Tension is growing even among the members of the Five Eyes, since the UK government does not seem keen on removing Huawei from its networks. On the other hand, America’s choice to ban Huawei for national security reasons, rather than admitting that it is a commercial move to put China’s economy under pressure, is diplomatically dishonest and damages the United States’ credibility with its peers. And while these nations are busy with their infighting, a new global threat is emerging in the form of a Cold War fought with apps and smartphones rather than with soldiers and bombs.

How the tech war will impact Africa

The relationship between Huawei and the West is strained at best, and for reasons that go beyond commercial competition or superpower rivalries. The conflict started well before Trump’s trade war against China. Since at least 2012, the US Justice Department has been investigating links between Huawei and Iran involving none other than Meng Wanzhou, the daughter of Huawei’s founder and the company’s chief financial officer. In January this year, US officials requested that the Canadian government detain and extradite Meng for a variety of crimes ranging from bank and wire fraud to stealing trade secrets and conspiring to defraud the US.

Recently, a red flag pointing to a possible darker agenda on the part of Huawei (or China, for that matter) was raised in Ethiopia. In January 2018, the African Union found the computer systems in its headquarters in Addis Ababa compromised by a security breach that had apparently lasted for years. Investigators found that the computers, which were installed by Huawei, sent data every night from midnight to 2 a.m. to servers in Shanghai. Both African Union and Chinese officials denied the allegations, but many still question the reasons behind such a generous gift from the Chinese telecommunications giant. Google’s ban will have a very limited effect in the US, where Huawei holds only a minor position in the mobile devices market. But in Africa the situation is very different. Huawei’s influence in Africa is enormous: it built at least 50 per cent of the continent’s 4G network, and it is undoubtedly the lead competitor in rolling out the upcoming 5G network.

And that’s just the tip of the iceberg. A huge proportion of the African population currently uses Huawei smartphones, and the company has already provided the technology used for smart city projects, autonomous vehicle development, and research partnerships. For example, together with its partner Safaricom, Huawei signed a deal with the Kenyan government in April to build a $175 million data centre at the Konza technocity. China itself has provided well over 20 per cent of the total money lent to African governments between 2000 and 2017 ($143 billion in loans), and 80 per cent of this money came from the Chinese government rather than from private investors. But the bond between Africa and China involves not just the past but also the future, since the Chinese government has pledged to invest another $60 billion by the end of this year.

Africa and the war between Google and Huawei

The most obvious and immediate consequence for the many African owners of Huawei smartphones is that they will end up with an expensive device that has lost many of its key functions. The Chinese company is also one of the principal global partners of Android and has contributed substantially to the development and growth of the popular operating system. If Huawei develops its new system, the online world will eventually be split in two – a Chinese-led Internet on one side, and a non-Chinese one led by US companies on the other. Once again, a huge technological barrier will be raised, and since Africa will stand in the middle, it is hard to imagine that it will be able to stay neutral. The IT economies of far too many African countries require them to work with Chinese companies, and Huawei may exploit the current situation to change the game in its favour.

The African tech market is quite large and, if Huawei so decides, it can be used to turn the tables against the Americans. The most likely scenario will see China and the US battling for control of global telecommunications. Africa can provide both of them with the human, technological, and market resources they need to gain an edge over each other. If this wave is ridden correctly, the whole continent may attract the investments required to reduce the current digital divide.

African governments, however, must understand how to stand their ground and prevent an unsustainable burden of debt from becoming an instrument of exploitation. They need to know how to leverage the added value Africa can provide to the two sides without becoming a pawn in this global war. But if Africans play their cards correctly, this scary Cold War scenario may become a huge opportunity to bridge the gap with the Western world.

Dateline: CHARLOTTESVILLE (VA), USA, August 11, 2017 – A gathering of self-identified “alt-right” protestors marches through a park in this small college city waving white supremacist and Nazi-affiliated flags, chanting slogans identified with “white power” movements and so-called “Great Replacement” beliefs put forth by Islamophobes (“you will not replace us”) and slogans identified with Nazi ideology (“blood and soil”). In the name of (white) American history, they are protesting the planned removal of a statue of the general who led the army of the Confederate States of America, the Southern separatist movement that took up arms against the American government in the country’s 19th century Civil War (1861-1865). Subsequent protests result in beatings of counter-protestors and one death. Days later, the President of the United States, Donald Trump, notoriously defends the white supremacists by observing that there were “very fine people on both sides.” The organiser of this “Unite the Right” rally is known in Charlottesville for his sustained online harassment campaigns against city councillors who support the removal of racist monuments.

Dateline: CHRISTCHURCH, New Zealand, March 15, 2019 – An Australian man living in New Zealand attacks worshippers at two different mosques in the city of Christchurch, killing 51 and wounding many others. He is a proponent of the Islamophobic, anti-immigrant views of a global “white power” network that disseminates its rhetoric of hate and its narrative of an imperiled white race online, via unregulated spaces within “the dark web” and via encrypted social media apps. His attack on Muslim New Zealanders is met with shock and grief within the country, an outpouring of solidarity that is expressed by Prime Minister Jacinda Ardern in her immediate response: “They were New Zealanders. They are us.” Of the shooter, whom she consistently refuses to identify by name, she says, “He is a terrorist. He is a criminal. He is an extremist…He may have sought notoriety, but we in New Zealand will give him nothing. Not even his name.” After the attacks, it becomes clear that he had been announcing his intentions in online forums and had been livestreaming the attack through a Facebook link. New Zealand moved swiftly to criminalise the viewing or sharing of the video of the attack.

Dateline: PARIS, France, May 15, 2019 – Two months after the Christchurch attack, New Zealand’s Prime Minister Jacinda Ardern stands at a lectern in a joint press conference with French President Emmanuel Macron to announce a non-binding agreement dubbed “The Christchurch Call to Action.” The agreement has as its goal the global regulation of violent extremism on the Internet and in social media messaging. Ardern calls upon assembled representatives of Facebook, Google, and Twitter to lead the way towards an online world that is both free and harm-free by enforcing their existing standards and policies about violent and racist content, improving response times in removing such content when it is reported, removing accounts responsible for posting content that violates a platform’s standards, making transparent the algorithms that lead searchers to extremist content, and committing to verifiable and measurable reporting of their regulatory efforts. Affirming that the ability to access the Internet is a benefit for all, she also asserts that people experience serious harm when exposed to terrorist and extremist content online, and that we have a right to be shielded from violent hatred and abuse.

Why is this call different from all other calls?

What action can we expect in the wake of this call? And what consequences might plausibly flow from that action?

The Internet as a site of racist hate speech and vicious verbal abuse is not a revelation; in recent years, many culture-watchers and technology journalists have documented an increasingly bold and increasingly globalised “community” of white supremacists whose initial – sometimes accidental – radicalisation is reinforced in the echo chambers of this so-called “dark web”, the encrypted social messaging platforms that Ardern identifies as in need of regulation. (I put the word “community” in quotes here because the meaning derived from the word’s Latin root [munus/muneris: the word for gift] makes it a darkly ironic way to describe these bands of people: if community is a gift we share with each other, their gift of poisonous hate is one that damages all those with whom it is shared.)

Recognising the danger of these groups, as Ardern does, and seeking to neutralise their effects on our online and in-person worlds is important, even urgent. As Syracuse University professor Whitney Phillips observes: “It’s not that one of our systems is broken; it’s not even that all of our systems are broken…It’s that all of our systems are working…towards the spread of polluted information and the undermining of democratic participation.”

The consequences of the way these systems are working are now as clear to New Zealanders in the wake of the Christchurch attacks as they have been to Americans, to Kenyans, to Pakistanis, and to Sri Lankans in the wake of their respective experiences of hate-fuelled terrorism. American Holocaust scholar Deborah Lipstadt reminds us that acts of violent hatred always begin with words, words that normalise and seek to justify the genocides, pogroms, and terror attacks to come. If we do not speak out against those words, she notes, we embolden the speakers in their drive to turn defamatory words into deadly actions.

So the action called for at Ardern and Macron’s Christchurch summit is warranted. Will it happen? Will the nations who have the ability to exert moral pressure on the companies that created and profit from these online platforms actually force a change in how white supremacist rhetoric is dealt with? Karen Kornbluh, a senior fellow at the German Marshall Fund who is quoted in Audrey Wilson’s May 15 Foreign Policy Morning Brief, thinks that in the best-case scenario “the Call to Action provides the political pressure and support for platforms to increase vigilance in enforcing their terms of service against violent white supremacist networks.” The problem with reliance on political pressure to change cultural policies driven by economic incentives and reinforced by jurisdictional divides is that when the pressure fades, the behaviour we want changed re-emerges. This has certainly been the case in prior efforts to alter Facebook’s inconsistent oversight of its users. Back in 2015, for instance, Germany’s then Federal Minister of Justice and Consumer Protection, Heiko Maas, filed a written complaint with Facebook about its practice of ignoring its own stated standards and policies for dealing with racist posts. Maas pointed out the speed with which Facebook removes photographs (like those posted by breast cancer and mastectomy survivors who seek to destigmatise their bodies) as violations of the platform’s community standards, and the corresponding inattention to user complaints about racist hate speech. A Foreign Policy analysis of Maas’s complaint letter reports that it led to an agreement between German officials and representatives of Facebook, Google, and Twitter – the very same companies who sent representatives to Ardern and Macron’s Christchurch summit – on a voluntary code of conduct that included a commitment to more timely removal of hate-filled content. That was in 2015; in Maas’s view, Facebook has subsequently failed to honour the agreement.

Even at the international/multi-national level at which Ardern’s call is framed, it is not clear how much capability there is to reform the discursive violence inflicted on us by white supremacist digital hate cultures. Audrey Wilson’s May 15 Foreign Policy Morning Brief reports that in the wake of his own visit to the Christchurch mosques that were the scene of white supremacist terror, UN Secretary-General António Guterres committed himself to combatting hate speech.

However, in a talk at the United Nations University in Shibuya (Tokyo) on March 26, 2019, Mike Smith, former Executive Director of the United Nations Counter-Terrorism Committee Executive Directorate, was pessimistic about the possibilities for monitoring sites on which people like the Christchurch killer engage in their mutual radicalisation. One could argue with some plausibility that the “soft power” of moral authority, widely acknowledged as one of the UN’s key strengths, should be used to speak out against hate and terror lest its silence on the matter foster a sense of impotence on the part of the international community. However, as Smith made clear, that level of monitoring on the part of international institutions (or national ones for that matter) is not feasible, even assuming there is no other claim on the resources that would be required. The only workable way to implement monitoring of online hate groups is for the tech companies to be doing it themselves and, as Ardern asked for in her Christchurch Call, to be reporting regularly on their efforts to international and national agencies.

What could possibly go wrong?

In considering the question of whether the Christchurch Call does, or can, mark the moment when the world begins to take white supremacist hate speech seriously, we need to consider what we are dealing with in that speech, in that “community”. One American think-piece published in the days following the Christchurch attacks observed that “[r]acism is America’s native form of fascism”, and I think it might be instructive to take that claim seriously. Though frequently a carelessly used and controversial epithet, fascism has been broadly defined as a political worldview in which some of a nation’s people have been given status as persons, as citizens, as lives that matter in a moral hierarchy, and others have had that status denied to them. Seeing racism as a variant of fascism gives us the resources to understand why online white supremacist hate speech is such an intractable problem. Essayist Natasha Lennard, a theorist of the Occupy movement that erupted in the United States in 2011, insists that “fascism is not a position that is reasoned into; it is a set of perverted desires and tendencies that cannot be broken with reason alone.” Instead, she argues that fascism – which she defines as “far-right, racist nationalism” – must be fought militantly: white supremacists must be exposed, and the inadequately regulated online spaces where their views are promulgated must be shut down. A similar no-tolerance approach to the more mainstream sympathiser sites where these views are legitimised is also warranted as part of anti-fascist (antifa) organising, she thinks. The goal of those who oppose fascism, racism, and white supremacy must be to vociferously reject these views as utterly unacceptable.

The kind of intransigent approach Lennard advocates is precisely the posture that the companies providing these online platforms are so ill-equipped and unwilling to adopt. As Foreign Policy writers Christina Larson and Bharath Ganesh both make clear, social media platforms like Facebook have long cloaked themselves in a rhetoric of utopian connectedness and free speech. Absence of regulation has been pitched to users as the precondition of popular empowerment.

Ganesh points out that there is a real disparity of treatment in the ways online platforms deal with extremist speech: where German minister Heiko Maas charged that Facebook censors photographs involving nudity while leaving hate speech to flourish, Ganesh qualifies that only some speech is left unregulated. Extremist white supremacist hate speech is routinely ignored or approached with caution and with charitable concern for the poster’s rights of expression, but extremist jihadi speech is monitored, removed, and blocked. “There is a widespread consensus that the free speech implications of such shutdowns are dwarfed by the need to keep jihadi ideology out of the public sphere,” Ganesh explains. But, “right-wing extremism, white supremacy, and white nationalism…are defended on free speech grounds.”

In part, this is precisely because of the existence of more mainstream sympathiser sites (such as Breitbart, Fox, InfoWars) that ally themselves with right-wing politicians and voters, and defend white supremacists through “dog whistles” (key words and phrases that are meaningful to members of an in-group and innocuous to those on the outside), such that, as Ganesh puts it, this particular “digital hate culture…now exists in a gray area between legitimacy and extremism”. Fearing backlashes, howls of protest about censorship, and reduced revenue streams if users migrate out of their platforms, social media companies have consistently chosen to prioritise these users over the less powerful, less mobilised minority cultures who are undermined by digital hate.

In light of this self-serving refusal to apply their own community standards even-handedly, what we are likely to see from social media platforms in response to the Christchurch Call is more legitimising of white supremacist rhetoric that is increasingly entering the mainstream of American discourse, and more policing of already marginalised viewpoints and voices. The most likely result of their caretaking of the current situation is a proliferation of the inconsistent censorship Ganesh identifies, and the extension of that censorship to the very groups and users who might be calling out white supremacy. One example of this censorship of anti-racism, predating the Christchurch Call, involved a group of feminist activists calling themselves “Resisters”, who created an event page on Facebook to promote a 2018 anti-racism rally they planned for the anniversary of the Unite the Right hate rally in Charlottesville. Facebook removed the page on the grounds that it bore a resemblance to fake accounts believed to be part of Russian disinformation efforts aimed at influencing the 2018 US mid-term elections.

What then must we do?

“The real problem is how to police digital hate culture as a whole and to develop the political consensus needed to disrupt it,” Ganesh tells us. In his view, the central question of this debate about online hate is: “Does the entitlement to free speech outweigh the harms that hateful speech and extreme ideologies cause on their targets?” That question is also posed in the Christchurch Call, and in abstraction it is a difficult one. People committed to freedom and to flourishing social worlds want both the right to express themselves and protections against the violence and dehumanisation that hate speech enacts.

Practically speaking, however, we often can draw lines that delineate hate speech from speech that needs to be protected by guarantees of right of expression (often, views from marginalised communities). Ganesh cites Section 130 of the German Criminal Code as an example: in free, democratic Germany, it is nonetheless a criminal offense to engage in anti-Semitic hate speech and Holocaust denial. The point of this legal prohibition is to disrupt efforts to attack the dignity of marginalised individuals and cultures, which is, Ganesh contends, “what digital hate culture is designed to do.” If our legal remedies begin – as the Christchurch Call asks all remedies to – with basic human rights and basic human dignity as their central concerns, they will not, he thinks, contravene our entitlement to express ourselves.

Those who fear that any attempt to delineate speech undeserving of protection will slide down a slippery slope into censorship often turn for support to the nineteenth-century British philosopher John Stuart Mill’s impassioned argument for robustly free speech in his 1859 work On Liberty. However, Mill’s motivation for that argument was his belief that freedom of expression is a key component of human dignity. Free speech does have limits, even for Mill; he articulates those limits in arguably his most famous contribution to Western political theory: the harm principle, which says that limits on an individual’s freedom are justified only to the extent that they prevent harm to others.

Recognising that words have the capacity to trigger action, Mill acknowledges that a society cannot tolerate as protected speech a polemic delivered to an angry mob outside the house of a corn dealer, in which the speaker charges the corn dealer with profiteering at the expense of hungry children and calls for death to corn dealers. Building on this view that incitement to reasonably foreseeable harm or violence warrants restrictions on speech, even the United States, with its expansive constitutional protections for speech, has enshrined limitations. (One cannot yell “fire” in a crowded theatre, for instance.)

While laws – and responsible oversight by social media platforms, if ever that can be mandated in ways they will adhere to – can structure the playing field, they cannot determine the actions of the players. For that necessary change, we must look to our own behaviours and attitudes and how each of us might play our role in reinforcing social norms. In a post-Christchurch attacks interview, American anti-racist educator Tim Wise advises people: “Pick a side. Make sure that every person in your life knows what side that is. Make sure your neighbors know. Make sure the other parents where your kids go to school know what side you are on. Make sure your classmates know. Make sure that your family knows what side you are on. Come out and make it clear that fighting racism and fascism are central to everything that you believe.”

We must, I think, resist the temptation of the easy neoliberal “solution,” the fiction that small numbers of committed individuals can neutralise a normalised culture of hate. But there is a germ of insight in Wise’s prescription. Yes, we need a better legal climate, one that levies real penalties on social media platforms that fail to monitor the content they make available in our lives; yes, we need more responsible social media companies and Internet site moderators; and we also need to all do what we can to make sure that the people who are listening to each of us are hearing messages that contribute to a healthy and caring social world.

One thing I learned from the 2014 online frenzy of misogynist hate known as “GamerGate” (the campaign of invective and abuse organised against women in the video game industry) was that a small number of committed individuals can produce a normalised culture of hate. Another thing I learned was that many of the casual reproducers of that organised hate are not fully culpable actors; they have been drawn into something they think they understand, but when they can be made to see how harmful it is, they will renounce it. I do think Natasha Lennard is right about the futility of trying to appeal to people who have chosen hate or fascism, but there are many others on the fringes who can be influenced away from those ideas. They need to be surrounded by people in their (online and offline) lives who are speaking the language of anti-racism, multicultural inclusion, and the equal right to dignity of all human beings.

If online hate has IRL (in real life) ramifications, then IRL influencing might be a way to save or reclaim some otherwise radicalised young people, and also a way to exert pressure on the social media platforms to “walk their talk” of wanting a more connected community. The Christchurch Call cannot, in and of itself, drive out the poison of white supremacist hate. But it can, perhaps, inspire us to make our communities (the gifts we share with each other) gifts worth receiving.
