Taiwan Sign Language Interpreting: towards Professional Equality

Riccardo Moratto

Graduate Institute of Translation and Interpretation (GITI), National Taiwan Normal University (NTNU)

Supervisor: Professor Jung-hsing Chang

This thesis is presented for the degree of

Doctor of Philosophy

Graduate Institute of Translation and Interpretation

GITI

National Taiwan Normal University (NTNU)

December 2012


TABLE OF CONTENTS

Table of Contents ……………………………………………………………….. III

List of Tables, Figures and Appendices ………………………………………… VII

Abstract …………………………………………………………………………. VIII

Abstract in Chinese ……………………………………………………………... X

Statement of Candidate …………………………………………………………. XIII

Acknowledgments ………………………………………………………………. XIV

Chapter One Introduction

1.1 Introduction …………………………………………………………………. 1

1.2 Research Hypothesis ………………………………………………………... 5

1.3 Background and Rationale for the Study……………………………………. 6

1.4 General Method ……………………………………………………………... 8

1.5 The Anticipated Contribution of the Study …………………………………. 9

1.6 Organization of the Thesis ………………………………………………….. 10

Chapter Two Taiwan Sign Language

2.1 Introduction …………………………………………………………………. 16

2.2 A Diachronic Analysis of Taiwan Sign Language (TSL): A Historical Excursus of TSL ………………………………………………………………… 18

2.3 Diatopic and diachronic variation …………………………………………... 22

2.4 A Historical Journey Towards Dignity ……………………………………… 29

2.4.1 Language “Evolution”: from Hands to Mouth …………………………. 29

2.5 ………………………………………………………………… 35

2.6 …………………………………………………. 38

2.7 …………………………………………………………………. 41


2.8 ……………………………………………………………………… 44

2.9 Signed Chinese Vs. Natural Sign Language ………………………………... 45

2.10 Concluding remarks ……………………………………………………….. 49

Chapter Three TSL Interpreting

3.1 Introduction …………………………………………………………………. 51

3.2 TSL Interpreting History ……………………………………………………. 51

3.3 Status quo of TSL interpreters ……………………………………………… 59

3.4 Professional volunteers ……………………………………………………... 61

3.5 Conclusion ………………………………………………………………….. 63

Chapter Four Challenging areas in TSL Interpreting

4.1 Introduction …………………………………………………………………. 65

4.2 The Importance of Metaphors and Figurative Speech ……………………… 67

4.3 Diachronic Literature Review ………………………………………………. 70

4.4 Iconicity in Sign Languages ………………………………………………… 77

4.5 Metaphors in Sign Languages ………………………………………………. 79

4.6 Examples from TSL ………………………………………………………… 81

4.7 Conclusion ………………………………………………………………….. 94

Chapter Five Experiments

5.1 Introduction …………………………………………………………………. 96

5.1.1 Sign languages are real languages: neurolinguistic evidence ………. 100

5.1.2 A review of neurolinguistic research in simultaneous interpreting … 114

5.2 Qualitative and quantitative experiments …………………………………… 129

5.2.1 Qualitative pilot study: quality assessment ………………………….. 129

5.2.2 Quantitative pilot study ……………………………………………… 139

5.2.2.1 Participants ……………………………………………………. 139

5.2.2.2 Materials ……………………………………………………… 139

5.2.2.3 Tasks ………………………………………………………….. 140

5.2.2.4 Results ………………………………………………………... 141

5.2.2.5 Discussion …………………………………… 144

5.3 Concluding remarks ………………………………………………………… 146

Chapter Six Assessment and Evaluation in TSL Interpreting

6.1 Introduction …………………………………………………………………. 151

6.2 Assessment and Evaluation Literature Review ……………………………... 156

6.3 The Issue of Interpreting Quality …………………………………………… 158

6.4 Taiwan Sign Language Interpreting Assessment and Evaluation (TSLIAE), with an Emphasis on the Naturality Issue ………………………………………. 165

6.4.1 EIPA …………………………………………………………………... 166

6.4.2 TSLIAE ……………………………………………………………….. 175

6.4.3 The Issue of “Naturality”: Natural Sign Language (NSL) vs. Manual Sign Language (MSL) …………………………………………………………... 185

6.5 Tentative New TSLIAE (nTSLIAE) Model ………………………………… 190

6.6 Conclusion and limitations of this chapter ………………………………….. 194

Chapter Seven Conclusion

7.1 A Review of the Chapters …………………………………………………… 199

7.2 Concluding Remarks and Future Research ……………………………...... 203

7.3 Limitations of the Study …………………………………………………….. 205

References ………………………………………………………………………. 207

Appendix I ………………………………………………………………………. 241


Appendix II ……………………………………………………………………... 251

Appendix III …………………………………………………………………….. 268

LIST OF TABLES, FIGURES AND APPENDICES

Table Number 1 ……………………………………………………………….. 2

Table Number 2 ……………………………………………………………….. 72

Table Number 3 ……………………………………………………………….. 143

Table Number 4 ……………………………………………………………..… 168

Table Number 5………………………………………………………………... 175

Table Number 6 ……………………………………………………………..… 177

Table Number 7 ………………………………………………………….……. 178

Table Number 8 …………………………………………………………..…… 180

Table Number 9 ……………………………………………………………….. 181

Table Number 10 ………………………………………………………….…... 182

Table Number 11 ……………………………………………………………… 183

Table Number 12 ……………………………………………………………… 191

Figure Number 1…………………………………………………………….…. 34

Figure Number 2…………………………………………………………….…. 36

Figure Number 3……………………………………………………………….. 98

Figure Number 4…………………………………………………………..…… 104

Figure Number 5……………………………………………………………….. 105

Figure Number 6………………………………………………………..……… 106

Figure Number 7………………………………………………………….……. 110

Figure Number 8………………………………………………………..……… 122

Figure Number 9……………………………………………………………….. 164

Appendix I ……………………………………………………………….……... 241

Appendix II …………………………………………………………………...… 251

Appendix III …………………………………………………………………..… 268


ABSTRACT

This thesis was originally motivated by discussions with fellow Taiwan Sign Language (TSL) interpreters, by insights gained from government documents regulating the profession of sign language interpreters in Taiwan, and by my interest, as a conference interpreter, in exploring other fields of the same profession that have yet to be examined from an academic point of view.

According to the TSL professional interpreters interviewed in the process of writing this thesis, their professional status seems to be treated differently from that of their fellow oral interpreters. First of all, they are paid not by working day but by the hour, and their remuneration is considerably lower than that of oral interpreters, for various reasons. This is due to budget issues, but also to a deep-rooted attitude towards sign language interpreting which had never before been explored in any publication in Taiwan.

This study attempts to investigate general issues concerning the profession of sign language interpreters and focuses on some challenging areas. Furthermore, it provides a scientifically based academic framework for recognizing the equal professional status of sign interpreters and oral interpreters, not only in theory but also de facto. The hypothesis underlying one of the chapters of the study, namely chapter five, is that if TSL is indeed a language, and if the neurobiological efforts required to carry out the interpreting task, both oral and signed, are the same, then there is no reason for the two modally different categories of interpreters to be treated unequally.

The thesis includes a complete literature review of the most representative neurobiological studies aimed at proving that sign languages are natural languages in every respect. Furthermore, chapter five is dedicated to an experiment aimed at proving the intrinsic difficulty of sign language interpreting and the fact that the efforts underlying the modally different sign interpreting tasks are by no means inferior.

The thesis is divided into seven chapters. Each chapter is divided into sections, some of which are further divided into subsections.

The data gathered from this research, both in terms of the literature review and in terms of experiments and interviews, will contribute to enhancing interpreters' knowledge about their own profession and professional standing. This study is also the first dissertation ever on Taiwan Sign Language (TSL) interpreting-related issues. There have been theses and publications on TSL per se, but never on TSL interpreting. This is also one of the main contributions a Graduate Institute of Translation and Interpretation (GITI) can make. We also hope that this study will spur the government and the relevant institutional bodies to recognize that sign language interpreters should have the same rights as oral language interpreters, for instance the co-presence on stage of two colleagues alternating every twenty to thirty minutes, which is the ideal arrangement, yet not always the case for sign language interpreters.

The results of the study have implications for the sign language interpreting field in terms of research, pedagogy and practice, insofar as they raise interpreters' awareness of their own professional standing and of the rights attached to it. This seems to be a crucial deontological factor in discussions of interpreting-related rights.

Keywords: Taiwan Sign Language (TSL), interpreter, professional status, equality, assessment.


摘要

本文的撰寫動機來自於與臺灣手語傳譯人員之討論,亦來自於本人身爲口譯員對翻譯學的高度興趣,進而以本身對於翻譯的專業認知來探討翻譯學的相關領域:翻譯學有很多不同的類別,手語翻譯學為其中一種。據筆者所知,目前臺灣學術界尚未出版任何與手語翻譯學有關之論文。

從筆者撰寫論文的過程當中所訪談過的專業手語傳譯人員得知,他們與口譯翻譯人員的待遇並非相同。首先,手語傳譯人員的薪資不是以工作日而是以工時來計算;再者,手語傳譯人員的薪資比口譯人員低得很多。這可能牽涉到經費的問題,然而,台灣長久以來將手語傳譯者視為次等翻譯人員的這個情況則尚未在任何文獻裡面被討論過。

本研究從不同方面來探討手語翻譯員所遇到的問題,並著重在幾個具有挑戰性的領域。除此之外,本文提供以科學方法為根基的學術研究結構來分析並舉例說明,手語翻譯員該享有口譯翻譯人員所享有的尊重以及專業上的平等對待---理論上與實際上都應如此。本論文第五章中的假設指出,若台灣手語是一種自然的語言,以及神經語言學認為不同的語言系統之間的翻譯行為所需要用到的腦部理解組織與解構的生物機能是相同的,則口譯與手譯的專業翻譯行為不應有差別的待遇。

筆者試圖於本論文中囊括最具有代表性的神經語言學研究來證明手語確實是自然的語言,並且在第五章中提出假設,試圖用神經語言學的實驗來證明,從事手譯翻譯的過程所運用到的腦部解構與重組機能並非低於一般口譯的行爲。

本研究所蒐集的資料及數據,無論是在文獻綜述方面或是訪談方面都有助於提高手譯員對自己的專業形象。此外,本研究也是第一篇關於臺灣手語翻譯學相關研究的論文,其在台灣師大翻譯研究所亦可視爲主要貢獻之一。作者也希望政府和特教機構正視本論文所提出的重要議題和建議,讓手語翻譯人員在各種條件上享有與一般口譯員同樣的權利與待遇,例如每隔二十到三十分鐘有不同手語翻譯人員輪替進行翻譯。

最後希望本論文對手語翻譯界的學術研究、教學領域與實務均能帶來具體的影響,並有效地提升手語翻譯員專業的地位與形象。

關鍵詞: 臺灣手語 (TSL)、手語傳譯人員、專業身份、職業平等、翻譯品質評估。


STATEMENT OF CANDIDATE

I certify that the work in this thesis, entitled “Taiwan Sign Language Interpreting: towards Professional Equality”, has not previously been submitted for a degree, nor has it been submitted as part of the requirements for a degree, to any university or institution other than the Graduate Institute of Translation and Interpretation (GITI), National Taiwan Normal University (NTNU).

I also certify that the thesis is an original piece of research and it has been written by me. Any help and assistance that I have received in my research work and the preparation of the thesis itself have been appropriately acknowledged.

In addition, I certify that all information sources and literature used are indicated in the thesis.

Riccardo Moratto Student ID: 899250042 December 2012

xiii

ACKNOWLEDGMENTS

I would like to express my most profound gratitude to my supervisor, Professor Chang Jung-hsing, who has always been very supportive and helpful every step of the way.

My sincere appreciation goes out to all those who have contributed to the realization of this thesis and to the oral defense committee members: James H-Y. Tai, Shiaohui Chan, Antonella Tulli, and Tze-wei Chen.

I am infinitely indebted to all my Taiwanese and Italian Deaf friends who have, directly or indirectly, proved to be a priceless source of information. I thank all of you for your generous assistance, for your sense of humor, for your intelligence and for your direction. This would have truly been impossible without your help.

I am also sincerely indebted to my fellow colleagues, professional Taiwan Sign Language (TSL) interpreters. Given my background as an oral interpreter, I could not have done without their insights in the fruitful conversations that we have had in the past months. Thanks to each and every one of you, and most especially a heartfelt thank you to professional sign language interpreter Ginger Hsu for always being available to answer my questions, day and night.

I am extremely grateful to Professor Antonella Tulli for her constant support and to Shiaohui Chan, professor of neuro-linguistics, for her unconditional support and for her time in reviewing some of the chapters and her willingness to aid me in my exhaustive review of neurobiological studies.

It seems opportune to thank all the international scholars who have contributed, directly or indirectly, to enhancing the content of some of the chapters herein included.

Last but not least, I would like to thank Professor Tai for his inspiring and enlightening advice during the oral exam. Notwithstanding everybody’s help, feedback and/or reviewing, I alone am responsible for any inaccuracies, omissions or shortcomings in the present dissertation.

Once again, I would like to sincerely acknowledge the help provided by the many interpreters and interpreter educators, both Taiwanese and overseas, especially in Italy, whose enthusiastic contributions gave me further insight into sign language interpreting and into some of the issues discussed herein.

Finally, I am deeply grateful and indebted to my family and friends, who have been a mainstay throughout this Ph.D. journey. Without their encouragement, love and support, I probably would not have made it to the end.

CHAPTER ONE INTRODUCTION

1.1. Introduction

Taiwan Sign Language (TSL) is the language used amongst deaf communities in Taiwan. TSL developed from Japanese Sign Language during the period of Japanese rule, which is why it is considered part of the Japanese Sign Language family and has no direct relation with Chinese Sign Language (CSL), although there are loanwords from CSL, as will be mentioned later. TSL shares some similarities with both Japanese Sign Language (JSL) and Korean Sign Language (KSL); it has about a 60% lexical similarity with JSL (Fischer et al. 2010).1

There are many issues concerning TSL interpreters. Generally speaking, there are problems and inadequacies in the TSL interpreting system, and research on this issue is important because it aims to raise the dignity of TSL interpreters and the quality of the interpretation itself.

The Labor Affairs Department of the New Taipei City government regulates the services, requirements and remuneration of sign language interpreters.2

1 For detailed descriptive information on TSL, the reader can refer to the relevant literature (Ann et al. to appear, 2000, 2007; Brentari 2010; Chan and Wang 2009; Chang 2009; Chang et al. 2005; Chang and Ke 2009; Chen and Tai 2009a, 2009b; Chiu et al. 2005; Duncan 2005; Huteson 2003; Jean 2005; Lee et al. 2001; Myers et al. 2005, 2006; Myers and Tsay 2004; Myers and Tai 2005; Sasaki 2007; Shih and Ting 1999; Smith 2005; Su and Tai 2006, 2007, 2009; Tai 2005, 2006, 2007, 2008; Tai and Tsay 2009, 2010; Tai and Chen 2010; Tsai and Myers 2009; Tsay 2007, 2010; Wilbur 1987; Zhang 2007).

2 The document in question can be freely downloaded from the following address: www.labor..gov.tw by inserting the words 提供手語翻譯及視力協助服務人員資格及補助標準表.

For the benefit of the reader, here is the table which regulates the aforementioned services.3

Table 1

Category: General conferences or classes
Services provided: 1. Conferences or symposia; 2. Work training; 3. Complex interviews involving technical operation or technical testing; 4. Other TSL interpreting.
Qualification (first type): documents can be provided if one of the following qualifications is met: 1. Sign Language Interpreting Certificate or license for Sign Language Interpreter, with 200 hours of service fulfilled; 2. 200 hours of professional training approved or subsidized by the government and more than 200 hours of experience as a Sign Language Interpreter; 3. more than 400 hours as a sign language interpreter; 4. in possession of a technical certificate or license, but with fewer than 200 hours of interpreting.
Subsidy standards: 1. For those who comply with the first type, the hourly subsidy is 1000 NTD;4 for the second type, the hourly subsidy is 500 NTD. 2. For individual cases applying for sign language interpreting services, the highest monthly subsidy for every person is ten hours, and it cannot exceed 120 hours per year. For any special needs or requirements, subsidy increases can be considered.

Category: Easy interviews; communication and counseling in the workplace
Qualification (second type): documents can be provided if one of the following qualifications is met: 1. 200 hours of professional training approved or subsidized by the government and more than 100 hours of experience as a Sign Language Interpreter; 2. 200 hours of sign language interpreting service.

Category: Vocational training
Services provided: public and private training institutions, or units commissioned by the government to carry out vocational training, recruit classes of Deaf and speech-impaired students; each class will be provided with one sign language interpreter, whose daily remuneration will be at the very most 1500 NTD.
Qualification: in line with the first- and second-type qualifications.

3 The English translation is mine. The bold is also mine. 4 New Taiwan Dollar.

After reading this table, I decided to interview my fellow sign language interpreters to inquire into the reality of the market. As someone with a background in simultaneous conference (oral) interpreting, I discovered a rather different situation as far as sign language interpreters are concerned.

According to one of my sources, sign language interpreters, unlike oral interpreters, are paid by the hour and not per working day. She says that every sign language interpreter receives, at the very most, approximately 1600 NTD per hour (anonymous interpreter A, personal communication 2012), which is in line with the data presented in Table 1. On some rare occasions, sign language interpreters are not paid by the hour, depending on the importance of the event. For instance, in the interview she said that she was once paid 5000 NTD for a whole session (two hours) because the event was considered of the utmost importance; otherwise the pay is usually hourly, and most of the time it is only 1000 NTD per hour (interpreter A, personal communication 2012). At other times, low pay is not even the only problem, because in many cities most TSL interpreters cannot easily find interpreting jobs even if they hold a professional license; this is due to governmental budget restrictions.

When I heard this, I was quite surprised, considering that according to the official website of the World Association of Sign Language Interpreters (WASLI), the International Association of Conference Interpreters (AIIC) decided, by an overwhelming majority at the AIIC general assembly held in Buenos Aires in 2012, to open its doors to sign language conference interpreters, as a result of the close cooperation and fruitful discussions between AIIC, WASLI and the European Forum of Sign Language Interpreters (EFSLI). AIIC represents more than 3000 conference interpreters worldwide. WASLI and EFSLI, for their part, promote the professional interests of sign language interpreters. The three associations share professional concerns such as ethics, advocacy, working conditions, recognition, training and professional development. The main goal is to put sign languages on an equal footing with oral languages within the world of conference interpreting, including working hours, working conditions and remuneration, a parity not yet respected in the world of TSL interpreting in Taiwan.

Simultaneous interpreting, irrespective of modality, is a very complex skill that requires intensive and appropriate training and practice. Successful interpreters rely on many skills in their everyday work. The development of these skills is not intuitive or automatic, nor is it modality-dependent. Simultaneous interpreting must be developed through a careful sequence of learning activities, which starts from a perfect inter-lingual and intra-lingual command of both working languages.

Isolating specific skills and learning them one at a time is the best approach to acquiring complex new skills, as it allows mastery of individual skills and a feeling of success. Gaining control over components of the interpretation process can assist in developing simultaneous interpreting skills, because appropriate practice helps to “routinize” and “automatize” these complex skills. The skills that make up the simultaneous interpreting process are generally not used in isolation and must be synthesized correctly in order to render an interpretation.

These general observations hold equally for oral language and signed language interpreting, because the neurobiological mechanisms and efforts underlying these processes are modality-independent.

Sign interpreters in different countries may be treated and paid differently, and there are also governmental budget restrictions to be taken into consideration; however, sign language interpreters should be treated on a par with their fellow oral interpreters, all the more so now that sign languages have officially become part of AIIC's official languages. The aforementioned observations, along with the fruitful discussions with professional interpreters, will be further investigated in the course of the thesis, along with other problems and inadequacies related to sign language interpreting, in order to raise the dignity of professional sign language interpreters and the quality of the interpretation itself. The underlying hypothesis, rationale, organization and anticipated contribution of the research will be outlined in the following sections.

1.2 Research hypotheses

This research study hypothesizes that the efforts which underlie bimodal interpreting, that is to say, oral-to-sign and sign-to-oral interpreting, are not inferior to those which underlie unimodal interpreting, i.e. oral-to-oral interpreting.

Each chapter in this thesis addresses different aspects of TSL and TSL interpreting. Broadly, it is hypothesized that the same neurobiological mechanisms are activated during oral-to-oral and oral-to-sign interpreting, and that Taiwan Sign Language interpreters (TSLIs) should therefore be treated in the same way as oral interpreters, in terms of working conditions and remuneration.

It will be possible to apply this information in governmental guidelines and regulations stipulating the deontological code and the general code of conduct of Taiwan Sign Language interpreters (TSLIs).

1.3 Background and rationale for the study

The International Association of Conference Interpreters (AIIC - Association internationale des interprètes de conférence) has a very rigid code of ethics and a set of professional standards that interpreters should abide by.

AIIC liaises with a number of international organizations (e.g., the EU and the United Nations) and negotiates the working conditions for all of their interpreters, including non-members. The goals of AIIC are to secure acceptable working conditions for interpreters, to ensure professional interpretation, and to raise public awareness of the interpreting profession, including sign language interpreting, which is increasingly used in many fields. Frishberg (1986) reports that sign language interpreters are called upon to interpret with increasing frequency in commercial settings, whether for employers and employees or for interlocutors who are on a more equal footing.

Given these premises, it seems opportune to raise public awareness of the importance of interpreters, irrespective of modality. Some people might take the importance of interpreters as cultural mediators for granted, but in many fields, such as sign language interpreting, this is far from being the case.

Many people have to strive to receive government subsidies for something that should be rightfully theirs, irrespective of budget restrictions. Moreover, TSL interpreters are treated differently in different cities, an issue which deserves to be mentioned. For example, different cities might have different budget restrictions or even different pay (personal communication 2012). Much also depends on how close Deaf5 people are to signers: at times, simply to keep a personal relationship on good terms, and in the hope of being assigned more interpreting tasks in the future, some signers might even decide to do their job for free.

It seems opportune to handle the issues related to sign interpreting as they are herein presented because different points of view are needed: both that of Deaf people and that of interpreters. From the perspective of the Deaf community, we will analyze issues directly linked with TSL, such as diachronic and diatopic variation. From the perspective of the interpreters, they should become more and more specialized; that is why the issue of quality and of performance evaluation seems crucial and will be further investigated in the present dissertation. This in turn will also raise the dignity of interpreters, both from a professional point of view and from a behavioral-empirical point of view, as demonstrated in chapter five.

This research therefore aims to lay the groundwork for a scientifically based academic discussion not only of the importance of sign language interpreters but also of their professional status, which should be on a par with that of oral language interpreters, both in terms of working conditions and remuneration, because according to Holly Mikkelson, “analysis of the different types of interpreting shows that regardless of the adjective preceding the word ‘interpreter,’ practitioners of this profession the world over perform the same service and should meet the same standards of competence.”6

5 Throughout the whole thesis, the word Deaf is capitalized whenever it refers to a specific, self-defined cultural group, with a common history and language.

The issue of interpreting fees in the world of sign language has always been vague and obscure. Even in the Code of Professional Conduct of the Registry of Interpreters for the Deaf, established in 1964, under the tenet according to which interpreters are to maintain ethical business practices, we read that interpreters ought to charge fair and reasonable fees for the performance of interpreting services and arrange for payment in a professional and judicious manner, without further explicating the issue and without operationalizing the definition of “fair and reasonable fees”.

However, this year, 2012, AIIC has decided to open its doors to sign languages, as previously mentioned. This has finally set some clear-cut professional standards for interpreters to follow. In Taiwan, however, this does not seem to be the case since, as we will delineate in the course of the thesis, working conditions (such as the mandatory presence of a co-worker), the fact that interpreters should be paid by working days or half-days, and even the professional interpreting fees per se are far from abiding by AIIC international standards.

1.4 General method

This thesis is built around research hypotheses and research questions which we attempt to answer by both qualitative and quantitative methods. By using qualitative methods, we aimed to gather an in-depth understanding of the phenomenon under study, namely TSL interpreting.

6 http://aiic.net/page/3356 (accessed September 2012).

The samples of interpreters and Deaf people used were in line with the principle according to which, in qualitative research, smaller but focused samples are more often needed than large samples (Denzin and Lincoln 2005). At the same time, quantitative methods were used to seek empirical support for the research hypotheses.

Broadly speaking, a number of research methodologies were used, such as data collection, interviews, surveys, and experiments. As far as the subjects are concerned, we invited a total of ten participants to take part in the study: Taiwanese-born Deaf people and professional sign language interpreters, native speakers of , and professional oral interpreters as the control group, five for each category. The participants were duly paid for their willingness to contribute to the research. The materials used in the replication of Gile's experiment in the fifth chapter differ from those used in the original experiment, in order to adapt them to the target language and culture. As for the tasks in the experiment, they will be outlined in detail in section 1.6, “Organization of the Thesis”.

1.5 The anticipated contribution of the study

This thesis is the first of its kind in Taiwan, insofar as it addresses interpreting issues related to TSL. In the past, there have been many theses on TSL, focusing on individual aspects such as TSL morphology, lexemes, semantics, syntax, etc.

However, to the author's best knowledge, no one has ever focused on issues concerning TSL interpreting, a pressing matter considering the increasing needs of the market. In other words, this is the first dissertation ever on Taiwan Sign Language (TSL) interpreting-related issues. There have been theses on TSL per se, but never on TSL interpreting. This seems to be the main contribution a Graduate Institute of Translation and Interpretation (GITI) can provide.

Furthermore, we hope that the academic nature of the present study will encourage the government to revise the regulations stipulating the remuneration and working conditions of sign language interpreters, whose interpreting efforts, according to our research hypotheses, do not differ from those involved in unimodal, i.e. oral-to-oral, interpreting.

1.6 Organization of the thesis

The thesis is divided into seven chapters. The first chapter is a general introduction to the research questions, the hypotheses and the expected results. It is divided into six sections, namely the introduction, the research hypotheses, the background and rationale for the study, the general method applied in the study, the anticipated contribution of the study, and the organization of the thesis.

The body of the thesis is conceptually divided into two main parts. The first part is made up of chapters one and two, which focus on Taiwan Sign Language (TSL), whilst chapters three through seven focus on TSL interpreting. The second chapter is an introduction to Taiwan Sign Language, which has to be duly presented before discussing TSL interpreting issues. It can be read as a diachronic analysis of TSL, and one of its sections is subtitled “a historical journey towards dignity”, because it emphasizes the efforts that the Deaf community, along with linguists and international scholars, has made to have the linguistic dignity of sign languages recognized around the world. This chapter covers a historical excursus of TSL and delves into TSL diatopic and diachronic variation, including an interview comparing the older generation of signers with the younger one, plus a discussion of how interpreters deal with lexical items that have different signings when interpreting. It is therefore important for interpreters to have a linguistic background. For example, it is essential for the interpreter to be aware of the different geographic variations so that s/he can not only understand different forms of signing but also adapt his or her own signing to the interlocutor's geographic and social background. The remainder of the chapter is dedicated to issues such as the question of language “evolution” (from hands to mouth), cued speech, manually coded language, lip reading, oralism, and Signed Chinese vs. natural sign language, which are important and relevant to the present dissertation from the Deaf community's point of view, as previously mentioned. These issues will be further emphasized in the TSL interpreting evaluation chapter, by underlining the fact that the text used during the exams is sometimes Signed Chinese and not Natural Sign Language, which complicates the TSL interpreting evaluation process.

The second conceptual part of the thesis is more directly linked with interpreting issues. In the third chapter, the history of TSL interpreting is introduced. A group of TSL interpreters has been surveyed to ascertain whether the precarious and unprofessional conditions dictated by the government are indeed such. Under the hypothesis that they are, the rest of the research is aimed at proving my thesis, i.e. that bimodal interpreters should share the same professional dignity as oral interpreters. The second section of the third chapter is an analysis of TSL interpreting history. The fourth section is titled “professional volunteers”. This title is a pun: it reflects the almost voluntary nature of the work of professional TSL interpreters nowadays, considering the straitened conditions in which they work, and it also offers a window of reflection on many other sectors I have personally come into contact with, where professionals are in effect volunteers.

The final part of the third chapter underlines the importance given to professional evaluation after many years of sign language interpreting history, not only in Taiwan but also abroad (cf. Williams 2004), an importance that will be further emphasized in the chapter dedicated to TSL interpreting assessment and evaluation.

Chapter four further explores some challenging areas of TSL interpreting, such as figurative speech and metaphors, which will have to be taken into consideration in the evaluation process. This chapter also aims to show that the effort underlying sign language interpreting is at the basis of the necessity of turn-shifting on stage while interpreting at a sign language event.

Chapter five offers an exhaustive review of the neurobiological studies that prove that TSL is indeed a natural language and not a human construct. Furthermore, this chapter brings together two experiments, namely a qualitative pilot study and a quantitative pilot study, the latter of which demonstrates the complicated nature of the TSL interpreting process; this will have to be taken into consideration in the evaluation process discussed in the following chapter. The chapter is focused on the tightrope hypothesis experiment, along with the review of two neurobiological studies concerning the bilingual brain in bimodals, which can also be applied to sign language interpreters, seen as bimodal bilinguals. In the present chapter, I replicate Daniel Gile's Effort Model tightrope hypothesis experiment, applied this time to TSL interpreting. According to Gile's Effort Model, the so-called 'competition hypothesis' can be represented in the following way: the total processing capacity consumption TotC associated with interpreting at any time is represented as a 'sum' (not in the pure arithmetic sense) of consumption for L(anguage), consumption for M(emory) and consumption for P(roduction), with further consumption for 'coordination' (C) between the Efforts, that is, the management of capacity allocation between the Efforts:

(1) TotC = C(L) + C(M) + C(P) + C(C)

(2) C(i) ≥ 0, i = L, M, P

(3) TotC ≥ C(i), i = L, M, P

(4) TotC ≥ C(i) + C(j), i, j = L, M, P and i different from j

(Equation (1) represents the total processing capacity consumption; inequality (2) means that each of the three Efforts requires a non-negative amount of processing capacity; inequalities (3) and (4) mean that total consumption is at least as large as the consumption of any single Effort or of any pair of Efforts.)
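Purely as an illustrative sketch, and not as part of Gile's own formulation, the relations above can be expressed as a small computation. The function names and all numeric consumption values below are hypothetical and chosen only to make the formulas concrete:

```python
# Illustrative sketch of Gile's Effort Model capacity relations.
# The numeric values are hypothetical; only relations (1)-(4)
# mirror the formulas given in the text.

def total_capacity(c_l, c_m, c_p, c_c):
    """Equation (1): TotC as the 'sum' of the four consumptions."""
    return c_l + c_m + c_p + c_c

def satisfies_constraints(c_l, c_m, c_p, c_c):
    """Check inequalities (2)-(4) for the three main Efforts L, M, P."""
    efforts = {"L": c_l, "M": c_m, "P": c_p}
    tot = total_capacity(c_l, c_m, c_p, c_c)
    # (2) each Effort consumes a non-negative amount of capacity
    if any(c < 0 for c in efforts.values()):
        return False
    # (3) TotC is at least as large as any single Effort's consumption
    if any(tot < c for c in efforts.values()):
        return False
    # (4) TotC is at least as large as any pair of Efforts combined
    pairs = [("L", "M"), ("L", "P"), ("M", "P")]
    return all(tot >= efforts[i] + efforts[j] for i, j in pairs)

# Under the tightrope hypothesis, TotC approaches the interpreter's
# total available capacity (here normalized to 1.0).
tot = total_capacity(0.35, 0.30, 0.25, 0.05)   # hypothetical values
print(round(tot, 2), satisfies_constraints(0.35, 0.30, 0.25, 0.05))
```

With these invented values the interpreter is consuming 95% of the available capacity, i.e. working near saturation, which is exactly the situation the tightrope hypothesis describes.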

Now, the idea that most of the time interpreters, irrespective of the modality, work near saturation level is the so-called 'tightrope hypothesis', which this experiment aims to prove for sign language interpreters. The tightrope hypothesis is crucial in explaining the high frequency of errors and omissions that can be observed in interpreting even when no particular technical or other difficulties can be identified in the source speech (Gile 1989). The precise aim of this investigation is to establish, in a sample of professionals interpreting a speech, whether there are indeed errors and omissions (e/o's) affecting segments that present no evident intrinsic difficulty. If there are, it is likely that they can be explained in terms of processing capacity deficits such as those predicted by the Effort Models (EM). The underlying rationale of this study is the following:

One indication of the existence of such e/o’s would be the variability in the segments affected in the sample (at the level of words or propositions).

If all subjects in the sample fail to reproduce the same ideas or pieces of information adequately, this suggests the existence of an intrinsic 'interpreting difficulty' of the relevant segments (too specialized, poorly pronounced, delivered too rapidly, too difficult to render in the target language, etc.). Another indication could come from an exercise in which each subject is asked to interpret the same speech twice in a row. Having become familiar with the source speech during their first interpretation, subjects can be expected to correct in their second version many e/o's committed in their first version. If, notwithstanding this general improvement from the first to the second target-language version, it were possible to find new e/o's in the second version in segments that were interpreted correctly the first time, this would be an even stronger indication that processing capacity deficits are involved.

The method used will be the same as Gile's: target speeches will be videotaped and transcribed, and the transcriptions will be scanned for errors and omissions. This method is not without pitfalls, notably the high inter-rater variability in the perception of what is and what is not an error or omission. To avoid these pitfalls, only instances of what appear to me as flagrant errors or omissions will be included in the analysis, and at least two further opinions from other sign language interpreters will be requested to confirm that the e/o's I identified were also considered e/o's by them, so as to preserve validity by reducing the probability of 'false positives' (mistaking text manipulations considered acceptable by the subjects for e/o's). The analysis will then proceed by trying to determine: (a) how many subjects in the sample made an e/o for each affected speech segment, and (b) which e/o's were corrected in the second version of the target speech.
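The tallying procedure just described, namely counting how many subjects made an e/o on each segment and identifying which first-version e/o's were corrected or newly appeared in the second rendition, can be sketched in a few lines. The subject labels, segment identifiers and sets below are invented purely for illustration:

```python
# Hypothetical sketch of the e/o tally described above.
# e_o[subject][version] = set of segment IDs with an error/omission.
from collections import Counter

e_o = {
    "S1": {1: {"seg03", "seg07"}, 2: {"seg07"}},
    "S2": {1: {"seg03"},          2: {"seg05"}},
    "S3": {1: {"seg09"},          2: set()},
}

# (a) how many subjects made an e/o on each segment (first version)
per_segment = Counter(seg for subj in e_o.values() for seg in subj[1])

# (b) e/o's corrected in the second version, and new e/o's appearing
corrected = {s: subj[1] - subj[2] for s, subj in e_o.items()}
new_in_v2 = {s: subj[2] - subj[1] for s, subj in e_o.items()}

print(per_segment["seg03"])   # 2 subjects missed seg03 in version 1
print(corrected["S1"])        # {'seg03'}: corrected by S1 in version 2
print(new_in_v2["S2"])        # {'seg05'}: new e/o in S2's second version
```

Segments missed by several subjects in the first version point to intrinsic difficulty, whereas e/o's appearing only in the second version on previously correct segments are the kind of evidence the tightrope hypothesis predicts.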

Therefore, without resorting to fMRI or other neurolinguistic techniques, the high detection threshold used here for defining e/o's, intended to reduce the number of 'false positives' to the largest possible extent, means that other phenomena that could have been used to measure cognitive load were not exploited. In particular, no attempt will be made to look at borderline cases or at the deterioration of linguistic output quality. If the low sensitivity of the tool makes it impossible to obtain convincing findings, more sensitive tools will have to be designed, and reliability could then become a problem. Examples will be provided in the relevant chapters.

The original idea was to strengthen the case for the tightrope hypothesis and thus lend some support to the Effort Models as a conceptual tool to explain the cognitive-constraints-based limitations not only of oral interpreters but also of TSL interpreters, and, in Gile's words, to give some credibility to the idea that the usefulness of a concept or model in scientific exploration is not necessarily a function of its degree of sophistication. The findings of this study are nevertheless very interesting: they do not necessarily and irrefutably prove that the effort involved in bimodal interpreting is greater than in unimodal interpreting, but they do prove the intrinsic difficulty of sign language interpreting. The due explanations will be provided in the relevant chapter.

Chapter six is focused on assessment and evaluation parameters in Taiwan Sign Language Interpreting (TSLI), with an emphasis on the naturality issue. I propose how TSL interpreting should be assessed and evaluated, based on the interpreting challenges discussed, the experiments carried out and the other reflections presented.

The seventh and final chapter is a conclusion, divided into the following parts: a review of the chapters, some final recommendations and suggestions for further research, along with some concluding remarks and an emphasis on the limitations of the study.


CHAPTER TWO Taiwan Sign Language

2.1 Introduction

Language is at the basis of human communication. Languages may be defined as natural outputs of socially constructed codes. According to emeritus professor of linguistics at Oxford University Roy Harris (Harris 1988), linguistics has taught us that language is no longer regarded as peripheral to our grasp of the world we live in, but as central to it. Words are not mere vocal labels or communicational adjuncts superimposed upon an already given order of things. They are collective products of social interaction, essential instruments through which human beings constitute and articulate their world. This typical twentieth-century view of language has profoundly influenced developments throughout the whole range of human sciences.

It is particularly marked in linguistics, philosophy, psychology, sociology and anthropology.

Irrespective of their modality, languages develop naturally within a community of users. Therefore, the notion of language, which in the past strictly referred only to oral languages, has duly been extended also to sign languages.

Sign languages emerge spontaneously among their users, which is why it is incorrect to perceive sign languages as oral languages spelled out in gestures, or to regard the hearing pioneers in the education of the Deaf as the "inventors" of sign languages: sign languages are independent of oral languages and follow their own paths of development. This is also proven by the fact that British Sign Language (BSL) and American Sign Language (ASL) are mutually unintelligible for historical reasons, that ASL is much closer to French Sign Language (FSL, or LSF in French), and that ASL syntax resembles modern spoken Japanese more than it does spoken English. Taiwan Sign Language (TSL), which is the object of our research, is very similar to Japanese Sign Language (JSL), because it developed from JSL during the period of Japanese rule in Taiwan.

According to Fischer et al. (2010), TSL has some mutual intelligibility with both JSL and Korean Sign Language (KSL), and it has a 60% lexical similarity with JSL. The reason underlying the lexical similarity with KSL is that Korea was also occupied by Japan, from 1910 to 1945.

This goes to show that the development of sign languages is separate from that of oral languages. Some countries, such as South Africa, with up to eleven official languages, have only one official sign language, with perhaps a couple of variants (anonymous interpreter B, personal communication, 2012).

Natural languages constantly change: their phonetic, morphological, semantic, syntactic and other features may vary over time. Here, we will focus on the diachronic development of TSL and the changes it has undergone.

According to Smith (2005), TSL is used by approximately thirty thousand signers on the island of Taiwan, and although its lexicon and syntax closely resemble those of JSL and KSL, as previously mentioned, in the last few decades it has been influenced by Chinese Sign Language (CSL) and by Hong Kong Sign Language (HKSL) because of the phenomenon of language contact.

Language contact occurs when two or more languages or varieties interact. According to Hadzibeganovic et al. (2008), language contact can occur at language borders, between adstratum languages,7 or as the result of migration, with an intrusive language acting as either a superstratum or a substratum.8

7 An adstratum or adstrate (plural: adstrata or adstrates) refers to a language which is equal in prestige to another.

2.2 A diachronic analysis of Taiwan Sign Language (TSL): a historical excursus of TSL

The earliest information regarding TSL dates back to the period 1895-1945, when the first schools for the Deaf were founded, during the Japanese rule. Before that, there must have been a local variety of sign language that indigenous Deaf people used, but unfortunately not much is known about the pre-occupation period. The only remnants of this earlier variety can be found in some city names, such as the signs for TAINAN and KAOHSIUNG.9 The sign TAINAN appears as a combination of the signs TAIWAN and PLACE. As Smith (2005:2) explains, "originally, the name Taiwan referred only to the environs of the present-day city of Tainan, which literally means 'Tai(wan)-south'. The name of the city was changed to Tainan in the 1800s, before the start of the Japanese occupation, so the sign may be a holdover from the signs of pre-occupation Taiwan". Another example is the name of the city of Kaohsiung, which is a blend of the signs for DOG and HARBOR, because the city was then known by the name of Dakau ('strike the dog'). The aboriginal people used to call the city Takau. Later, the Japanese maintained the pronunciation but changed the characters: by way of a lexical borrowing based on the sound, in Japanese it became Takao, which was then transcribed with the characters 高雄, read in Mandarin Chinese as Kaohsiung (Gāoxióng in pinyin10).

8 When one language succeeds another, the former is termed the superstratum and the latter the substratum. 9 Signs are always capitalized. 10 The official system for transcribing Chinese characters into the Latin alphabet in the People's Republic of China.

During the Japanese rule, one school was founded in Taipei and the other in Tainan, and there were not many exchanges between the two. This caused the development of two different diatopic varieties of TSL, which will be discussed in the next section.

For historical and political reasons, after World War II the two schools began to communicate more, because they both came under the jurisdiction of the provincial government of Taiwan (Smith 2005). During the Japanese rule, however, teachers were mainly Japanese, invited to come from Japan to teach in the schools in Taiwan. Most of the teachers at the Taipei school came from Tokyo, while the ones in the Tainan area were from Osaka, and they brought along the diatopic differences intrinsically present within JSL.

Later, in 1945, when Taiwan was turned over to the Republic of China, instruction in Mandarin began and the phenomenon of language contact began to sow its seeds. Although most of the Japanese teachers were sent back home, some of the Taiwanese teachers remained in the two schools, along with the signs they had learned to use during the Japanese rule. This is important because the new generation of Mandarin-speaking teachers was instructed in JSL.

Wensheng Lin, a deaf man educated in Tokyo, became the new principal of the school for the Deaf in Taipei, and "he passed on Taipei's Tokyo signs to a new flock of Chinese teachers" (Smith 2005:3). The same thing happened in Tainan, where Tiantian Chen started training teachers in the Osaka signs that his school had been using thus far.

In the late forties, people started to migrate from mainland China to Taiwan to take refuge from the Communist Party. Amongst these refugees there were several deaf people who were former teachers in the schools for the Deaf of Nanjing and Shanghai. Some of them were even hired to teach at the Taipei school for the Deaf, like Wang Zhenyin, a deaf man from Nantong, Jiangsu, who started working at the Taipei school for the Deaf in 1948, bringing along signs from CSL.

However, as Smith (2005) duly points out, other CSL signs may have been introduced into TSL through another channel, namely by graduates of the Private Chiying Elementary School for the Deaf and Mute in Kaohsiung, which originally used a dialect of CSL: the principal was from Nantong, and when he established his school in Taiwan in 1950, he brought along his own CSL idiolect.

Over the years, signers have coined new signs for scientific or academic purposes, to meet instructional demands. It is also possible that some signers do not sign well, for a plethora of reasons: perhaps because they were not raised with sign language or because they did not receive a good education. It is interesting to see what interpreters do in these situations. Usually, no matter how good their performance and interpretation skills are, interpreters should always adjust their signing to the interlocutor; in other words, interpreters provide a service and their ultimate goal is that the message be conveyed. For this reason, it is important that interpreters talk with their Deaf interlocutor before any interpretation task, to understand what kind of signing the Deaf person is used to and also his or her linguistic level, because it would be useless to sign too fast or in too complicated a manner: if the Deaf person does not understand, the purpose of the service fails.

Another influential figure worth mentioning is Fang Bingmei, a graduate of the Nanjing School for the Blind and Mute. At first she was sent to work in the Tainan school, but she later transferred to Taipei. This means that she not only brought her own CSL idiolect to Tainan, but also contributed to the exchange of signs between the two main diatopic varieties of TSL in Taiwan, which will be delved into in the next section, along with the issue of diachronic variation (Smith 2005).


2.3 Diatopic and diachronic variation

In linguistics, variation is the term used to refer to the appearance of lexical units in different forms, and it is a phenomenon that exists in all languages, both oral and signed. There are four different types of variation:

diatopic variation, which is variation according to place, or geographical variation; for example, the Taipei school vs. the Tainan school;

diachronic variation, or variation through time, also called historical variation; in other words, how a language changes over time;

diastratic variation, or variation according to social class or to the social group to which a speaker feels they belong; in diglossic situations,11 diastratic variation often appears in the transition from the formal or higher level to the socially more informal levels, as in the case of creoles;

finally, diaphasic or 'stylistic' variation, or even individual variation (idiolect); this is more difficult to characterize clearly, especially for those creoles that lack a sharp description. Once again, though, it would be necessary to carry out surveys in these cases to confirm that geographical or sociological factors are not contributing to one or other of these choices; this is known as variation analysis.

11 In linguistics, the term diglossia refers to a situation in which two, usually closely related, languages or varieties are used by a single language community. In addition to the community's everyday or vernacular language variety (labeled "L" or "low" variety), a second, highly codified variety (labeled "H" or "high") is used in certain situations such as literature, formal education, or other specific settings, but not for ordinary conversation. In other words, diglossia is a relatively stable language situation in which, in addition to the primary dialects of the language (which may include a standard or regional standards), there is a very divergent, highly codified (often grammatically more complex) superposed variety, the vehicle of a large and respected body of written literature, either of an earlier period or in another speech community, which is learned largely by formal education and is used for most written and formal spoken purposes but is not used by any section of the community for ordinary conversation (Ferguson 1959).

Obviously, this internal variation within a language or dialect, sometimes called intralinguistic variation, should not be confused with interlinguistic variation. In the present section, we are going to closely examine diatopic and diachronic variation within TSL.

TSL can be divided into two main varieties, one centered on the Tainan school, which we could call the southern variety, and one centered on the Taipei school, which we are going to call the northern variety. As previously mentioned, the first school for the Deaf in Taiwan was established in 1915, in Tainan, and the second school in Taipei, two years later, in 1917 (Smith 2005).

In the years of the Japanese rule, namely 1895-1945, there was not much communication between the two schools, which favored the crystallization of the two varieties. Later, after World War II, both schools came under the jurisdiction of the provincial government of Taiwan, and in this period the two varieties began to come into contact with each other.

A third variety can actually also be distinguished within TSL, namely the one centered on the Taichung school; however, the sign language used by this school was essentially the same as that used in the Tainan school (Fischer 2010).

As for diachronic variation, it is variation through time, also called historical variation; in other words, how TSL has changed over time. To try to answer the question of how individual use of TSL has been changing over time, we carried out a behavioral study. We recruited six deaf people and divided them into two groups according to their age range: the elderly group was made up of people aged 70 to 80, whereas the members of the younger group were all under 35 years of age, so as to ensure two completely different generations of signers. In recruiting these signers, we were very careful to eliminate any extraneous variable that might influence the results, so we tried to control the variables related to diatopic, diastratic and diaphasic variation by choosing people coming from the same socio-geographical background and with the same level of education. We asked our participants, who were all duly paid for their availability, to start signing to each other as if we were not there. After about thirty minutes, a reasonable amount of time to smooth out differences due to the signers' idiolects, we proceeded with interviews aimed at eliciting the differences in sign language use that each group perceived in the other. According to the results, most differences were at the semantic level and in word choice. Although the gist of the communication was not jeopardized or compromised, it was interesting to see that the elderly had more problems understanding some of the signs used by the younger generation, probably because of a lack of exposure: the elderly generation has been living in a linguistic shell compared to the younger generation, which has directly or indirectly come into contact with a plethora of sign varieties.
It was interesting to see that, as with oral languages, inter-generational changes concern not so much syntax, which takes longer periods of time to change, as lexemes, which are influenced by TV, the new media and a form of hybridization. The most interesting aspect that emerged was the fact that the younger generation is used to chatting with foreign Deaf people via webcam on Skype. Foreign Deaf people obviously cannot use TSL, and Taiwanese Deaf people cannot necessarily use the sign language of their friends' countries, so what they usually do is resort to so-called International Sign Language (ISL) to communicate. Some ISL signs have already permeated young Taiwanese deaf people's slang, whilst they are still perceived as foreign signs by the elderly.

After analyzing some of the differences due to the generation gap, we tried to investigate how sign language interpreters cope with lexical variety. Different people may sign the same lexeme differently according to various factors, namely their age, their education, and their social and geographical background.

A sign language interpreter, just like an oral interpreter, is a trained professional who knows most of the variants of the same sign. It is inevitable, though, that on the spot some signs may either be forgotten or never have been encountered before.

For this reason, we interviewed three different interpreters to inquire into the strategies used in the field to cope with lexical variety; in other words, how professional interpreters, like the ones we interviewed, deal with lexical items that are signed differently when they interpret.

The results of all three interviews with professional interpreters can be summed up as follows.

Q: When you are carrying out a sign interpreting task, it is inevitable to encounter people who are used to signing with different signs compared to the ones used by the interpreter. This may be due to geographical reasons (northern variety vs. southern variety), the generation gap (younger people vs. the elderly), etc. Generally speaking, how do you handle these lexical differences? (It goes without saying that if the interpreter knows the variant used by the signer, then no problem will arise. What we are interested in is finding out what strategies interpreters actually use when they have never encountered that variant before.)

Interpreter A: Well, things are much easier if the interpreter actually knows what a given sign means. If that is not the case, the interpreter should always accommodate the Deaf interlocutor, meaning s/he should always use the sign the Deaf interlocutor is more inclined to recognize or more accustomed to using. However, if the Deaf person uses a sign the interpreter has never seen before, the ideal strategy would be to interact directly with the interlocutor and ask him or her to repeat it or to explain what the given sign means. After making sure there is no lexical discrepancy, the interpreter should keep on using the sign the Deaf person is accustomed to. If it is a context where it is basically impossible to interact with the Deaf signer, for example an international conference where the Deaf participant is signing on stage, then things might be a little more complicated. It can be summed up by saying that if the Deaf interlocutor does not understand, the interpreter should use the sign(s) the Deaf participant is accustomed to; if it is the interpreter who does not understand, the best thing is to ask directly, and if it is not possible to ask, the meaning should be inferred from the context.

Usually, when Deaf people are on stage and sign to an audience of equally Deaf participants, they sign at a supersonic speed, so it is inevitable for the interpreter to miss out on something. Most of the time, the signer will take a look at the interpreter and see whether s/he needs to slow down. If it is a lecture, it is mandatory for the interpreter to request the script of the lecture beforehand or, if it is not available, the interpreter should at least communicate a bit with the Deaf signer to get used to his or her way of signing. Otherwise, it is a very risky situation: most sign interpreters would never dare translate at a conference where they have to interpret a signer they have never seen or met before, because sign-to-oral interpreting is a very arduous task, and the diatopic differences within TSL are many, which means that it is very normal to encounter lexical variants interpreters do not know how to sign. The important thing is to understand the gist, the core of the message. If that is lost, the best thing is not to translate. Interpreters can always ask Deaf signers to sign more slowly, to repeat, or honestly say that they do not understand, which means that usually in sign interpreting there is more interaction amongst the participants than in oral interpreting. What Deaf people are most afraid of is finding interpreters who translate according to their own ideas because they do not have the courage to say that they have missed out on something. There is not a single interpreter who could claim to understand perfectly each and every sign the Deaf participant signs on stage, for all the factors listed so far. The main point is not to digress or to make up the whole speech, in a holistic-pragmatic approach. Interpreters should practice their interpreting skills, but also their guts. In this respect, it does not differ from oral interpreting, and interpreters should especially not think that no one is ever going to find out about their mistakes, because this kind of behavior is not permitted by the interpreters' deontological code.

2.4 A historical journey towards dignity

Sign languages are languages to all effects, as proven by a plethora of neurobiological studies. This statement almost sounds like a bromide. However, in the history of the development of sign languages around the world, it has taken a lot of effort by linguists and neurobiologists to give them their well-deserved status and dignity, and in some countries this is still not the case. For example, although the international scientific community has amply proven that sign languages are on a par with oral languages, TSL interpreters in Taiwan do not share the same status as other interpreters, as illustrated in the first chapter. A key problem is who pays for interpreters of spoken languages and who pays for interpreters of sign languages: in the first case, it is usually some big private enterprise convening a conference to discuss certain issues, whereas sign language interpreters are paid directly by the Deaf person who needs the service, by way of an application for government funding, which is in turn subject to budget restrictions.

In some countries, sign languages have obtained some form of legal recognition, while others have no status at all. Batterbury (2012) has argued that sign languages should be recognized and supported not merely as an accommodation for the disabled, but as the communication medium of language communities.

Sign languages, as natural languages, have always existed, wherever they were needed by a community of deaf users anywhere on the planet. Sign languages are not languages constructed around a table by a group of linguists to enhance Deaf people's lives. One of the earliest written records of a sign language dates from the fifth century BC, in Plato's Cratylus, where Socrates says: "If we hadn't a voice or a tongue, and wanted to express things to one another, wouldn't we try to make signs by moving our hands, head, and the rest of our body, just as dumb people do at present?" (Bauman 2008).

The first attempts to educate Deaf people according to oral language teaching methods date back to 1620, when Reducción de las letras y arte para enseñar a hablar a los mudos ('Reduction of letters and art for teaching mute people to speak') was published in Madrid. It is considered the first modern treatise on phonetics and speech therapy, setting out a method of oral education for deaf people by means of manual signs, in the form of a manual alphabet, to improve the communication of mute or deaf people.

In the 18th century, the renowned Charles-Michel de l'Épée published his manual alphabet, which has remained essentially the same in France and North America. In 1755, he founded the first school for deaf children in Paris. Later, one of his most famous graduates, Laurent Clerc, went to the United States with Thomas Hopkins Gallaudet to found the American School for the Deaf in Hartford, Connecticut, in 1817 (Canlas 2006).

Gallaudet's son, Edward Miner Gallaudet, founded a school for the Deaf in 1857 in Washington, D.C., which in 1864 became the National Deaf-Mute College. Now called Gallaudet University, it is still the only liberal arts university for deaf people in the world.

However, the historical journey towards full linguistic dignity has been long and tortuous. In the next section, we will delve further into the issue.

2.4.1 Language “evolution”: from hands to mouth

Linguists and scientists have focused for centuries on the issue of the origin of language, without reaching any consensus on its ultimate origin or age. According to some scientists, the emergence of new sign languages in modern times — Nicaraguan Sign Language, for example — might offer insights into the developmental stages and creative processes necessarily involved (Kegl et al. 1998).

There are several hypotheses and theories regarding the origin of language. Here, we will focus on the so-called gestural theory of language origins.

The gestural theory states that human language developed from gestures, that is, from hand movements used for simple communication.

According to Premack and Premack (2000), two types of evidence support this theory.

(a) Gestural language and vocal language depend on similar neural systems: the regions of the cortex responsible for mouth and hand movements border each other.

(b) Nonhuman primates can use gestures or symbols for at least primitive communication, and some of their gestures resemble those of humans, such as the "begging posture", with the hands stretched out, which humans share with chimpanzees.

A famous example is the gorilla Koko who, according to her longtime personal trainer Francine "Penny" Patterson, is able to use more than 1,000 signs based on American Sign Language and understand approximately 2,000 words of spoken English (Wise 2003).

Research has found strong support for the idea that verbal languages and sign languages depend on similar neural structures. Patients who used sign language and who suffered from a left-hemisphere lesion showed the same disorders in their sign language as speaking patients did in their oral language (Kimura 1993). Other neurobiology researchers found that the same left-hemisphere brain regions were active during sign language use as during the use of vocal or written language (Newman et al. 2002). The problem is that some people have used these findings to speak of a form of evolution, relegating sign(ed) languages to a status inferior to that of oral languages.

According to the gestural theory of language origin, at some point in the past there was a shift to vocalization. Some of the explanations provided by the gestural theory are as follows:

(a) Our ancestors started to use more and more tools, meaning that their hands were occupied and could no longer be used for gesturing (Corballis 2002).

(b) Manual gesturing requires that speakers and listeners be visible to one another. In many situations, they might need to communicate even without visual contact — for example after nightfall or when foliage obstructs visibility.

(c) A composite hypothesis holds that early language took the form of part gestural and part vocal mimesis (imitative 'song-and-dance'), combining modalities because all signals (like those of apes and monkeys) still needed to be costly in order to be intrinsically convincing. In that event, each multi-media display would have needed not just to disambiguate an intended meaning but also to inspire confidence in the signal's reliability. According to Knight (2008), the suggestion is that only once community-wide contractual understandings had come into force could trust in communicative intentions be automatically assumed, at last allowing Homo sapiens to shift to an ultra-efficient, high-speed — digital as opposed to analog — default format. Since vocal distinctive features (sound contrasts) are ideal for this purpose, it was only at this point — when intrinsically persuasive body language was no longer required to convey each message — that the decisive shift from manual gesture to our current primary reliance on digitally encoded spoken language occurred (Knight 1998, 2000, 2008b).

Humans still use hand and facial gestures when they speak, especially when people meet who have no language in common (Kolb and Whishaw 2003). There are also, of course, a great number of sign languages still in existence, commonly associated with Deaf communities; it is important to note that these sign languages are equal in complexity, sophistication, and expressive power to any oral language — the cognitive functions are similar and the parts of the brain used are also similar. The main difference is that the "words" are produced on the outside of the body, articulated with hands, body, and facial expression, rather than inside the body, articulated with tongue, teeth, lips, and breathing.

The gestural theory of language origin finds evidence in neurobiology thanks to the discovery of mirror neurons, which are activated both when an action is executed or imitated and when the same action is merely observed. Hence, mirror neurons represent a mechanism capable of coupling the execution and observation of actions. At the same time, mirror neurons respond selectively to transitive movements, i.e. interactions with an object, such as grasping, holding, manipulating, and releasing (Gallese, Fadiga, Fogassi and Rizzolatti 1996; Rizzolatti and Arbib 1998).

Kohler et al. (2002) also demonstrated that mirror neurons can represent the same action across different modalities. This implies that mirror neurons can represent the intended purpose and meaning of an action, regardless of whether an animal has directly executed that action or has simply seen or heard it (Chiu 2006).

Mirror neurons in monkeys and the homologous Broca’s area in humans provide a neurobiological link for the hypothesis that communication based on manual gestures preceded speech in language evolution (Rizzolatti et al. 1998). Mirror-neuron systems in humans can respond to pantomimes (Buccino et al. 2001; Grezes et al. 2003) and to intransitive actions (Fadiga et al. 1995; Maeda et al. 2002). From transitive and intransitive actions to pantomimes, the signals become more and more abstract.12

The concept of evolution, however, should not lead us to believe that there is a progression in quality from sign languages to oral languages. As previously mentioned, sign languages are equal in complexity, sophistication, and expressive power to any oral language — the cognitive functions are similar and the parts of the brain used are similar. This should not be taken for granted, because for many centuries people have tried, and in some places are still trying, to indoctrinate the Deaf community according to the cultural values of the oral speakers residing within that given community, thus disrespecting the linguistic rights of the users of any given natural sign language. They have gone so far as to create a phonemic-based system of communication which makes traditionally spoken languages accessible by using a small number of handshapes13 (representing consonants) in different locations near the mouth (representing vowels), as a supplement to lipreading. This is known as the Cued Speech system and will be further explained in the next section.

12 For further information regarding the origins of human communication, readers can refer to Tomasello (2008).
13 For TSL handshapes, see Fig. 1.

Figure 1 TSL handshapes (Smith and Ting 1979)14

14 A more complete and detailed figure is in Appendix III, taken from Chang, Su and Tai (2005: 260-263).

2.5 Cued speech

Scholars and others in favor of cued speech argue that it is merely a system to enhance the reading comprehension and reading ability of Deaf children. It is a method of making spoken sounds visible. The hypothesis underlying this system is that if all phonemes looked different on the lips of the speaker, in other words if lip reading were easier than it actually is, language acquisition would be unaltered for the Deaf: they would learn language through the visual channel instead of the auditory one.

The problem is that many sounds cannot be distinguished by lip reading alone, like /b/ and /p/ or /d/ and /t/. Therefore, in 1966, Dr. R. Orin Cornett at Gallaudet College, Washington D.C., invented this system, called cued speech.

Cued speech is neither a sign language nor a manually coded language. It is simply a manual modality of communication for representing English, or any other language for that matter, at the phonological level. Although originally designed for English, it was soon adapted to other languages, and in time it has also been adapted to tonal languages like Chinese and Thai, where the tone is indicated by the inclination and movement of the hand. Tammasaeng (1986) investigated the effect of cued speech on the tonal perception of the Thai language by Deaf children in Thailand. The study shows that cued speech can clarify the tonal characteristics of such a language.

Fig. 2 is a picture from the University of Florida representing the Cued Speech system used for American English.


Fig. 2

Image from the University of Florida

Basically, to cue a word, the cuer should do the following:

(a) Divide the word into a sequence of consonant-vowel (CV) pairs.

(b) For each CV pair, place the consonant handshape at the vowel placement while pronouncing the corresponding consonant-vowel pair. Note that:

o Consonants with no following vowel are cued at the side placement.

o Vowels with no preceding consonant are cued with handshape 5.

Other rules to keep in mind:

(a) At the side placement:

o /oe/ and /ah/ require a 1-inch forward movement.

o /uh/ requires a downward movement (between 1/2" and 3/4").

(b) Consecutive identical cues require that the placement be touched twice (at the side, a "flick" is used to represent two touches).
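The pairing logic above can be sketched programmatically. In the following sketch, the handshape and placement tables are invented for illustration (real Cued Speech charts assign each consonant one of eight handshapes and each vowel one of four placements); only the CV-pairing rules follow the steps listed above, and the movement and "flick" rules are not modeled.

```python
# Sketch of the cueing procedure described above.
# CONSONANT_HANDSHAPE and VOWEL_PLACEMENT are HYPOTHETICAL reduced
# tables, not the real Cued Speech charts.

CONSONANT_HANDSHAPE = {"b": 4, "t": 5, "k": 2, "m": 5}        # invented subset
VOWEL_PLACEMENT = {"a": "chin", "ee": "mouth", "oo": "chin"}  # invented subset

SIDE = "side"             # consonants with no following vowel are cued here
LONE_VOWEL_HANDSHAPE = 5  # vowels with no preceding consonant use handshape 5

def to_cv_pairs(phonemes):
    """Step (a): split a phoneme sequence into consonant-vowel pairs."""
    pairs, i = [], 0
    while i < len(phonemes):
        p = phonemes[i]
        if p in CONSONANT_HANDSHAPE:
            following = phonemes[i + 1] if i + 1 < len(phonemes) else None
            if following in VOWEL_PLACEMENT:
                pairs.append((p, following))   # full CV pair
                i += 2
            else:
                pairs.append((p, None))        # consonant with no following vowel
                i += 1
        else:
            pairs.append((None, p))            # vowel with no preceding consonant
            i += 1
    return pairs

def cue(phonemes):
    """Step (b): map each CV pair to a (handshape, placement) cue."""
    cues = []
    for consonant, vowel in to_cv_pairs(phonemes):
        handshape = CONSONANT_HANDSHAPE[consonant] if consonant else LONE_VOWEL_HANDSHAPE
        placement = VOWEL_PLACEMENT[vowel] if vowel else SIDE
        cues.append((handshape, placement))
    return cues
```

For instance, cue(["b", "a", "t"]) yields one cue at the chin for the /ba/ pair and one at the side for the trailing consonant /t/.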

It seems opportune to distinguish clearly between cued systems and sign languages. The latter have their own syntactic, phonetic and lexical patterns, which cued speech does not try to substitute. All in all, it seems that phonological awareness can improve reading ability, as shown by Ostrander (1998), who reviews research consistently showing a link between lack of phonological awareness and reading disorders (Jenkins and Bowen 1994) and discusses the research basis for teaching cued speech as an aid to phonological awareness and literacy.

To sum up, cued speech is a phonemic-based system of communication which makes traditionally spoken languages accessible by using a small number of handshapes (representing consonants) in different locations near the mouth (representing vowels), as a supplement to lipreading. It is fundamentally different from manually coded languages which, on the other hand, try to substitute natural sign languages. This issue will be developed in the next section.


2.6 Manually coded language

Manually Coded Languages (MCLs) can also be referred to as Signed Oral Languages (SOLs). They are an attempt to transpose the syntax and morphological structures of oral languages into a gestural-visual form. Many people perceive them as a surrogate form of signed languages: in other words, as the desirable alternative to a seemingly a-grammatical and a-syntactic natural signed language.

Sign languages actually have their own spatial structures, with their own specific syntax. Unlike them, MCLs have not evolved naturally; they are the invention of hearing speakers, and this is reflected in their grammar, which closely follows that of oral languages in their written form. In the past, MCLs were detrimentally used in the education of the Deaf, and also by some early sign language interpreters who believed in the superiority, as it were, of oral languages. Back then, sign languages had not yet acquired their dignity as natural languages.

Sometimes, MCLs are also referred to as Grammatical Sign Languages (GSLs) as is the case in Taiwan, where it is known as wenfa shouyu (文法手語), although the proper term would be Signed Chinese (Myers and Tai 2005).

MCLs have been opposed by many, namely by the so-called oralists and by those who fought to see sign language rights recognized. The first category, the oralists, is made up of people who believe that the education of deaf students should be carried out through oral language with the help of lip reading techniques, speech, and mimicking of mouth shapes, as in cued speech. The oralists have been very active since Épée's time. The second category is made up of all those scholars, researchers, and interpreters who have been fighting for sign languages to enjoy a status similar to that of oral languages and to share the same linguistic dignity.

As a matter of fact, instructors often use MCLs incompletely and inconsistently in classrooms; as a result, students suffer from an incorrect command of written English, or Chinese for that matter, as well as an inability to sign their own native sign language properly (Kluwin 1981; Marmor and Pettito 1979).

Following are some examples of incorrect Chinese, at points not even understandable, found in the writing of a native Taiwanese signer who was raised by his instructors with the use of wenfa shouyu. We can say that it belongs to wenfa shouyu because the structure is not natural Chinese, yet it does not belong to Taiwan Sign Language either.

超級愈來愈好玩一次,藍天感覺好舒適,超級非常游泳池的瘋狂。今天冰淇淋盤非常好香好吃很棒..,為什麼看的時候很年輕,當然要去地方景風更好美,認識介紹對我這麼好,幽默的聊天交流,蓋亂還不錯的感覺,請辛苦帶我真的感謝其他,要離開回台灣,只有韓國留的回憶,希望自己非常快樂!我也不要看愛中,快看很昏倒。

One point should be very clear: MCLs are not auxiliary sign languages, i.e. complete representations of an oral language; they are merely used to represent the written form of the language.

Going back in time, the first to develop a gestural-visual system to represent the written form of a language was the French Abbé de l'Épée, in the 18th century. It goes without saying that the Deaf community back then had its own natural sign language, which in linguistics is known as Old French Sign Language; for Épée, however, it was a primitive, not fully fledged system of communication. He wanted to teach the Deaf the noble concepts of religion and philosophy, and he thought the best way to do so was to convey those concepts, pertaining to the written language, through a gestural-visual channel, so that they could all understand. He called these signs signes méthodiques, translated as Methodical Signs.

MCLs paved the way for the proliferation of other signed oral languages later on.

The degrading thing is that even interpreters used these unnatural forms of sign language in the 18th, 19th and even part of the 20th century to communicate with the Deaf community (Woodward and Allen 1988).

In conclusion, MCLs, also known as SOLs, attempt to represent, word for word, the written form of a given oral language; thus they need to develop and create an enormous number of new signs, especially for all those grammatical signifiers that would be rendered differently in natural sign languages. Most of the time, the core lexicon is taken from already existing signs, and ad hoc grammatical signs are then added for words and word endings that do not exist in the natural sign language.

Lacking the naturalness of any sign language, MCLs require special skill in finger spelling and lip reading, which also complement the newly created lexicon; lip reading will be further explored in the next section.

2.7 Lip reading

Lip reading, also known as speech reading, is an acquired skill which enables Deaf people, with hearing aids, to enhance their understanding of what is being said by reading the lips, making out the phonemes uttered according to the position of the lips, face and tongue of the interlocutor.

In everyday communication, lip reading is a very common, and often subconscious, phenomenon. Whenever our communication is obstructed by an outside factor, lip reading can enhance language comprehension. This was analyzed by McGurk and MacDonald (1976) in their groundbreaking article “Hearing Lips and Seeing Voices”. They discovered the so-called McGurk effect by accident when, while conducting a study on how infants perceive language at different developmental stages, McGurk and his research assistant, MacDonald, asked a technician to dub a video with a different phoneme from the one spoken. When the video was played back, both researchers heard a third phoneme rather than the one spoken or mouthed in the video. Basically, the McGurk effect is a perceptual phenomenon that demonstrates an interaction between hearing and vision in speech perception: the illusion occurs when the auditory component of one sound is paired with the visual component of another sound, leading to the perception of a third sound (Nath and Beauchamp 2011).

Unfortunately, though, lip reading is not the answer to all problems, because many phonemes cannot be distinguished on the lips, let alone in a language like Chinese, which is very rich in homophones. According to some Deaf people I interviewed, whose native language is TSL, lip reading is not a very efficient information-retrieval mechanism, because in Chinese it is extremely difficult to retrieve every syllable uttered in a sentence just by looking at the interlocutor’s lips.

Chinese has thirteen vowels, like ㄜㄛㄢㄤㄥㄣ, twenty-one consonants, like ㄅㄆㄇㄈㄉㄊㄋㄌ, and three semi-vowels, ㄧㄨㄩ. Furthermore, Chinese is a tonal language, which means that every word varies in meaning according to the tone used. Without any hearing aid, lip reading is practically impossible in Chinese, because the tones cannot be conveyed in any way, and many consonants, like the alveolar ones, look alike on the lips. Other difficulties include the fact that not everyone positions the lips in the same way when speaking, not to mention speech velocity. According to my interviewees, most people are not inclined to repeat what they have already said; information is thus lost, leading to misinterpretations of the message.

An example in the English language is given by the author Henry Kisor, who titled his book What's That Pig Outdoors?: A Memoir of Deafness in reference to mishearing the question "What's that big loud noise?". He used this example in the book to discuss the shortcomings of lip reading.

Lip reading may be more efficient if combined with the previously mentioned cued speech; one of the arguments in favor of the use of cued speech is that it helps develop lip reading skills that may be useful even when cues are absent, i.e., when communicating with non-deaf, non-hard of hearing people.

Lip reading has been largely used in the oralist school, i.e. in the education of deaf students through oral language, a phenomenon known as oralism, which has been the major cause of the slow recognition of natural sign languages within Deaf communities around the world and the reason underlying the slow “evolution” of natural sign languages towards linguistic dignity.

Indeed, the education of the deaf has always consisted of two main approaches: manualism and oralism. Manualism is the education of Deaf students using their native sign language; oralism is the education of deaf students using oral language.

Since the beginning of the 18th century, these two philosophies have been on opposing sides of a heated debate that continues to this day, although many modern deaf educational facilities attempt to integrate both approaches.

In general, oralism has traditionally been perceived, erroneously, as the factor which could help Deaf people move up the evolutionary ladder, from their hands to their mouth. Given the importance of this pedagogic method, and the negative consequences it has had on the mental and linguistic development of many Deaf people, it seems opportune to analyze this teaching approach further in the next section.


2.8 Oralism

As previously mentioned, oralism is the education of deaf students through oral language by using lip reading, speech, and mimicking the mouth shapes and breathing patterns of speech instead of using sign language within the classroom.

In the course of history, many people have opposed the oralist tradition because it was perceived as the dark ages of deaf education (Winefield 1987).

Leaders of the manualist movement, including Edward M. Gallaudet, argued against the teaching of oralism because it restricted the ability of deaf students to communicate in what was considered their native language, namely sign language.

Apart from the personal beliefs of scholars and people supporting the manualist or oralist school, it seems opportune to note that the method of communication used can affect the social and educational aspects of a deaf individual's life (Schirmer 2000). It is therefore a choice which will inevitably influence the Deaf person’s life, and it has to be taken seriously.

The oralist method bans any residual form of sign language or finger spelling; communication is based entirely on facial expressions, body language, residual (remaining) hearing, speech reading, and speech, which means that it takes a considerably long time for the Deaf person to master all these techniques.

The oralist school seems to be connected to the Aristotelian view according to which deaf individuals could not reason because they could not talk, as if the ability to talk elevated Deaf people on the evolutionary ladder.

Oralism became widely used in the mid-1500s, when a Spanish monk, Ponce de Leon, began to educate Deaf students through oral communication for religious purposes.

Supporters of oralism believe it is the only way for Deaf people to be fully integrated into the society and the education system they live in. In other words, oralists select a communication option that will allow Deaf individuals to excel in education and conform to the societal norm.

According to some researchers, like Schwartz (1996) and Stone (1997), the oral communication method provides benefits both in education and in assimilation; because of these fundamental values, oralists declare it to be the most beneficial communication method for Deaf individuals.

On the other hand, those who support sign language believe that the feeling of belonging to a culture is very important from a socio-emotional point of view. They also claim that it is a way to exercise human rights, insofar as sign language is perceived as the native language of the Deaf community, whose members thus have the right to speak their own language.

We believe that in medio stat virtus: Deaf people have the sacred right to use their own native language, just like any other person on earth. However, we believe it is also important for Deaf signers to grow accustomed to techniques such as lip reading, in order to feel more integrated within society, because they will not always encounter people who can sign or be able to benefit from the services of an interpreter.

The most crucial issue is that Deaf people see their right to use their own native language freely respected. The same debate goes on within sign language itself, namely between grammatical sign language and natural sign language, which is the last step towards the full acceptance and recognition of natural sign languages and their linguistic dignity.


2.9 Signed Chinese vs. natural sign language

In a recent international handbook published by Pfau et al. (2011), there is an interesting historical perspective on sign language linguistics. According to the handbook, research in the past 50 years has proven that sign languages are indeed independent natural languages with well-formed and complex grammatical systems, no less compact and developed than those of spoken languages. Simply put, natural languages exist in two distinct modalities: the visual-manual modality of sign languages and the auditory-oral modality of spoken languages.

From a diachronic point of view, the history of sign linguistic research can be divided into three periods. In the first, researchers focused on the underlying identity between spoken and signed languages. Determined to prove the linguistic status of sign languages against what most people believed to be merely rough, underdeveloped pantomime and gesture, early sign linguists de-emphasized the role of iconicity in sign languages. The sign language most investigated in this period was ASL. As a consequence, there was little typological research.

The second phase started in the 1980s. Linguists and researchers started investigating the issue of modality along with similarities and differences between sign(ed) and spoken languages. Their aim was mainly to analyze the influence of modality on linguistic structure, modality-specific properties of signed and spoken languages, and modality-independent linguistic universals, as well as psycho- and neuro-linguistic processes and representations. Starting from the observation that sign languages seem to be typologically more homogeneous than spoken languages, many grammatical properties of sign languages were related to specific properties of the visual-manual modality.

However, in the first two phases, i.e. the early and modern periods, research was mainly focused on the comparison of sign languages and spoken languages, whilst cross-linguistic studies on sign languages were quite rare.

Only once non-Western sign languages began to be studied did it become clear that sign languages show more variation than originally predicted. The third phase, which started in the late 1990s and is known as the post-modern period, approached sign language typology more seriously. Today, we can observe an increasing interest in comparative and experimental studies on sign languages at all linguistic levels, and in less studied (Western and non-Western) sign languages, such as the object of our study, TSL (Taiwan Sign Language).

TSL, just like any other sign language, can be divided into manual sign language, a.k.a. manually coded language (Signed Chinese), and natural sign language. While manual sign language can be defined quite accurately, with several renowned major approaches, it seems more problematic to define the properties of natural sign languages with the same accuracy.

Manual sign languages are representations of spoken languages in a gestural-visual form. In other words, they can be defined as a “sign language” version of spoken languages. These languages are not natural, insofar as they were invented by hearing people and strictly follow the grammar of the written form of oral languages; they have not evolved naturally in Deaf communities. In the past, manual sign languages were mainly used in Deaf education, thus causing major trauma in the development of deaf children’s native language, and by past generations of sign language interpreters. It goes beyond the purpose of this thesis to provide a historical excursus of the genesis and development of manually coded languages, or to delve into the controversies between the French oralist school, back in Épée's time, and the manually coded language system. Suffice it to say that the emerging recognition of sign languages in recent times has curbed the growth of manually coded languages, and in many places interpreting and educational services now favor the use of the natural sign languages of the Deaf community. As far as TSL is concerned, the situation is slightly different, for historic and geographic reasons.

Languages, both signed and oral, are alive, and thus change over time. As previously mentioned, TSL is closer to JSL (Japanese Sign Language) than to CSL (Chinese Sign Language) for historical reasons: when Taiwan was occupied by the Japanese, they brought along their language, both oral and signed.

In Taiwan Mandarin Chinese, especially among the older generations, these traits are still present. The same happened with signed languages. However, in 1973, according to local sign language interpreter B (personal communication, 2012), the national government wanted to unify all the different varieties of signed language present on the island of Taiwan. Back then, the two most renowned schools were in Taipei and Tainan. Officials and linguists decided to preserve characteristics from both schools and to fuse them together with manually coded language traits. In other words, the two systems, natural vs. manual sign language, started merging after 1973, so that it seems hard, if not impossible, to separate one from the other in a linguistically purist way.

According to informant C (personal communication, 2012), this fusion is irretrievable, to the extent that some first-generation interpreters are “too” natural, i.e. they use a sign language that young Deaf generations are no longer familiar with, as proven by our minor inter-generational experiment, previously reported.

2.10 Concluding remarks

According to the communicative principle, the important thing is that the message comes across flawlessly to the Deaf interlocutor/evaluator, irrespective of whether during an exam a candidate uses only natural signs or a mixture of natural and manual signs, which is also how interpreters cope with lexical variation.

If one had to identify factors which together could define the main properties of natural sign language as opposed to manual sign language, I would say that the following could be taken into consideration: word order, linear vs. simultaneous structure, and prominence of facial expressions. The list is not meant to be exhaustive. The aforementioned factors can be defined as the conditio sine qua non for the “naturality of sign”.

In linguistic typology, word order defines sentence structure: if a language is S-V-O, the subject comes first, the verb second, and the object third. Languages may be classified according to the dominant sequence of these elements. Though important for classification, word order does not seem to be the most important factor in the definition of natural Taiwan Sign Language, insofar as it appears flexible and elastic depending on which element of the sentence the signer emphasizes, and because where there is agreement, word order does not seem to be that rigid. On the other hand, the linear vs. simultaneous structure seems to be much more defining. Natural sign languages, including TSL, make ample use of the space around the body and surrounding the hands, especially for locative verbs and comparison structures. For instance, if one signs the sentence “the cat eats the mouse” word by word (sign by sign), it is an obvious manual representation of the concept to be conveyed, because in natural sign language both elements (subject and object) would be signed in different locations in the space surrounding the signer, and the verb would proceed from the agent towards the patient in a “3D pattern”. This seems quite natural because, from a cognitive point of view, grammar may be defined as the attempt to describe the interaction of the participants, so it seems rational to first establish who the participants are, so that the verb ends up in the last place.

The third factor is facial expressions, which, as previously mentioned, convey the grammatical parts of speech. All these elements, which are to be found in every natural sign language, will be taken up again in the fifth chapter, the one dedicated to assessment and evaluation in TSL interpreting.

All the aforementioned discussions are related to my topic because signers have different linguistic backgrounds, and interpreters need this background knowledge in order to communicate well with each of them.

The issues herein put forth will be further emphasized in the TSL interpreting evaluation chapter by underlining the fact that the text used during the certification exams is sometimes Signed Chinese and not natural sign language, which complicates the TSL interpreting evaluation process (Tai, personal communication, 2012).

This chapter was an introduction to Taiwan Sign Language (TSL) which had to be duly mentioned before talking about TSL interpreting issues. The next chapter will introduce TSL interpreting and related issues.

CHAPTER THREE TSL INTERPRETING

3.1 Introduction

In the last chapter we focused on Taiwan Sign Language (TSL), which had to be introduced before discussing TSL interpreting issues.

In this chapter, we focus more closely on TSL interpreting and some related issues, such as the dignity of TSL interpreters; later in the thesis we will also address the issue of interpreting quality itself. We start by providing some information on TSL interpreting and its history in Taiwan.

3.2 TSL interpreting history

The main task of sign language interpreters is to help Deaf people communicate with hearing people and vice versa. Historically, some interpreters also relayed telephone calls and other communications, rendering spoken language into sign language or into written text.

However, sign language interpreting is a relatively new phenomenon, as sign languages have gone through a process of standardization only recently and all different types of technology for Deaf communication have evolved gradually, yet exponentially.

In Italy and France, standardized sign languages developed as early as the 18th and 19th centuries. In America, a standardized sign language emerged when French Sign Language was brought to the United States in the 1800s; it then evolved into a different, independent, new signed language, known today as American Sign Language (ASL) (Gallaudet, 1888). These processes of sign language standardization paved the road for future interpreters.

In Taiwan, this process of standardization began during the period of Japanese rule (1895-1945), when the first schools for the Deaf were founded. Before that, there must have been a local variety of TSL that indigenous Deaf people used, but unfortunately not much is known of the pre-Japanese occupation period.

Some of the first sign language interpreters were operators at telecommunications relay services, because these services prompted the development of the required technologies. Operators relayed phone calls, reading the messages that the Deaf typed to their hearing friends on text telephones (TTY) or telecommunications devices for the Deaf (TDD). These rudimentary devices, invented in the 1960s, were telephones for the Deaf that included keyboards, allowing Deaf individuals to type messages and send them over telephone wires. Early interpreters in the late 1900s helped hearing people who did not own TTYs or TDDs by translating these typed messages into spoken language, and vice versa.

With the advent of the second millennium, in 2002, the first national video relay service for the Deaf was launched. This service allowed Deaf individuals to sign to an interpreter using a web camera, a huge improvement on the previous TTYs and TDDs, as video relay allowed Deaf people to use ASL to communicate. The interpreter then translated the ASL into English for the hearing person on the other end of the call. Interpreters fluent in Spanish were also able to translate from ASL to Spanish, which is called cross-interpreting.

An important aspect of sign language interpreting has always been the work environment. Hence, researchers have recently been studying the work environment of sign language interpreters in order to improve it.

In 2008, a study by the Rochester Institute of Technology (RIT) revealed that sign language interpreting causes more physical stress, such as carpal tunnel syndrome and tendinitis, than assembly line work. When interpreters become mentally stressed, the risk of injury increases, as wrist movements increase in acceleration and velocity by 15 to 19 per cent.

The afore-mentioned findings support the thesis advanced herein, namely that the efforts (both physical and psychological) of sign language interpreters are in no way inferior to those of oral interpreters, and that this should be recognized by the competent authorities regulating their working conditions.

As interpreters are needed to maintain a connection with the Deaf community, RIT professor and researcher Matthew Marshall intended this research to improve the ergonomics of sign language interpreting, so as to keep interpreters working without injury.

Nowadays, sign language interpreters are often freelancers who work part-time.

In the United States, interpreters may gain certification from the National Association of the Deaf and the Registry of Interpreters for the Deaf. As video relay services become increasingly popular, the demand for ASL interpreters should increase even more, according to the Bureau of Labor Statistics.

In Taiwan, sign language interpreters may gain certification from ad hoc bodies which are supervised by the Chinese National Association of the Deaf (中華民國聽障人協會).

According to a professional interpreter (informant D, personal communication, 2012), in Taiwan there is no professional school or training institute, let alone a university department, where TSL interpreting is officially taught; the only courses presently offered are set up and organized by the Taipei City Bureau of Labor Affairs. According to the same source (Nieh, personal communication, 2012), the first Taiwan Sign Language Interpreting Certificate Exam was organized by the Social Affairs Bureau, in the Department of Social Welfare, of the Taipei City Government.

At present, these certifications are organized by the same body that regulates the TSL courses offered in Taiwan, i.e. the afore-mentioned Bureau of Labor Affairs.

In the last ten years, apart from offering courses in TSL, linguists and sign language interpreters have also devoted considerable effort to compiling TSL interpreting training materials.

The ultimate goal was to train future interpreters to learn as many signs as possible; at first, however, training materials were mainly collections of signs compiled for the reader to learn and remember.

The Bureau of Labor Affairs, the entity regulating and compiling these materials, mainly wanted to help the Deaf in their careers by providing an opportunity to have interpreters available. Therefore, with the help of specialists, linguists and sign language interpreters, the first volume was compiled in 2001. This first training book was entitled "Taipei City Sign Language Interpreting Training Material (臺北市手語翻譯培訓教材, Taibei shi shouyu fanyi peixun jiaocai) – First Volume". The Bureau hoped that, in time, the stress that overwhelmed the Deaf community when it came to communication, and the obstacles they faced, could be relieved thanks to the formation of professional figures: TSL interpreters.

These training materials evolved, becoming more and more complete and accurate. Thanks to years of teaching and experience, more books were compiled, covering sign etymology, the history of the development of TSL, issues regarding the culture of the Deaf community and their language, detailed explanations of TSL grammar, important points in translation, and some parallels (both cultural and linguistic) between oral languages and signed languages.

At the same time, in order to make the learning of sign language hand positions and movements more efficient and expedient, the Bureau compiled the first revised edition of the training material in 1996. Thanks to digital technology, the book was accompanied by modern digital media, such as CD-ROMs, which could enhance the learning process. Translations of brief speeches or longer paragraphs were also included as a way to illustrate grammar more easily.

Basically, the first courses in interpreting were provided at the beginning of the 1990s (Smith 2005). "A concentrated effort was undertaken by a number of different groups to provide training to those individuals who desired to become sign language interpreters, for which there was and still is a pressing need" (Smith 2005: 14).

Back then, the National Association of the Deaf of the Republic of China (中華民國聾人協會 1994) compiled the first book of student reflections at the end of one semester of interpreting training. The opening words of the book, titled "Rescue our Mother Language – Natural Sign Language (自然手語)", were written by Ku Yushan, the then president of the association. Ku wanted Taiwan Natural Sign Language, i.e. the language naturally used by the Deaf community in Taiwan, to be recognized as an official minority language, just like the many aboriginal languages of Taiwan, which enjoy their own status and linguistic dignity. According to Smith (2005), Ku's main criticism of the publications produced thus far by the government of Taiwan was that they were compiled by hearers, that is to say, by non-native users of TSL.

Later on, in 1997 and 1999, four more books were published by Yuping Chao (Chao 1997a, b, c; 1999); they were the result of the joint efforts of a team guided by a Deaf individual, Chao himself. The work was divided into four volumes. Chao has also worked for the government, alongside hearing people in the Ministry of Education, to aid in the development of signs for instructional purposes, and has also served as president of the National Association of the Deaf of the Republic of China.

These four volumes are interesting because they show an evolution of teaching methods in sign language pedagogy. The first three are basically an accumulation of signs presented in line drawings and accompanied by Chinese and English translations, descriptions of the signs, cultural and grammatical information, sign language equivalents of Chinese idioms, and so forth (Smith 2005). The fourth is somewhat unusual in that the signs are presented with actual photographs of signers. Furthermore, the same volume is also enriched by paragraphs on the history of sign language in general, the history of TSL, TSL compound signs, TSL sentence structure, and name signs. Chao also compiled other books in 1999 and attempted to compile the first sign language book for children, presenting basic signs accompanied by cartoon drawings suggestive of the meaning of each sign.

As previously mentioned, starting in those years, the official courses in TSL interpreting or, generally speaking, TSL interpreter training programs, were sponsored by the Taipei City Government's Bureau of Labor Affairs. The people who organized these courses, together with external coordinators such as Wayne Smith in 1998 (Smith 2005), developed a series of lessons that were later published in two volumes by the Bureau of Labor Affairs of the Taipei City Government under the title 手語翻譯培訓教材 (Shouyu Fanyi Peixun Jiaocai) in 2001 and 2002. The accompanying CD-ROMs presented not only the signs and conversations but also interpreting exercises, namely practice in sign-to-voice and voice-to-sign interpreting.

In the last few years, the courses, which are still offered by the Bureau of Labor Affairs, have been multiplying, given the increased need for professional interpreters both in the private market and in the volunteer sector, as we will mention later in this chapter.

On a final note, training schools also play a crucial role in raising future interpreters' awareness of the sign language interpreting code of ethics, which all interpreters should abide by. The Registry of Interpreters for the Deaf stipulates that all sign language interpreters should abide by the following seven tenets:

(a) Interpreters adhere to standards of confidential communication.

(b) Interpreters possess the professional skills and knowledge required for the specific interpreting situation.

(c) Interpreters conduct themselves in a manner appropriate to the specific interpreting situation.

(d) Interpreters demonstrate respect for consumers.

(e) Interpreters demonstrate respect for colleagues, interns, and students of the profession.

(f) Interpreters maintain ethical business practices.

(g) Interpreters engage in professional development.

However, given the relatively recent development of TSL interpreting training and teaching in Taiwan, most courses offer only linguistic preparation, without properly delving into ethics and best practices.

In summary, the history of TSL interpreting teaching in Taiwan is quite recent: the first books were compiled along with the first courses, which have been offered mainly in the last decade. Before that, people used to learn TSL with their siblings, if they happened to live in a Deaf environment, or with friends, and would later increase their knowledge of sign language and their skills in the field, i.e. while working.

However, it seems opportune to note that, in spite of the admirable efforts of the Bureau of Labor Affairs to popularize TSL and TSL interpreting courses and to offer classes on a broad scale, professional recognition has not gone beyond certification to take root in public consciousness and institutional regulations.

Indeed, according to the interviews and surveys that I carried out, most professional interpreters agreed that their pay did not seem to be "up to the job" or, I would add, on a par with their oral interpreting colleagues.

We will focus on these and other related aspects in the next section.

3.3 Status quo of TSL interpreters

A group of TSL interpreters was surveyed to ascertain whether the precarious and unprofessional conditions dictated by the government are indeed such. Under the hypothesis that they are, the rest of the research is aimed at proving my thesis, i.e. that bimodal interpreters should share the same professional dignity as oral interpreters. Although TSL is indeed a language, and the neurobiological effort required to carry out the interpreting task, whether oral or signed, is the same, the realistic picture seems to be quite different.

As previously mentioned, after reading a table issued by the Labor Affairs Department of the New Taipei City government, which regulates sign language interpreting services15, I decided to interview some of my fellow sign language interpreters, to inquire into the reality of the market and their status quo. As someone with a background in simultaneous conference (oral) interpreting, I encountered a very different picture from the one I was used to in the oral interpreting world and market.

Some of the people interviewed prefer to remain anonymous. The only one who bravely spoke up, because she feels "there is nothing to be ashamed of", is professional TSL interpreter A. According to her account, sign language interpreters, unlike oral interpreters (I add), are paid by the hour and not per working day. She says that every sign language interpreter receives, at the very most, approximately 1600 NTD per hour, which is in line with the data presented in Table 1. On some rare occasions, sign language interpreters are paid not by the hour but according to the importance or urgency of the event. For instance, in the interview she said that once she was paid 5000 NTD for a whole session (two hours) because the event was considered of the utmost importance; otherwise, interpreters are usually paid hourly, and most of the time only a meager 1000 NTD per hour (interpreter A, personal communication, 2012). TSL interpreters are not paid properly, or at least not in the same way in every city, depending on local regulations and local budget restrictions.

15 The reader can refer to it on page 2.

This was very surprising to me, as a conference (oral) interpreter, considering that, according to the official website of WASLI (World Association of Sign Language Interpreters), the International Association of Conference Interpreters (AIIC) decided, by an overwhelming majority at the AIIC general assembly held in Buenos Aires in 2012, to open its doors to sign language conference interpreters, as a result of the close cooperation and fruitful discussions between AIIC, the World Association of Sign Language Interpreters and the European Forum of Sign Language Interpreters.

As previously mentioned, AIIC represents more than 3000 conference interpreters worldwide. WASLI and EFSLI, for their part, promote the professional interests of sign language interpreters. The three associations share professional concerns such as ethics, advocacy, working conditions, recognition, training and professional development. The main goal is to put sign languages on an equal footing with oral languages within the world of conference interpreting, including working hours, working conditions and remuneration, a goal which has apparently not yet been reached in the world of TSL interpreting in Taiwan.

In conclusion, the status quo of TSL interpreters can be summed up in two words: professional volunteers. "Volunteers" underlines the almost gratuitous nature of their work, because their remuneration does not seem adequate compared to that of their fellow oral interpreters. At times, though, as emphasized in the next section, professional sign language interpreters really do work for free, in many instances. In such cases, the definition of "professional volunteers" is intended literally.

3.4 Professional volunteers

The title of this section is a reflection of the way professional TSL interpreters are treated.

In the past few months, I have interviewed several professional interpreters, and they all told me the same things, which is further proof of how widespread this phenomenon is. As previously mentioned, in some cities, like Chiayi and Taidung, there are only one or two financed (licensed) interpreters, and all the others orbit around this person. In other words, a radical change is needed at the level of the system. There are also people who interpret without a license, for various reasons, for example because they grew up in a Deaf family. Under these circumstances, it will be even harder for those with a license to find a job as interpreters. The only licensed interpreter in Taidung originally comes from Kaohsiung (Chang, personal communication, 2012).

Two accounts were particularly meaningful, both because they struck the same note and because they reflected the lack of proper subsidies for sign language interpreters.

Professional interpreter Emma Nieh (personal communication, 2012) told me that nowadays sign language interpreters acting as volunteers are extremely common, too common. They are to be found in many different settings. Even within the National Association of the Deaf there are volunteer teams. The problem does not lie with these teams; on the contrary, they do a commendable and praiseworthy job. The problem lies in the lack of adequate financial subsidies on the part of the government. The Deaf cannot necessarily afford an interpreter themselves, and most of the time the financial support they receive is not enough.

Another important issue, as confirmed by Ginger Hsu (personal communication, 2012), is the fact that there is no subsidy the Deaf can apply for when it comes to personal and private matters, so in these situations the Deaf have no choice but to turn to volunteers, thus indirectly declassifying and diminishing the professional status and dignity of the whole category. At times, being a "volunteer" is also necessary to maintain a working relationship, considering that most of the time a Deaf person likes to "recruit" the same person with whom a certain degree of trust and confidence has been established. Public finances vary from city to city. In Chiayi, for example (Chang, personal communication, 2012), the local government institution officially pays only one licensed interpreter, who has a team of people collaborating with him; the situation in Taipei City is slightly different because, as the capital of Taiwan and a metropolis, it requires a greater number of interpreters. Most private matters are handled directly between the Deaf person, who becomes the client, and the interpreter. Private matters include domestic disputes, purchases, bureaucratic procedures at the bank, funerals, weddings and so on.

As professional interpreter Ginger Hsu (personal communication, 2012) points out “the service provided by the government is not holistic. There are even some private schools or classes for which the student cannot apply for an interpreter”.

Generally speaking, most associations have teams of volunteer sign language interpreters who provide their services to those who are not entitled to apply for an interpreter through governmental financial aid.

These people go to the afore-mentioned associations and inquire about the possibility of obtaining this service, or at times the Deaf simply turn to personal friends who might be professional interpreters or simply hearing people who can sign. This can have negative repercussions because, just as a person with two hands is not necessarily a pianist, a hearing person who can sign is not necessarily able to interpret, to unveil all the nuances present in the original signed speech or that should be rendered in the target oral speech.

3.5 Conclusion

Sign language interpreters, just like their oral colleagues, are professional figures whose job cannot be improvised. Interpreters, irrespective of the modality, can carry out their task only after a long period of training in which they learn all the skills and techniques they will need in their profession.

Another important aspect of this process is raising future interpreters' awareness of the issue of quality in interpreting. Translating, or interpreting for that matter, means providing a service for a client, or in general for a person who needs it. This service, according to the code of ethics, should be as accurate and "spotless" as possible, which is also a way of showing respect to the client. Interpreters should indeed possess the professional skills and knowledge required for the specific interpreting situation.

According to one of the guiding principles of the Registry of Interpreters for the Deaf (RID), interpreters are expected to stay abreast of evolving language use and trends in the profession of interpreting as well as in Deaf community culture. Interpreters accept assignments using discretion with regard to skill, communication mode, setting, and consumer needs, and they also possess knowledge of Deaf culture and deafness-related resources. Consumer needs and consumer expectations are two important aspects of quality and interpreting evaluation: consumers are the targets and beneficiaries of the service, and their judgment is therefore of the utmost importance.

In order to guarantee the quality of their performance, interpreters should always assess consumer needs and the interpreting situation before and during the assignment and make adjustments as needed. Moreover, they should render the message faithfully by conveying the content and spirit of what is being communicated, by using language most readily understood by consumers, and correcting errors discreetly and expeditiously. In case of difficulties, they should request support (e.g., certified deaf interpreters, team members, language facilitators) when needed to fully convey the message or to address exceptional communication challenges (e.g. cognitive disabilities, foreign sign language, emerging language ability, or lack of formal instruction or language). In order to provide a neutral service, interpreters should refrain from providing counsel, advice, or personal opinions. In other words, they should judiciously provide information or referral regarding available interpreting or community resources without infringing upon consumers’ rights.

The issue of quality and evaluation in interpreting is crucial in this profession and it deserves to be further explored, hence it will be treated separately and in exhaustive detail in the sixth chapter, where we will focus on investigating assessment and evaluation parameters in Taiwan Sign Language Interpreting (TSLI), with an emphasis on the naturality issue.

CHAPTER FOUR Challenging areas in TSL Interpreting

4.1 Introduction

In the course of the present thesis, we focus on several aspects of TSL interpreting, such as training and quality, and the neurolinguistic studies which have proven not only that sign language is indeed a language, but also that sign language interpreting is no easier than its oral counterpart. We have also modified an experiment previously carried out by Gile (1989) and adapted it to our needs. The findings of the study, which will be illustrated in the next chapter, also strengthen the Effort Models' "tightrope hypothesis": many errors and omissions (e/o's) are due not to the intrinsic difficulty of the corresponding source-speech segments, but to the interpreters working close to processing capacity saturation, which in Gile's (1989) words "makes them vulnerable to even small variations in the available processing capacity for each interpreting component". Another interesting aspect which emerged from the afore-mentioned study is the greater difficulty of rendering certain expressions in sign language in the simultaneous mode, because of their intrinsic explanatory needs.

These expressions belong to the challenging areas of the interpreting task, of which the hardest to convey in the target culture and language of the Deaf is probably figurative speech, including metaphors.

Metaphors are probably the most extensively used rhetorical device in everyday language, both at a conscious and at a subconscious level. The analysis of metaphors sheds light on many characteristics of a given culture, because rhetorical figures shape the way we perceive the world and, in turn, are molded by the world around us.

Therefore, every language has a different set of metaphorical expressions and speakers may be more or less aware of them. However, notwithstanding the peculiar specificities within each given culture and language, some researchers have proposed the universality of metaphors as a mental process.

According to Lakoff and Johnson (1980:3), "metaphor is pervasive in everyday life, not just in language but in thought and action. Our ordinary conceptual system is fundamentally metaphorical in nature. […] Our concepts structure what we perceive, how we get around in the world, and how we relate to other people. Our conceptual system thus plays a central role in defining our everyday realities" by way of using metaphors amongst other rhetorical devices.

Therefore, signed languages, given their full and unanimously recognized status as languages, also make ample use of metaphors. There have been quite a few studies, which will be exhaustively reviewed in the present chapter, on the use of metaphor in American Sign Language (ASL); however, to the author's best knowledge, no paper has been published on the same topic in Taiwan Sign Language (TSL), although some research on TSL iconicity has been published.

Hence, this chapter aims to fill a gap in the literature, namely the analysis of metaphors in TSL, by providing real examples to support the analysis carried out herein. The issue of metaphors is relevant from a postmodern point of view because metaphors in sign languages are inextricably linked to iconicity, which has been at the center of postmodern linguistics, especially in the debate between oral and signed languages.

The chapter is divided into two sections: in the first, the author gives a brief, non-exhaustive introduction to the importance of metaphor as a rhetorical device from the Greeks onwards. In the second section, attention turns to sign languages, their postmodern iconic nature and, in particular, TSL, with real examples duly analyzed with the help of local professional interpreter Ginger Hsu.

4.2 The importance of metaphors and figurative speech

Metaphors play a key role in the way language shapes our everyday conceptual structures and their relationship with iconicity has been further emphasized in postmodern linguistics which focuses on issues such as the relation between meaning and intention.

The centrality of metaphors has also been demonstrated by research in psycholinguistics and cognitive linguistics in postmodern times, according to which the relationship between language and the structure of our conceptual system can be explained by metaphorical relationships (Gibbs, 1994; Johnson, 1987; Lakoff and Johnson, 1980; Lakoff, 1987, to name just a few).

One popular postmodern language trope is metaphor (Lakoff and Johnson, 1980, 1987). As we can read in Lakoff and Johnson (1980:3), "metaphor is pervasive in everyday life, not just in language but in thought and action. Our ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature; [that is to say] human thought processes are largely metaphorical. […] Our concepts structure what we perceive, how we get around in the world, and how we relate to other people. Our conceptual system thus plays a central role in defining our everyday realities" by way of using metaphors amongst other postmodern devices. Mac Cormac (1995:2) also noted that "metaphors can operate as cognitive processes that offer new insights".

Metaphors can be defined in many ways, according to the perspective we adopt. Generally speaking, they can be defined as a figure of speech which uses an image, a story or a tangible thing to represent intangible things or abstract concepts. Metaphor may also be viewed as the hypernym of other figures of speech that achieve their effects via association, comparison or resemblance, such as antithesis, hyperbole, metonymy and simile.

Lakoff and Johnson (1980), on the other hand, defined conceptual metaphor as a postmodern principle of organization for the way in which we understand abstract concepts in terms of concrete ones. In other words, a conceptual metaphor can be seen as "a mental mapping between two experiential domains; the target domain (the more abstract and less well understood concept) is structured […] by the source domain" (O'Brien 1999:160).

More recent research has confirmed the importance of metaphors in our thought processes and of figurative language, in general, which is perceived as a “reflection of creativity into the language and thought of the user” (Marschark, 2005).

Basically, as suggested by Wilcox (2000), we use metaphors to make sense of what goes on around us. Signed languages, given their full and unanimously recognized status of languages, make ample use of metaphors, just like spoken languages.

However, as duly noted by O'Brien (1999), psycholinguistic research on conceptual metaphors has focused exclusively on spoken language: some studies have investigated the role of conceptual metaphors in proverb understanding (Colston, 1995; Gibbs and Beitel, 1995; Gibbs, Colston and Johnson, 1996), others the interpretation of euphemism and dysphemism (Pfaff, Gibbs and Johnson, 1997), the processing of caused-motion verbs (O'Brien, 1993), and people's interpretation of poetry (Gibbs, 1996). However, no similar study has yet been conducted on sign languages.

The purpose of the present chapter is to fill a gap in the sign language literature insofar as, to the author’s best knowledge, only some papers on the issue of iconicity in Taiwan Sign Language have been published (Tai, 1993, 2005a,b), but no academic paper, let alone thesis, has ever been written on the issue of metaphorical use in TSL.

The present research was also motivated by the theoretical assumption according to which “metaphors and sign language structure are indissoluble: we can hardly find a level in the language where imagery does not play a part” (Woll, 1985:603).

The present chapter is divided into two sections. The first focuses on the historical development of the notion of metaphor from ancient times, offering a brief diachronic literature review; the second shifts the focus to sign languages, mainly TSL, where I will point out, for instance, that different strategies are used when interpreting metaphors into TSL, depending on the context. These strategies include transfer mechanisms, clarification, localization, cultural adaptation, omission and replacement (when a simile substitutes a metaphor or vice versa).

Before delving further into the issue of iconicity and metaphors in TSL, let us shed some light on the crucial role that metaphors played in the narratives of ancient peoples such as the Greeks and the Sumerians. This underlines once again the universal character, in both diatopic and diachronic terms, of metaphor as a conceptual rhetorical device, even though its importance as a language trope has been emphasized mainly in modern language studies.


4.3 Diachronic literature review

Metaphors have played a crucial role in mental conceptual processes ever since the time of the Sumerians, as we can read in Mitchell’s (2004) translation of the Epic of Gilgamesh:

Beloved friend, swift stallion, wild deer,
leopard ranging in the wilderness —
Enkidu, my friend, swift stallion, wild deer,
leopard ranging in the wilderness —
together we crossed the mountains, together
we slaughtered the Bull of Heaven, we killed
Humbaba, who guarded the Cedar Forest —
O Enkidu, what is this sleep that has seized you,
that has darkened your face and stopped your breath?—

As we can see from this quotation, the friend is compared to a stallion, a wild deer, and a leopard, to indicate that the speaker sees traits of these animals in his friend. The death of Enkidu is in turn described as a sleep, as something that seizes, as something that darkens one's face, and as something that stops one's breath. These are all metaphorical and metonymical expressions used to describe something intangible, such as death.

The Greeks also placed great emphasis on the importance of metaphors. In Cratylus, one of Plato’s dialogues, there is a distinction between primary names and secondary names, where the superiority of the metaphor is stated:

Socrates: But if the primary names are to be ways of expressing things clearly, is there any better way of getting them to be such than by making each of them as much like the thing it is to express as possible? Or do you prefer the way proposed by Hermogenes and many others, who claim that names are conventional signs that express things to those who already knew the things before they established the conventions? Do you think that the correctness of names is conventional, so that it makes no difference whether we accept the present convention or adopt the opposite one, calling big what we now call small, and small what we now call big? Which of these two ways of getting names to express things do you prefer?

Cratylus: A name that expresses a thing by being like it is in every way superior, Socrates, to one that is given by chance…

Socrates: […] And even if usage is completely different from convention, still you must say that expressing something isn’t a matter of likeness but of usage, since usage, it seems, enables both like and unlike names to express things. Since we agree on these points, Cratylus, for I take your silence as a sign of agreement, both convention and usage must contribute something to expressing what we mean when we speak… I myself prefer the view that names should be as much like things as possible, but I fear that defending this view is like hauling a ship up a sticky ramp, as Hermogenes suggested, and that we have to make use of this worthless thing, convention, in the correctness of names. (Plato, Cratylus, 433–35, trans. John M. Cooper (Cooper and Hutchinson 1997: 149–51))

In ancient Greece, the first systematic treatment of metaphor can be traced back to Aristotle, who in his Poetics (ca. 335 BC) defined “metaphor” as “the application of a strange term either transferred from the genus and applied to the species or from the species and applied to the genus, or from one species to another or else by analogy”16; or, as translated by Bywater (1984), “the application of an alien name by transference either from genus to species, or from species to genus, or from species to species, or by analogy, that is, proportion”; or again, in Halliwell’s (1996) rendering, “the application of a word that belongs to another thing: either from genus to species, species to genus, species to species, or by analogy.”

16 Aristotle in 23 Volumes, Vol. 23, translated by W.H. Fyfe. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1932, 1457b.

Irrespective of which translation we choose to adopt, the key aspect of a metaphor is the transference of a word from one context into another. In Poetics 21, 1457b9–16 and 20–22, the four Aristotelian types of metaphor are exemplified as follows:

Table 2. The four Aristotelian types of metaphor

(i) From genus to species. Example: “There lies my ship.” Explanation: lying at anchor is a species of the genus “lying.”

(ii) From species to genus. Example: “Verily ten thousand noble deeds hath Odysseus wrought.” Explanation: ten thousand is a species of the genus “large number.”

(iii) From species to species. Examples: (a) “With blade of bronze drew away the life”; (b) “Cleft the water with the vessel of unyielding bronze.” Explanation: in (a) “to draw away” is used for “to cleave,” and in (b) “to cleave” is used for “to draw away”; both are species of “taking away.”

(iv) From analogy. Examples: (a) to call the cup “the shield of Dionysus”; (b) to call the shield “the cup of Ares.” Explanation: the cup is to Dionysus as the shield is to Ares, and vice versa.

Of the four kinds of metaphor that Aristotle distinguishes, the last one (transference by analogy) is the most prominent, so much so that all major theories of metaphor refer to this characterization.

Later on, with the rise of Christianity, metaphors were increasingly used as a rhetorical tool for the exegesis of the Sacred Scriptures. Saint Augustine excelled in this practice and thus became very influential in the development of Western Christianity.

In recent decades, metaphor, i.e. the description of one thing as something else, has also become of interest to analytic and continental philosophy. However, for reasons of space, we will not focus here on philosophical issues or on the contributions of authors such as Kierkegaard, Nietzsche, Heidegger, Merleau-Ponty, Bachelard, Paul Ricoeur, and Derrida to the philosophical discussion of metaphor. Our main emphasis in the second part of this study will be on the role of metaphors in sign languages in general, with particular attention to their iconicity and, most particularly, to TSL, supported by real examples.


As a preliminary introduction, the general features of sign languages, and in particular of Taiwan Sign Language (TSL), will be summed up, as many readers may be unfamiliar with sign languages and sign language research. Since this chapter can be read as a stand-alone paper, it seems opportune to recapitulate the main points of sign language research.

As previously mentioned, a sign language is a language which transmits information through a different channel, the visual-gestural one, simultaneously combining hand shapes, the orientation and movement of the hands, arms or body, and facial expressions to fluidly express a signer's thoughts. The first linguist to study signed languages and grant them their called-for status of “languages” was William Stokoe, according to whom sign languages naturally develop wherever communities of deaf people exist, and their complex spatial grammars are markedly different from the grammars of spoken languages (Stokoe, 1960, 1976). Stokoe’s work was ground-breaking because “almost everyone, hearing and deaf alike, at first regarded Stokoe’s notions as absurd or heretical and his books when they came out as worthless or nonsensical [as is] often the way with the work of genius” (Sacks, 1989:63).

Nowadays, most scholars accept that “Sign17 is natural to all who learn it (as a primary language) and has an intrinsic beauty and excellence sometimes superior to speech” (Sacks, 1989: 29). It is “seen as fully comparable to speech (in terms of its phonology, its temporal aspects, its streams and sequences), but with unique, additional powers of a spatial and cinematic sort – at once a most complex and yet transparent expression and transformation of thought” (Sacks, 1989: 72). Bellugi and her team have shown that Sign is a language even at the neurolinguistic level (Bellugi et al. 1991a, 1991b, 1992, 1993, 1997a, 1997b, 2001, 2010), as investigated in detail in the first part of the present thesis. We have seen that “the left hemisphere in signers ‘takes over’ a realm of visual-spatial perception, modifies it, sharpens it, in an unprecedented way, giving it a new, highly analytical and abstract character, making a visual language and visual conception possible” (Sacks 1989: 76), which can be perceived as a proof of the plasticity of the brain.

17 Sign, with a capital letter, is intended as a language with a different mode of expression.

In the last couple of decades, scholars around the world have increasingly focused their attention on signed languages, analyzing their structure, syntax and semantics, as well as some of the strategies and difficulties underlying interpreting between oral and signed languages. Research in different countries has revealed different aspects, because each country has its own sign language, which has developed independently of the spoken language of that country. For example, Taiwan Sign Language (TSL), the object of our attention, is structurally more similar to Japanese Sign Language (JSL) than to Chinese Sign Language (CSL). As pointed out by linguists, signs have their own phonology, properly speaking called cheirology. The phonemes of sign languages, i.e. the smallest segmental units that form meaning, are the location of the hands in space, the configuration of the hands (a.k.a. handshape), and the orientation and movement of the hands. Minimal pairs therefore differ in only one of these parameters.
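The parameter analysis just described can be made concrete with a small sketch. The following is purely illustrative (the type name, function name, and the parameter values are my own inventions, not established linguistic tooling): a sign is modeled as a bundle of phonological parameters, and two signs form a minimal pair when they differ in exactly one parameter.

```python
from dataclasses import dataclass, fields

# Illustrative model of a sign's phonological (cheirological) parameters:
# location in space, handshape, orientation, and movement of the hands.
@dataclass(frozen=True)
class Sign:
    location: str
    handshape: str
    orientation: str
    movement: str

def is_minimal_pair(a: Sign, b: Sign) -> bool:
    """Two signs form a minimal pair when exactly one parameter differs."""
    diffs = sum(getattr(a, f.name) != getattr(b, f.name) for f in fields(Sign))
    return diffs == 1

# Hypothetical parameter values, for illustration only:
sign_a = Sign("chin", "index-extended", "palm-in", "tap")
sign_b = Sign("chin", "index-extended", "palm-in", "circle")  # only movement differs
```

Here `is_minimal_pair(sign_a, sign_b)` is true, since the two hypothetical signs differ only in movement; a pair differing in two or more parameters would not qualify.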

Some scholars have also devoted their attention to interpreting from and into signed languages. In Italy, for example, a context with which the author is more familiar, there have been many studies on different aspects of interpretation from and into Italian Sign Language (LIS), Lingua Italiana dei Segni (Amorini, Cortazzi and Eletto, 2000; Bove and Volterra, 1984; Stocchero, 1991, 1995; Cameracanna and Franchi, 1997a, 1997b; Carli, Folchi and Zanchetti, 2000; Cokely, 2003; Del Vecchio and Franchi, 1997; Franchi, 1992, 1993; Gran and Bidoli, 2000; Sala, 2005; Woll and Porcari Li Destri, 1998; to name just a few). In Taiwan much less has been done, especially on Taiwan Sign Language Interpreting (TSLI), which, to the author’s best knowledge, remains a largely unexplored area of research. In American Sign Language (ASL) and Italian Sign Language (LIS), which have been more thoroughly explored than TSL, interpreting scholars have focused mainly on interpreting in the educational sector (Cokely, 2005; Davis, 2005; Forestal, 2005; Lee, 2005; Marschark et al., 2005a, 2005b; Monikowski and Peterson, 2005; Napier, 2005; Quinto-Pozos, 2005; Turner, 2005; Winston, 2005). Furthermore, to the author’s best knowledge, no paper in Taiwan has ever been written on metaphor as a rhetorical device in TSL.

Therefore, this chapter aims at filling a gap in the literature by exploring first the issue of iconicity in sign languages and then, more specifically, the use of metaphor in TSL. Before moving on to iconicity and metaphors in sign language, it seems opportune to shed some light on Taiwan Sign Language by summing up, for the reader’s sake, some of the most important pieces of information analyzed so far.

As far as the teaching of TSL as a second language is concerned, according to local professional interpreter Ginger Hsu (personal communication, 2012), the traditional approach usually starts with songs, which does not seem the most effective or quickest way of learning the language. Indeed, Ginger Hsu told me that when she first started studying TSL, she took an eight-week course and thought that afterwards she would be able to communicate with deaf people. Unfortunately, she found out that it was not that easy. At present, there are courses for those who want to become interpreters, but few young people take part in them; most participants are older, which results in a slow learning process. There are also many alternative learning resources, such as the TV show Ting Ting Kan (聼聼看, “listening eye”), which presents different topics in every episode, both in Mandarin Chinese and in TSL18.

Also, according to Ginger Hsu (personal communication, 2012), in the first weeks of class, apart from emphasizing the importance of Deaf culture, some attention is also devoted to the issue of iconicity in sign languages, which will be further explored in the next section.

4.4 Iconicity in sign languages

In the not-so-distant past, people believed that sign languages were just a form of mimicry, unaware of the complexity of their grammatical structures. Today, the international community has widely recognized the grammaticality of sign languages. Starting in the 1970s, linguists began to examine the structure and rules of signed languages (Friedman 1977; Siple 1978).

However, one should not diminish the role of visual imagery, which plays a greater part in sign languages than in spoken or written languages. Many signs do indeed bear an iconic relationship to their referents (Woll, 1985). Signs can therefore be divided into arbitrary and iconic: “arbitrary signs [are] those signs that appear to bear no relationship to their meaning” (O’Brien, 1999:162).

18 Here is an example of the show: http://www.youtube.com/watch?v=JkShbUsi6KM&list=UUArdF7Z88T_HlNHhePpgF3g&index=3&feature=plcp (accessed September 2012). The show won the Golden Bell Award for Best Host in 2012.

For example, in TSL the sign for red is made by extending the index finger, touching the lower lip and then moving the finger towards the chest. This sign is arbitrary because it is not obviously related to “red” in any way.

Iconic signs, on the other hand, directly depict what they represent or are metonymically related to it: the sign for airplane in TSL and many other sign languages, for instance, depicts an aircraft taking off. Iconic devices were studied by Mandel (1977) and more recently by Taub (2001). Sutton-Spence and Woll (1999) adopted Mandel’s framework and divided iconic signs into four categories, namely substitutive depiction, virtual depiction, presentable action and presentable objects.

In substitutive depiction, hand-shapes and hand-forearm configurations are used to depict schematic images of the referent. A TSL example of this iconic device is the sign for SCISSORS19, in which the extended index and middle fingers resemble a physical pair of scissors and serve as the iconic base of the signified (Su and Tai, 2006). In virtual depiction, the shape of the referent is traced in the signing space: another example from Su and Tai (2006) is the sign for LIGHTNING, produced by tracing a zigzag shape with the index fingers of both hands. A third iconic device is presentable action, in which the signer imitates actions performed by humans or animals, as in the sign for BASEBALL or the verbs RUN and FLY. The last iconic device is the presentable object, in which the signer points directly to the location of the referent, as in the sign for NOSE.

More specifically, iconic signs can be further subdivided into transparent signs, which can be understood by a person with no knowledge of that sign language, and translucent signs, whose iconic link can be identified only once the meaning is known (Woll, 1985), although it must be said that in some signs the sense of iconicity has been destroyed because the original visual-imagery link has ceased to exist. It is true, however, that images play a major role in all language modalities because of the importance of metaphors. According to Woll (1985: 603), “metaphor and sign language structure are indissoluble: we can hardly find a level in the language where imagery does not play a part”. Images are indeed important, yet they can be direct or metonymic: a direct image represents the whole of a referent, whereas a metonymic one represents either a part of a referent or something associated with it. For instance, in most sign languages the sign BIRD is made with the thumb and index finger, which stand for the opening and closing of a bird’s beak. Apart from the vast repertoire of visual imagery that sign languages draw on, they also use metaphors in the more classical sense, just like every other language, which is what we will focus on in the next section, with special emphasis on TSL.

19 The gloss of signs is always written in capital letters.

4.5 Metaphors in sign languages

If metaphor is an essential component of spoken languages (Lakoff and Johnson, 1980), its centrality in sign languages seems even greater.

Indeed, sign languages feature a double mapping, i.e. the incorporation of both iconicity and metaphorical extension at the same time. Most of the research on metaphor has been conducted on American Sign Language (ASL). One of the pioneering works was carried out by Braem (1981) on the presence of metaphor and iconicity in sign language. Smaller-scale studies have also been carried out on other sign languages, for example on Italian Sign Language (LIS), thanks to the efforts of scholars like Russo (1997). However, the bulk of publications concerns ASL. For research on the use of metaphor in ASL, readers can refer to the abundant relevant literature on the production and comprehension of metaphors by ASL users (Akamatsu and Armour, 1987; Everhart and Marschark, 1988; Inman and Lian, 1991; Iran-Nejad and Ortony, 1981; Ittyerah and Mitra, 1988; Kramer and Buck, 1976; Marschark, Everhart and Dempsey, 1991; Marschark and West, 1985; Marschark, West, Nall and Everhart, 1986; Rittenhouse, Kenyon and Healy, 1994; Rittenhouse and Stearns, 1990; the list is not exhaustive).

In most cases, when analyzing metaphors in sign languages, scholars prefer to focus on so-called conceptual metaphors, which are conventionally written in capital letters. For instance, “the sign for past is reflective of the conceptual metaphor ‘PAST IS BEHIND’” (O’Brien, 1999:162).

Metaphorically motivated signs are usually more easily distinguishable from arbitrary and iconic signs by people who do not know a given sign language, as shown by empirical research such as O’Brien (1999).

For example, the sign for past, as previously mentioned, is a gesture indicating a space behind the signer, and according to the afore-mentioned studies it seems consistent with the metaphorical conceptualization people have of this specific concept, because, as Lakoff and other scholars have noted, “the basis for metaphorical conceptualization is experience common to all humans regardless of the particular language used in their community” (O’Brien, 1999:171).

As far as the comprehension and production of metaphors by deaf people is concerned, generally speaking, school-age children and adolescents, probably because of their literacy-related challenges and their relatively limited world knowledge, perform worse than their hearing peers on tests of metaphor comprehension administered via print materials; on the other hand, they seem to produce figurative constructions just as often as their hearing peers (Marschark, 2005c).

However, nothing has been written on TSL, for which the few existing studies focus exclusively on the issues of iconicity and the arbitrariness of signs. Hence, in the next section I am going to provide some real examples from TSL, analyzed together with professional interpreter Ginger Hsu, in support of my analysis.

4.6 Examples from TSL

So far, I have reviewed the concepts of iconicity and metaphor. It now seems opportune to provide real examples to clarify the discussion. I will use these examples to examine how metaphors are used in texts and how they are interpreted, to observe the difficulties TSL interpreters may face, and to discuss possible solutions for interpreting Chinese metaphors into TSL.

For the collection and analysis of the examples, I proceeded in the following way. First of all, I interviewed twenty Mandarin Chinese native speakers with the same level of education and asked each of them to provide twenty frequently used and representative sentences containing similes and metaphors. I then collected all the data and chose the eighteen sentences that most interviewees had picked. The list is neither exhaustive nor the only one of its kind that could be compiled; it is merely the fruit of the subjective choices of my interviewees. Later, I met with professional interpreter Ginger Hsu, and together we analyzed the similes and metaphors in these sentences to see how they would be rendered in TSL.

I will report the sentences in Mandarin Chinese with an English translation and the gloss of the interpreted TSL version20. After transcribing all eighteen examples, I will analyze them and draw conclusions.

(1.) C: 他動也不動,彷如 石像21

He does not move at all, just like a stone statue.

TSL: 他/木訥/不動/ /像/石頭/樣子//

HE/ STIFF/NO MOVE/ /LIKE/ROCK/SIMILAR//22

(2.) C: 太陽就像一個大火球,會發熱,會發亮。

The sun is like a big fireball, it heats and it lights up.

TSL: 太陽/火球/一模一樣//熱/亮/會++//

SUN/FIREBALL/IDENTICAL//HOT/LIGHT/CAN++//

(3.) C: 我就如一朵向日葵,向著太陽。

Just like a sunflower, I move toward the sun.

TSL: 我/向日葵/相似//太陽/面對//

I/SUNFLOWER/SIMILAR TO//SUN /FACING//

20 C stands for Chinese and TSL for Taiwan Sign Language. The glosses for the signed interpreted version are always capitalized.

21 Deaf people understand these interpretations: when I was analyzing them with my interpreter, two Deaf people also took part in the discussion and confirmed their understanding of the afore-mentioned metaphors.

22 The symbols /, //, and ++ are used when glossing sign languages; they respectively represent a brief pause, the end of a meaningful unit, and a prolonged or repeated sign, usually used for motion verbs that entail a repetitive action (like jump).
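The glossing conventions just described can also be sketched in code. The following is a purely illustrative sketch (the function name and output structure are my own, not an established transcription tool): it splits a gloss line into meaningful units at “//”, splits each unit into signs at “/”, and flags a trailing “++” (or “+++”) as a prolonged/repeated sign.

```python
def parse_gloss(gloss: str) -> list:
    """Parse a TSL gloss line using the conventions of footnote 22:
    "//" ends a meaningful unit, "/" marks a brief pause between signs,
    and trailing "+" symbols mark a prolonged/repeated sign."""
    units = []
    for chunk in gloss.split("//"):          # meaningful units
        signs = []
        for token in chunk.split("/"):       # signs within a unit
            token = token.strip()
            if not token:
                continue
            repeated = token.endswith("++")  # prolonged/repeated sign
            signs.append({"sign": token.rstrip("+").strip(),
                          "repeated": repeated})
        if signs:
            units.append(signs)
    return units

# Example (2) from this chapter:
units = parse_gloss("SUN/FIREBALL/IDENTICAL//HOT/LIGHT/CAN++//")
```

For the gloss of example (2), this yields two meaningful units, with CAN marked as a repeated sign.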

(4.) C: 我是一隻小鳥,跳來跳去吱吱叫。

I am a bird, (always) jumping and squeaking.

TSL: 小鳥/我/是++//跳++//張嘴//

BIRDIE/I/BE++//JUMP++//OPEN-MOUTH//

(5.) C: 天上張著灰色的幔。

The skies are covered with a gray mantle.

TSL: 灰色/幔/那/天空/飄//

MANTLE/GRAY/THERE/SKY/FLOATING//

(6.) C: 不能讓這些充滿暴力和色情的漫畫毒害我們的幼苗。

We cannot let these violent and pornographic comic books poison our seedlings

(children).

TSL: 漫畫書/裡面/畫畫/暴力/黃色//孩子/看/吸收/禁止//

COMICS/INSIDE/DRAWING/VIOLENCE/PORNOGRAPHY//CHILDREN/SEE/

ABSORB/PROHIBIT//

(7.) C: 父愛如山,母愛如海。

The love of a father is like a mountain and the love of a mother is like the sea.

TSL: 父親/愛/山/相似//母親/愛/海/相似//

FATHER/LOVE/MOUNTAIN/SIMILAR//MOTHER/LOVE/SEA/SIMILAR//

(8.) C: 星星像一雙明亮的眼睛在夜空中照耀。

The stars shine in the night sky like a pair of bright eyes.

TSL: 星星/眼睛/好像//夜空/一閃一閃++//

STARS/EYES/AS IF//NIGHT SKY/TWINKLE TWINKLE++//


(9.) C: 小姑娘的心靈像棉花一樣純潔。

The little girl's mind is as pure as cotton.

TSL: 小姑娘/心/裡面//棉花/白/乾淨/一樣//

GIRL/HEART/INSIDE//COTTON/WHITE/CLEAN/THE SAME//

(10.) C: 加油。

Keep on! (literally add oil).

TSL: 一手握拳朝身體動//

HAND HOLDING A FIST MOVING TOWARD THE BODY/ /

(11.) C: 春天是一個優美的舞蹈, 讓直接充滿微笑。

Spring is a beautiful dance, full of smiles.

TSL: 春天/換/跳舞/優美//微笑+++

SPRING/EXCHANGE/DANCE/BEAUTIFUL//SMILE+++

(12.) C: 春天是一只快樂的小鳥,讓世界充滿活力。

Spring is a happy bird, making the world full of vitality.

TSL: 春天/換//小鳥/快樂// 世界/活力/一整個//

SPRING/EXCHANGE//BIRD/HAPPY//WORLD/DYNAMIC/ENTIRE//

(13.) C: 書籍是屹立在世界的汪洋大海中的燈塔。

Books are the lighthouse that stands in the vast ocean of the world.

TSL: 書/一本本+++/如/大海/一片/燈塔/一座//

BOOK/VOLUMES+++/LIKE/OCEAN/CLASS./LIGHTHOUSE/CLASS//

(14.) C: 愛護書籍吧,它是知識的泉源。

Take good care (love) of books, they are the fountain of knowledge.

TSL: 書/保護/要//知識/來源//

BOOK/PROTECT/MUST//KNOWLEDGE/SOURCE//

(15.) C: 書是智慧的鑰匙。

Books are the key to wisdom.

TSL: 書/幫忙/智慧/打開//

BOOK/HELP/WISDOM/OPEN//

(16.) C: 春天到了,大地變成了一片綠毯。

Spring is here, the land has become a green carpet.

TSL: 春天/來了//大地/綠色/地毯//

SPRING/ARRIVED//EARTH/GREEN/CARPET//

(17.) C: 老師是辛勤的園丁,教導著我們。

Teachers are hard-working gardeners, teaching and guiding us.

TSL: 老師/工作/辛苦/園丁/一般//教+++//

TEACHER/WORK/HARD/GARDENER/LIKE//TEACH+++//

(18.) C: 心像玻璃一樣碎了。

The heart has broken into pieces, like glass.

TSL: 心/碎/散/玻璃/好像//

HEART/BREAK/SCATTER/GLASS/SEEMS//

The above eighteen examples were translated into TSL, or more precisely into NTSL (Natural Taiwan Sign Language). As the reader can infer from the glosses, both the word order and the syntax differ considerably from Mandarin Chinese. The Chinese sentences were translated according to the syntactic rules of natural TSL23, as opposed to manually coded Chinese. Indeed, TSL, just like any other sign language, can be divided into manual sign language, a.k.a. manually coded language, and natural sign language. While manual sign language can be defined quite accurately, with several renowned major approaches, it seems more problematic to define the properties of natural sign languages with the same accuracy. Manual sign languages are representations of spoken languages in a gestural-visual form; in other words, they can be defined as a “sign language” version of spoken languages. They are not natural, insofar as they were invented by hearing people and strictly follow the grammar of the written form of oral languages; they have not evolved naturally in Deaf communities. In the past, manual sign languages were mainly used in Deaf education, as analyzed in the second chapter, thus causing a major trauma in the development of deaf children’s native language, and they were also used by past generations of sign language interpreters.

However, in the afore-mentioned examples the word order is strictly natural. As far as figurative speech, similes and metaphors are concerned, we can observe some interesting phenomena. First of all, as the reader can see, in most sentences the metaphors are either identical or at least retained, though in a different form, in TSL. This underlines the importance of figurative speech in a language which is highly iconic. In sentences (1)–(3), the rhetorical device is the simile, namely “like a stone statue”, “like a fireball” and “like a sunflower”, and as we can see from the glosses it is kept in identical form in TSL. In the fourth sentence, the metaphor “little bird”

23 Thanks to the help of professional interpreter Ginger Hsu.

as something small and cute is kept, though the ending of the sentence is slightly changed. In English it reads “I am a bird, (always) jumping and squeaking”; the onomatopoeic verb “squeak” does not mean much to the Deaf, which is why, when rendering the sentence into TSL, it has to be adapted to the target culture, becoming “BIRDIE/I/BE++//JUMP++//OPEN-MOUTH//”.

In sentence (5), “the skies are covered with a gray mantle”, the metaphor is kept, though the interpreter strategically added “FLOATING” at the end to make the interlocutor aware of the metaphorical nature of the noun mantle in this occurrence. In sentence (6), the metaphor of “seedlings” for children is not kept; it is lost without being replaced by any other metaphor.

Moving on to the other examples, the metaphor is kept in sentence (7), just like the similes in (8) and (9). In sentence (10), the Chinese expression jiayou (加油, “come on!”, though literally it means “add oil”) is lost, because it would be meaningless if rendered literally in TSL. The eleventh sentence is an interesting example of cultural adaptation: “Spring is a beautiful dance, full of smiles”, translated into natural TSL, becomes “SPRING/EXCHANGE/DANCE/BEAUTIFUL//SMILE+++”. The EXCHANGE sign is actually a synonym for “like, to be similar to”, that is to say it signals a simile, though it is not as direct as the sign for the latter. The same happens in sentence (12). In the thirteenth example, the interpreter uses a transfer mechanism, i.e. a metaphor becomes a simile in TSL: “Books are the lighthouse that stands in the vast ocean of the world” becomes

“BOOK/VOLUMES+++/LIKE/OCEAN/CLASS./LIGHTHOUSE/CLASS//”.


Sentence (14) maintains the metaphor, whereas sentence (15) is an example of clarification: the metaphorical use of “key” in “books are the key to wisdom” is explained as “BOOK/HELP/WISDOM/OPEN//”. In sentence (16), the metaphor “green carpet” is kept and localized, i.e. it is signed as if a carpet covered the whole ground. In the last two examples, the metaphor and the simile are both kept, even if, according to Ginger Hsu (personal communication, 2012), it is hard for deaf people to understand the comparison between heart and glass. In natural TSL, signers would simply and iconically sign a breaking heart; in other instances the metaphor would have to be explained.

According to Professor Chang (personal communication, 2012), the metaphorical story in which two lovers are supposed to jump together into the river of love, but one is already deep in the water while the other has not jumped at all, is very clear to all Chinese speakers. However, when translated literally into sign language, most Deaf people, unless they have an oral background, would not be able to understand why the other person “should jump into the river”. Strategies are therefore necessary to make up for these inter-cultural and inter-linguistic losses.

In all the afore-mentioned examples, it is interesting to see that different strategies are used when interpreting metaphors into TSL, depending on the context. The strategies we have seen include transfer mechanisms, clarification, localization, cultural adaptation, omission and replacement (when a simile substitutes a metaphor or vice versa).

As for the strategies used when interpreting from TSL into Mandarin Chinese, according to professional interpreter Ginger Hsu (personal communication, 2012), the most widely used one is what I will call “curb-ization”, i.e. curbing an expression rather than saying it too bluntly.

Deaf signers are used to a very direct kind of speech, which might sound extremely offensive to Chinese speakers if translated directly. For example, if a person is overweight, they would sign “super fat, like a pig” or, at any rate, over-emphasize iconic aspects in their speech. Therefore, the most important strategy is to curb expressions that might sound too blunt, at the cost of not being faithful to the original. The best approach would be to educate the Deaf interlocutor and ask them whether their message can be curbed, and thus adapted to the target culture, or whether they want the message to be conveyed with its full emphasis at all costs. Usually, however, interpreters are able to judge by themselves depending on the formality of the context.

Finally, I would also like to mention the issue of interpreting chéngyŭ (成語, idioms) into TSL. Chéngyŭ literally means “set phrase”. It is a type of traditional Chinese idiomatic expression, mainly consisting of four characters.

Hu (1992), Xu (1999), Cui (1997), Zhu (1999), Chen (2001) and Zheng (2005) maintain that chéngyŭ consist of four syllables, although there are a few exceptions such as yù bàng xiāngzhēng, yúwēng délì (鷸蚌相爭，漁翁得利, meaning that fishermen benefit from the fight between a mussel and a snipe) (Yang 2008: 34). In Italian, this idiom can be rendered as “tra i due litiganti il terzo gode”.

According to Yang (2008: 42), four-syllable expressions account for the majority of chéngyŭ; the estimated percentage ranges from 80.3% (Huang 1982) and 95% (Zhang 2004) to 98.4% (Gao 1987).

Chéngyŭ were widely used in Classical Chinese and are still common in vernacular Chinese writing and in spoken Chinese today. In Mandarin Chinese, about 5,000 chéngyŭ are in wide use. However, if an exhaustive list of Chinese idiomatic expressions were to be drafted, the total would exceed 30,000.

According to the scholar Hu (1992: 53), “a chéngyŭ, as a type of set phrase, which is similar to guànyòngyŭ24 in nature, is often used as a complete meaning unit and has a more solid foundation of structure and usage than guànyòngyŭ”.

Chéngyŭ usually derive from ancient Chinese literature and from Chinese philosophers such as Confucius, Mencius or Laozi. The most striking feature of these idiomatic expressions is that their global semantic value generally surpasses the sum of the componential meanings carried by the four individual characters, because chéngyŭ are intrinsically and inextricably linked with the myth, legend, historical fact or literary episode from which they derive.

Moreover, chéngyŭ do not follow the usual grammatical structure and syntax of the modern Chinese spoken language, being highly compact and concise.

As a consequence, chéngyŭ in isolation are often unintelligible even to Chinese native speakers, and when students in China learn these idiomatic expressions in school as part of their Classical curriculum, they also need to study the context from which the related chéngyŭ derives. Otherwise, they are not able to grasp the true meaning of the expression.

Chéngyŭ can to some extent be associated with idiomatic expressions in Western languages, particularly metaphors (and, more specifically, dead metaphors), and at times with proverbs and clichés. The major difference is that “the use of clichés in Western languages [especially in Italian] is seen to reflect one’s lack of creativity” (Cui, 1997: 56), whereas the use of chéngyŭ in Chinese has always reflected elevated status amongst Chinese-speaking populations. According to Chen (2001: 236-239),

24 Guànyòngyŭ (慣用語) are commonly used idiomatic expressions. Chinese set expressions are subcategorized differently depending on the school of thought of scholars (Cui, 1997; Zheng, 2005). The most common categories of Chinese set idioms (shóuyŭ 熟語) are chéngyŭ (成語), xiēhòuyŭ (歇後語, two-part allegorical sayings), guànyòngyŭ, yányŭ (諺語, proverbs) and súyŭ (俗語, colloquialisms).

“chéngyŭ are products of Chinese culture which reflect the particular aesthetic view of Chinese-speaking populations and embody their colorful imagination”. However, especially in Taiwan, this view has been re-evaluated over the past ten years. In one recent example, the Ministry of Education proclaimed on January 25, 2007 that “chéngyŭ dull one’s mind” and that “teaching chéngyŭ is the failure of educational policies” (Yang 2008: 89), although this position is not widely accepted.

However, it remains important to teach Chinese idioms to trainee interpreters, because they are often heard in “conference Chinese”. The most problematic aspect, especially for non-native speakers of Chinese, remains the fact that chéngyŭ often reflect the moral behind a story rather than the story itself. I will give an example to illustrate this concept.

The following example is an excerpt from a poem which is often used in modern Chinese to describe suspicious situations. The poem is the yuèfǔ shī《jūnzǐ xíng》(樂府詩《君子行》) from the Han dynasty. The excerpt is guātián lǐ xià (“瓜田李下”), which literally means “melon field, under the plums”. This idiom is unintelligible even to Chinese native speakers, and all the more so for people whose working B language is Chinese and who have no prior knowledge of the origin of the phrase.

Indeed, the original poem contains the two lines guātián bù nà lǚ, lǐ xià bù zhěngguān (“瓜田不納履，李下不整冠”), describing a code of conduct: “Don’t adjust your shoes in a melon field and don’t tidy your hat under the plum trees (in order to avoid suspicion of stealing)”. Whenever an interpreter hears this idiom, translating it word for word would be time-consuming, especially into a morphologically rich language like Italian. Therefore, in an interpreter-training course students should not only be taught the origin of the most commonly used Chinese idioms but also be given a possible ready-made rendering that they can use whenever they hear one. As far as guātián lǐ xià (“瓜田李下”) is concerned, I usually suggest that Chinese-speaking students use the simple and straightforward adjective “sospettoso” (suspicious) or the noun “sospetto” (suspicion).

It goes without saying that the original flavor is lost in the interpreted version; however, we should bear in mind that interpreters are not mere translators. They aim at facilitating communication between people who speak different languages and have different cultural backgrounds. Therefore, interpreters need to understand the concept of what is being said in order to convey it in their target language. This concept is essential for conference interpreting because “l’interprétation n’est pas traduction orale de mots mais elle dégage un sens et le rend explicite pour autrui” [interpretation is not the oral translation of words; it extracts a sense and makes it explicit for others] (Seleskovitch 1968: 34). According to Seleskovitch (1968), words are mere linguistic signs, and if the interpreter insists on translating verbatim, words might turn out to be an obstacle in his/her delivery. “Le mot est en effet un obstacle à surmonter et non une aide dès lors qu’il s’agit de comprendre l’enchaînement de plusieurs centaines, sinon de plusieurs milliers de mots” [the word is indeed an obstacle to be overcome, not an aid, when it comes to understanding the concatenation of several hundred, if not several thousand, words] (Seleskovitch 1968: 50).

However, not all Chinese idioms are metaphorical. Some do not derive from a specific story with a moral. These idioms may be succinct expressions of their original meanings, written in a way that would be intelligible only to a scholar of formal written ancient Chinese. An example is yán ér wú xìn (言而無信), which I always straightforwardly interpret as “inaffidabile” (unreliable), because it literally means “speak but no trust”. An educated non-native speaker of Chinese would not necessarily know that yán (言) functioned as a verb in ancient Chinese. That is why it is important to prepare students psychologically and to teach them possible strategies and coping tactics to solve problems that might occur while interpreting.

According to Lin (2003), quoted in Yang (2008: 95),

Chéngyŭ are known to be more easily grasped by native speakers than by learners of Chinese, since they have cultural references and are not taken as four individual characters but simply one meaning unit; therefore, native-speaking receivers of chéngyŭ do not need to go through deep linguistic analysis, since what is needed is some embedded cultural association, to proceed in communication25.

When it comes to TSL interpreting, there are usually three strategies for interpreting chéngyŭ: literal translation, neutral adaptation, and “trans-adaptation”. In the first case, chéngyŭ are translated literally, i.e. character by character. This strategy usually works for the most common chéngyŭ. If the interpreter perceives a strange reaction, or a lack of reaction, it most probably means that the deaf interlocutor does not know the chéngyŭ being used. In this case the interpreter should also play the role of educator and go on to explain the meaning of the chéngyŭ for the benefit of the other party. This is what happens when interpreters resort to the strategy that we like to call “trans-adaptation”. Neutral adaptation, on the other hand, simply aims at translating the meaning of the chéngyŭ without first translating it character by character. This strategy has its pros and cons. It is certainly time-efficient, simple and straightforward; however, in this way Deaf people will never increase their exposure to chéngyŭ and will never learn new ones, as hearing people do even through merely passive input. According to my source’s professional experience (interpreter A, personal communication, 2012), “Deaf people have an impressively powerful memory, which means that after seeing a chéngyŭ signed even just once, they will most probably remember it forever”.

25 For a detailed discussion, refer to Moratto (2010).

To conclude, Deaf people mainly rely on the interpreter’s paraphrase of the chéngyŭ in order to understand the core of the conversation; however, it seems opportune for interpreters to also introduce chéngyŭ in their signed speech, along with the paraphrase, so as to enhance Deaf people’s linguistic and cultural knowledge. In other words, in the case of signed languages, interpreters should also play the role of educators and act as a bridge linking two different cultures, i.e. the hearing one and the Deaf one.

4.7 Conclusion

At present, the academic literature lacks research on the comprehension and production of metaphor in sign language (and in TSL in particular), on the use of figurative language by deaf individuals who use spoken language or by hearing individuals who use sign language, and on the development of metaphor in children acquiring sign language as a first language.

After a first experimental and empirical phase of analysis of metaphorical use in TSL, it would also be interesting to investigate the acquisition and recall of metaphorical versus non-metaphorical TSL signs by native and non-native signers. TSL signs could also be used to test the explanatory power of the structural-similarity alternative against the conceptual-metaphor analysis. Thus, empirical research is warranted to shed light on other rhetorical aspects of TSL, including empirical data on the use of metaphor by signers, both native and non-native. This is a largely unexplored field, and it opens the road to a better understanding of the cognitive mechanisms underlying sign language, more specifically TSL, where research leaves much to be desired, and to a more thorough analysis, in a postmodern approach, of the same cognitive mechanisms in all other language modalities, in a comparative fashion. Also, as we will further explore in the sixth chapter, challenging areas such as figurative speech and metaphors have to be taken into consideration in the evaluation process.

In the next chapter, we will focus on the neurobiological studies which have irrefutably proven that sign languages are natural languages in every respect and not an artificial construct. An exhaustive literature review will be provided for the benefit of the reader.


CHAPTER FIVE Experiments

5.1 Introduction

Originally, this chapter was intended to show that the neurological effort underlying oral-language interpreting and sign-to-oral or oral-to-sign interpreting is alike; however, the findings of the study presented herein simply prove the intrinsic difficulty of sign language interpreting.

What motivated me in this research was an official statement of the Taiwanese government, namely the Department of Labor Affairs, which I recently happened to read and which was confirmed by my sign language interpreting colleagues, according to which a sign language interpreter can only receive a financial subsidy of no more than 1,600 NTD26 per hour. Moreover, as I was told by my source (interpreter A, personal communication, 2012), most of the time sign language interpreters cannot even have another interpreter with whom to alternate every twenty to thirty minutes.

My assumption is that this happens because sign language interpreting is not considered as “effortful” as oral language interpreting. When we talk about effort, we are thinking in terms of Daniel Gile’s Effort Model. However, this may not be the only reason, because we should also consider who pays the interpreters, as previously mentioned. Budget restrictions may also be part of the reason, insofar as the government needs to allocate money to other purposes as well.

26 Just for the record, oral language interpreters usually receive around 25,000 NTD per working day (usually made up of two to three hours in the morning, with plenty of breaks, and the same in the afternoon session) and always work (at least) in pairs in the interpreting booth.

After having elucidated the initial motivation for this chapter, we moved on to a detailed analysis of the relevant literature to identify a theoretical basis that could serve as a framework for our experiment.

As far as the efforts underlying the two modally different types of interpreting are concerned, we came across different and at times contradictory results. Even the results of this study seem contradictory at times. This will be further illustrated later in this chapter.

According to some studies, the effort seems to be smaller in sign language interpreting, whereas according to others the results seem to prove quite the contrary.

“When unimodal (speech-speech) bilinguals manipulate their two spoken languages that share the same auditory inputs and oral outputs, they must inhibit these two competing alternatives in sensory and motor systems. Bimodal (sign-speech) bilinguals, in contrast, can use these two languages at the same time because of their different receptors and articulators, [...] thus they expend less effort in inhibition27” (Emmorey et al. 2004).

Therefore, from this study, it would actually seem that the effort is inferior in bimodal interpreting, i.e. sign to oral or oral to sign language interpreting.

Different studies brought about different results. Kovelman et al. (2009), using functional near-infrared spectroscopy (fNIRS), a brain-imaging technique which tolerates body motion, studied sign-speech bilinguals while they performed overt picture-naming tasks in monolingual (only American Sign Language, ASL, or English in one block) and bilingual (either simultaneous or alternating) contexts.

27 The bold is mine.

The simultaneous condition required subjects to name pictures in ASL and in English at the same time, whereas the alternation condition required them to name pictures either in ASL or in English within one block. The results showed that the left posterior temporal regions, including the posterior superior temporal gyrus (STG) and the supramarginal gyrus (SMG), were more activated in bilingual conditions, whereas the left inferior frontal/anterior superior temporal areas were intensely activated in monolingual contexts.

Consequently, in this study, greater activation seems to indicate greater effort.

Figure 3

Brain organization


The greater recruitment of left posterior temporal regions in bilingual contexts is likely due to the semantic and phonological processing required by signed languages (Emmorey et al. 2002; Petitto et al. 2000; Corina et al. 1999; Kassubek et al. 2004; Emmorey et al. 2003).

The greater recruitment of left posterior temporal regions in bilingual contexts is also due to the increased need to integrate lexical and semantic information (Abutalebi et al. 2007; Chee et al. 2003).

Therefore, in the present chapter we decided to review some of the more significant studies, which show that the neurological foundation is very similar for spoken and signed languages. A large body of linguistic literature has also shown that sign language is a real language in terms of its phonology, morphology and grammar. These points support the argument we want to make in the present thesis: in other words, it does not seem justifiable to treat bimodal interpreters worse than unimodal ones.

After reviewing, in the present chapter, the relevant literature concerning sign language as a neurobiologically “real” language, we will move on to the part of the thesis in which I replicate Daniel Gile’s Effort Model Tightrope Hypothesis experiment, as previously described in the first chapter, applied this time to TSL.

This chapter brings together two experiments, namely a qualitative pilot study and a quantitative pilot study, which prove the complex nature of the TSL interpreting process. It concludes by emphasizing the importance of training and professional quality, which will be analyzed in detail in the next chapter, focused on TSL interpreting evaluation and assessment-related issues.


5.1.1 Sign languages are real languages: neurolinguistic evidence

Traditionally, neurobiological studies have been applied to spoken languages. This spurred some researchers to inquire whether the mechanisms they describe are valid only for spoken languages or, as should be the case for universal neurobiological mechanisms, for all languages, irrespective of modality.

The problem with traditional research is that, until not long ago, researchers did not view signed languages as languages in the sense intended by linguists. Therefore, extending these neurobiological studies to signed languages had a double effect: it proved that signed languages are indeed languages in every respect, the only difference being their transmission channel, and it asserted the universal nature of the neurolinguistic mechanisms that scientists and researchers had discovered.

The two research purposes differ only in perspective. In other words, for all those who did not believe sign languages to be on the same level as oral languages in terms of linguistic dignity, these studies, pioneered by Ursula Bellugi, definitively (or desirably so28) proved them wrong; for all those linguists and scholars who had no doubt that signed languages are indeed real languages, they were a way to measure the extensibility of the neurobiological bases which had been discovered for oral languages.

Bellugi has studied the neurological bases of sign language extensively, and her work has led to the discovery that the left hemisphere of the human brain becomes specialized for language, whether spoken or signed: a striking demonstration of neuronal plasticity (Bellugi and Studdert-Kennedy 1980; Klima and Bellugi 1988; Poizner and Klima 1987).

28 Unfortunately, some people nowadays are still convinced of the inferiority of sign languages, in terms of linguistic completeness, compared to oral languages. This is the attitude at the basis of the discriminatory policies regulating Taiwan Sign Language interpreters.

According to MacSweeney et al. (2008), lesion and neuroimaging studies indicate that the neural systems supporting signed and spoken language are very similar: both involve a predominantly left-lateralised perisylvian network. In other words, some underlying neurobiological mechanisms are modality-independent.

Sign language research has shown that language processing engages left perisylvian regions, regardless of language modality. This has been demonstrated from the level of phonology (Petitto et al. 2000; MacSweeney et al. 2008b) to discourse (Braun et al. 2001; MacSweeney 2008).

Another important discovery is that sign languages and gesture do not share identical neural networks (MacSweeney 2008). However, in this neurolinguistic perspective, gesture is not perceived as something rudimentary and primitive, but rather as something important for the development of sign language, because the ‘linguisticization of gesture’ (also termed ‘grammaticalization’) seems to be at the genesis of most signs in sign languages (Janzen and Shaffer 2002).

Spoken languages and signed languages are conveyed through two different modalities; this is an undeniable fact. Spoken languages use the audio-oral-articulatory channel, whilst signed languages resort to the spatial-visual one.

The articulators in sign language (the hands, the upper torso and so on) are visible, whereas the vocal articulators are not. As far as perception is concerned, sign languages require high spatial resolution and low temporal resolution, whereas exactly the opposite holds for spoken languages.

These differences are also reflected in everyday linguistic behaviors, which vary noticeably between the two modalities. For instance, it is common for deaf signers at a table to all sign happily at the same time, because the conversation of two parties will not affect a third party, who in turn will be focused on his or her interlocutor. Because of the different articulators, the equivalent would be perceived as very loud and noisy behavior in spoken languages. Likewise, it is possible to sign to a person at the opposite side of a room, whereas it would be considered rude and inappropriate to shout at someone far from us. At the same time, though, whispering is essentially impossible29 in signed languages, because of their articulatory nature, insofar as the objects of perception are visual rather than acoustic events. In other words, signing, no matter how intentionally “whispered”, can always be seen at a conspicuous distance.

These differences have led deaf people to grammaticalize spatial elements. The use of space characterizes all sign languages and serves important grammatical functions. Space can be used in a ‘topographic’ manner, to map the position and orientation of objects or people in real-world space, or in a non-topographic, grammatical manner, as in the sentence “Mary phoned John”, where in practically all sign languages the agent and the patient are assigned imaginary spots in the visual-spatial field in front of the signer.

Scholars and linguists have extended neurobiological experiments to see whether, beyond these differences, the neural bases of the two modally different languages actually share some similarities, which might reflect the neural underpinnings of core language functions (MacSweeney 2008).

Lesion studies are an important branch of neurolinguistics aimed at mapping language-related brain areas. These studies have traditionally been carried out on hearing subjects, with the finding that left hemisphere damage affects language ability.

This led to the discovery that the left hemisphere is the one most directly linked with language functions.

29 Although signers can sign with smaller movements, which might be similar to whispering in spoken languages.

Some neurolinguists conducted the same lesion studies on native signers and found that left hemisphere damage leads to severely impaired language processing (aphasia) in signers as well, whereas right hemisphere damage, which would be involved if signing were a mere gestural activity, does not (Atkinson et al. 2005; Corina 1998; Hickok et al. 1996, 1998; Marshall et al. 2004; Poizner et al. 1987).

In other words, neuroimaging studies also indicate a crucial role for the left hemisphere in signed language processing (MacSweeney 2008), as in spoken languages. More specifically, both covert and overt sign production rely on the left inferior frontal gyrus (Braun et al. 2001; Corina et al. 2003; Emmorey et al. 2003; MacSweeney 2008; McGuire et al. 1997; Petitto et al. 2000; San Jose-Robertson et al. 2004), which is exactly what happens for spoken languages as well (Kassubek et al. 2004; Emmorey et al. 2007; MacSweeney 2008).

Some researchers (Capek et al. 2008; Corina et al. 2007; MacSweeney 2002, 2002b, 2004, 2006; Meyer et al. 2007; Neville et al. 1998; Newman et al. 2002; Sakai 2005; Waters et al. 2007) have shown that, in addition to the left inferior frontal gyrus, comprehension of sign language in native signers also activates the left superior temporal gyrus and sulcus, as can be seen in Figure 4, taken from MacSweeney (2008).


Figure 4

(MacSweeney 2008)

From figure 4, it seems that the neural systems supporting sign language and spoken language are indeed very similar.

As we can see from figure 5, taken from MacSweeney (2008b), these similarities also extend to phonological similarity judgments in response to pictures.

Figure 5

MacSweeney (2008)

Neuroimaging studies also prove that signed languages are natural languages and not a pantomime or a gestural way of expressing primitive ideas.

As previously mentioned, gesture is not perceived as something rudimentary and primitive, but rather as something at the basis of the further development of sign languages, insofar as the ‘linguisticization of gesture’ (also termed ‘grammaticalization’) seems to be at the genesis of most signs in sign languages (Janzen and Shaffer 2002).

According to Hickok (1996), there is no direct neurobiological link between aphasia and apraxia. Some patients are unable to understand pantomimes and gestures such as yawning, stretching, or brushing one’s teeth, while their comprehension of the sign for “brushing one’s teeth” is unimpaired, as iconic as that sign may be.

This is confirmed by Corina et al. (2007), whose research group found greater activation in left perisylvian regions for ASL signs than for the observation of grooming gestures (e.g. scratching) and transitive gestures (e.g. eating an apple).

Figure 6

Composite image illustrating activation in native Deaf signers (n = 10) for American Sign Language (red) and non-linguistic actions (green). The red shading reflects the contrast of ASL activation minus non-linguistic gestures. The green shading reflects the contrast of non-linguistic gestures minus ASL. ASL stimuli were single ASL signs. Non-linguistic gestures comprised intransitive actions (e.g., self-grooming) and transitive object-oriented actions (e.g., biting an apple). The figures shown represent the rendering of the SPM output at p < .005 threshold and display cluster sizes of 20 voxels or greater.

Data reproduced from Corina et al. (2007).

Furthermore, we can see a plastic hemispheric reorganization in Deaf signers: emotional facial expressions, typically processed with right hemisphere dominance in hearing non-signers, can be processed bilaterally or predominantly in the left hemisphere in deaf signers. This likely reflects reorganization due to the wide range of functions that the face, along with its expressions, can serve in signed languages (MacSweeney 2008; McCullough et al. 2005).

Another factor which unequivocally demonstrates that signed languages are languages in every respect is related to language learning. Mayberry’s studies (Mayberry et al. 2002; Mayberry and Lock 2003; Mayberry 2007) and an initial neuroimaging study (MacSweeney et al. 2008b) indicate that exposure to a language early in life, be it signed or spoken, is required to establish the neural infrastructure that supports not only that language but also any language learned later in life. This clearly demonstrates the equal linguistic nature of spoken and signed languages. As Sacks (1989: 88) duly points out, “if Deaf children are not exposed, early, to good language or communication [speech or sign does not matter], there may be a delay (even an arrest) of cerebral maturation, with a continuing predominance of right hemisphere processes and a lag in hemispheric ‘shift’”.

At the same time, findings from studies contrasting signed language and spoken language processing in hearing people with Deaf signing parents support the conclusion from between-group studies that signed languages and spoken languages engage very similar neural systems (Braun et al. 2001; Emmorey et al. 2005; Soederfelt et al. 1997).

As previously mentioned, a key figure in the neurolinguistic analysis of sign language has been, and still is, Bellugi. Her most important finding was that the left hemisphere of the brain is as indispensable for sign languages as it is for spoken languages. She also found that signers use some of the same neural pathways that are needed in the processing of grammatical speech (Bellugi 1980).

The fact that sign languages are mainly processed in the left hemisphere was also proven by Neville (1978; 1988; 1989). She demonstrated that sign languages are processed more efficiently and accurately when presented in the right visual field, which means they are processed in the left hemisphere, because information from each side of the visual field is always processed in the opposite hemisphere. Also, as previously mentioned, aphasic signers are not impaired in non-linguistic visual-spatial abilities; conversely, signers with right hemisphere strokes may show spatial disorganization but retain perfect signing ability despite their severe visual-spatial deficits (Sacks 1989).

In short, signers show exactly the same cerebral lateralization as oral speakers, irrespective of the modality through which their language is conveyed and even though their articulators are visuo-spatial in nature.

Therefore, according to neurolinguistic studies, sign language is a language at all levels and in every respect, as proven by the neurobiological mechanisms underlying its processing, even though it is visual rather than auditory and spatially rather than sequentially organized; as Sacks (1989: 76) points out, “as a language, it is processed by the left hemisphere of the brain which is biologically specialized for just this function”.

This is also proof of the plasticity of the human brain: it is as if the left hemisphere in signers transformed visual-spatial characteristics into a whole new analytical system, making sign language a language of its own, with its own rules, and developing the potential intrinsically present in the neurobiological mechanisms of the human brain.

As far as plasticity is concerned, another interesting study is worth mentioning. Penhune et al. (2003) studied congenitally deaf individuals. Their study provides a unique opportunity to understand the organization, and the potential for reorganization, of the human auditory cortex. They used magnetic resonance imaging (MRI) to examine the structural organization of two auditory cortical regions, Heschl’s gyrus (HG) and the planum temporale (PT), in deaf and hearing subjects.


Figure 7

Heschl’s gyrus (HG) and the planum temporale (PT)

The results show preservation of cortical volume in the HG and PT of deaf subjects deprived of auditory input since birth. Measurements of grey and white matter, as well as the location and extent of these regions in the deaf, showed complete overlap both with matched controls and with previous samples of hearing subjects. The results of the manual volume measures were supported by findings from voxel-based morphometry analyses, which showed increased grey-matter density in the left motor hand area of the deaf, but no differences between the groups in any auditory cortical region. This increased cortical density in motor cortex may be related to more active use of the dominant hand in signed languages. Most importantly, the expected interhemispheric asymmetries in HG and PT, thought to be related to auditory language processing, were preserved in these deaf subjects. These findings suggest a strong genetic component in the development and maintenance of auditory cortical asymmetries that does not depend on auditory language experience. Preservation of cortical volume in the deaf suggests plasticity in the input and output of auditory cortex that could include language-specific or more general-purpose information from other sensory modalities.

In Poizner et al. (1987), there is an interesting case concerning the grammatical relocalization of topographic space. The patient analyzed, Brenda I., had a large right-hemisphere lesion and as a consequence neglected the left side of space. When she described a room in sign, she left the left side of topographic space completely void; in her right topographic space, however, she signed correctly, including spatial loci and objects in the left portion of that space. In other words, her topographic space, controlled and processed by the right hemisphere, was not functioning properly because of her lesion, whereas her syntactic-linguistic space functioned faultlessly.

What we have come to see so far is that linguistically speaking, sign languages are as rich and complex as any oral language, despite the common misconception that they are not "real languages". Professional linguists have studied many different sign languages and found that they exhibit the fundamental properties that exist in all oral languages (Klima and Bellugi 1989; Sandler and Lillo-Martin 2006).

In other words, sign languages are not a form of pantomime; they are conventional, often arbitrary (as are oral languages) and do not necessarily have a visual relationship to their referent, much as most oral language is not onomatopoeic.

Due to the channel of transmission, iconicity seems to be more systematic and widespread in sign languages than in spoken ones; according to linguists, however, this difference is not categorical (Johnston 1989). The visual-spatial modality allows the human preference for close connections between form and meaning, present but suppressed in oral languages, to be more fully expressed, which accounts for the slightly higher degree of iconicity in sign languages (Taub 2001).

However, one should be careful with linguistic definitions because this does not mean that sign languages are a visual rendition or a spatial representation of an oral language. They have complex grammars and syntactic rules of their own, and can be used to discuss any topic, from the simple and concrete to the lofty and abstract, from politics and economics, to religion and philosophy.

Just like oral languages, sign languages have a hierarchical organization.

They organize elementary, meaningless units (phonemes, which in the case of sign languages were in the past called cheremes) into meaningful semantic units. Just as in oral languages, these meaningless units are represented as (combinations of) features, although coarser distinctions are often also made in terms of handshape (or handform), orientation, location (or place of articulation), movement, and non-manual expression.

Linguistic features found in many different sign languages include classifiers, a high degree of inflection, and topic-comment syntax. The existence of classifiers is a trait that sign languages share with most East Asian languages.30 Classifiers are not used in English (for instance, "people" is a countable noun, and to say "three people" no extra word needs to be added) but are common in East Asian languages (where the equivalent of "three people" is often "three classifier people").

More than oral languages, sign languages can convey meaning by simultaneous means, e.g. by the use of space, two manual articulators, and the signer's face and body. Though there is still much discussion on the topic of iconicity in sign languages, classifiers are generally perceived as highly iconic, as these complex constructions function as predicates that may express any or all of the following: motion, position, stative-descriptive, or handling information (Emmorey 2002).

30 Although there is a difference because in signed languages classifiers occur with verbs, whilst in Asian languages they occur with nouns (Chang, Su and Tai 2005).

Iconicity has in fact played an important role in debates in the history of sign language research. In the past, scholars thought that 'real languages' must consist of arbitrary relationships between form and meaning: the fewer iconic forms a language had, the higher its linguistic dignity, so to speak. Conversely, if a sign language consisted of signs with iconic form-meaning relationships, it could not be considered a real language. As a result, iconicity as a whole was largely neglected in the research of the pioneers of sign language linguistics.

According to Taub (2001), in a cognitive linguistics perspective, iconicity is not merely defined as a relationship between linguistic form and a concrete, real-world referent; it is more properly defined as a set of selected correspondences between the form and meaning of a sign.

Therefore, as confirmed by Wilcox (2004), iconicity is grounded in a language user's mental representation (technically called "construal" in Cognitive Grammar). It is defined as a fully grammatical and central aspect of a sign language rather than a peripheral phenomenon.

In this perspective, signs have more flexibility because they can be either fully or partly iconic (Wilcox 2000). That is to say, contrary to what the pioneers of sign linguistics believed, iconicity does not make a language any less of a language; it is simply one of the means a language has to express itself, and the visual modality certainly lends itself to more iconic forms.


5.1.2 A review of neurolinguistic research in simultaneous interpreting (SI)

This section is dedicated, in its first part, to a review of neurolinguistic research in simultaneous interpreting and, in its second part, to two experimental/behavioral studies applied to TSL interpreting.

The first part, dedicated to neurolinguistic interpreting studies, aims at shedding light on some of the major empirical studies that have been carried out in recent years on simultaneous interpreting from a neurolinguistic point of view. Some of the issues covered include the development of expertise in simultaneous interpreting (SI), translation directionality, neuronal adaptation, and the cognitive complexity of SI.

The second part is focused on the tightrope hypothesis experiment along with the review of two neurobiological studies concerning the bilingual brain in bimodals, which can be applied also to sign language interpreters, seen as bimodal bilinguals.

In the present chapter, I will replicate, after adjusting its design, Daniel Gile's Effort Model Tightrope Hypothesis experiment (Gile 1995).

This experiment is focused on the so-called 'competition hypothesis', which can be represented in the following way, with the total processing capacity consumption TotC associated with interpreting at any time represented as a 'sum' (not in the pure arithmetic sense) of the consumption for L(anguage), the consumption for M(emory) and the consumption for P(roduct), with further consumption for C(oordination) between the Efforts, that is, the management of capacity allocation between them:

(a) TotC = C(L) + C(M) + C(P) + C(C)

(b) C(i) ≥ 0, i = L, M, P

(c) TotC ≥ C(i), i = L, M, P

(d) TotC ≥ C(i) + C(j), i, j = L, M, P and i ≠ j

Equation (a) represents the total processing capacity consumption; inequality (b) means that each of the three Efforts requires some processing capacity; inequalities (c) and (d) mean that the total capacity consumed is at least as high as that required by any single Effort or by any pair of Efforts.
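The competition hypothesis can be illustrated with a toy numerical sketch. All capacity values below are invented for illustration; Gile's model is conceptual rather than numeric, and the saturation margin is an assumption of mine, not part of the model.

```python
# Toy sketch of Gile's competition hypothesis (illustrative values only).

def total_consumption(c_l, c_m, c_p, c_c):
    """Equation (a): TotC = C(L) + C(M) + C(P) + C(C)."""
    for c in (c_l, c_m, c_p, c_c):
        if c < 0:
            # Inequality (b): each Effort requires some non-negative capacity.
            raise ValueError("each Effort's consumption must be >= 0")
    return c_l + c_m + c_p + c_c

def near_saturation(tot_c, available=1.0, margin=0.05):
    """Tightrope hypothesis: interpreters usually work close to total
    available capacity, so a transient spike in one Effort's demand can
    push TotC past the limit and produce errors/omissions."""
    return tot_c >= available * (1 - margin)

tot = total_consumption(0.35, 0.25, 0.30, 0.05)  # hypothetical allocation
print(round(tot, 2))         # 0.95
print(near_saturation(tot))  # True: working on the "tightrope"
```

Since TotC is bounded by the available capacity (inequalities (c) and (d)), any allocation this close to the limit leaves no spare capacity to absorb a sudden demand, which is exactly what the tightrope hypothesis predicts.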

Now, the idea that most of the time interpreters work near saturation level is the so-called 'tightrope hypothesis', which this experiment aims to test for sign language interpreters in relation to the oral interpreters analyzed as a control group. This 'tightrope hypothesis' is crucial in explaining the high frequency of errors and omissions that can be observed in interpreting even when no particular technical lexeme or other difficulty can be identified in the source speech (Gile 1989, 1995).

The precise aim of this investigation is to establish, in a sample of professionals interpreting a speech, whether there are indeed errors and omissions affecting segments that present no evident intrinsic difficulty. If there are, it is likely that they can be explained in terms of processing capacity deficits as predicted by the Effort Model (EM). The underlying rationale of this study is the following: one indication of the existence of such errors and omissions would be variability in the segments affected across the sample (at the level of words or propositions). If all subjects in the sample fail to reproduce adequately the same ideas or pieces of information, this would suggest an intrinsic 'interpreting difficulty' of the relevant segments (too specialized, poorly pronounced, delivered too rapidly, too difficult to render in the target language, etc.). Another indication could come from an exercise in which each subject is asked to interpret the same speech twice in a row. Having become familiar with the source speech during their first interpretation, subjects can be expected to correct in their second version many errors and omissions committed in their first. If, notwithstanding this general improvement from the first to the second target-language version, it were possible to find new errors and omissions in the second version in speech segments that were interpreted correctly the first time, this would be an even stronger indication that processing capacity deficits are involved. The method used will be the same as Gile's: target speeches will be videotaped and transcribed, and the transcriptions will be scanned for errors and omissions.
As Gile duly points out, this method is not without pitfalls, chiefly because of high inter-rater variability in the perception of what is and what is not an error or omission. To avoid these pitfalls, only instances of what appeared to me to be flagrant errors or omissions will be included in the analysis, and at least two further opinions from other interpreters have been requested to confirm that the errors and omissions identified were also considered such by them, so as to preserve validity by reducing the probability of 'false positives' (mistaking text manipulations considered acceptable by the subjects for errors and omissions). Unlike Gile's experiment, the present replication also includes a control group, in order to analyze the relationship between sign interpreters and their fellow oral interpreters in terms of Efforts. The analysis will then proceed by trying to determine: (a) how many subjects in the sample made an error or omission for each affected speech segment and (b) which errors or omissions were corrected in the second version of the target speech.
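Counts (a) and (b) could be computed along the following lines; the subject labels, segment identifiers and judgments are invented here purely for illustration, not drawn from the actual data.

```python
# Hypothetical sketch of the two tallies: (a) subjects affected per speech
# segment in the first rendition, and (b) new errors appearing only in the
# second rendition of the same speech.
from collections import Counter

# For each subject: the set of segment IDs judged as errors/omissions.
first_pass = {
    "S1": {"seg03", "seg07"},
    "S2": {"seg07"},
    "S3": {"seg12"},
}
second_pass = {
    "S1": {"seg09"},   # seg03/seg07 corrected, but seg09 is newly wrong
    "S2": {"seg07"},
    "S3": set(),
}

# (a) How many subjects erred on each segment in the first version:
# high counts suggest intrinsic segment difficulty, low scattered counts
# point to processing capacity deficits instead.
per_segment = Counter(seg for errs in first_pass.values() for seg in errs)

# (b) Errors in version 2 on segments rendered correctly in version 1:
# the strongest indication of capacity-related (tightrope) failures.
new_errors = {subj: second_pass[subj] - first_pass[subj] for subj in first_pass}

print(per_segment["seg07"])  # 2 subjects affected
print(new_errors["S1"])      # {'seg09'}
```

The set difference in (b) directly encodes the study's logic: any segment present in the second pass but absent from the first cannot be blamed on unfamiliarity with the source speech.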

Therefore, without resorting to fMRI, the high detection threshold adopted here for defining errors and omissions, intended to reduce the number of 'false positives' as far as possible, means that other phenomena that could have been used to measure cognitive load were not exploited. In particular, no attempt will be made to look at borderline cases, at the deterioration of linguistic (signed) output quality, or at changes in the prosody or quality of the interpreter's signing. If the low sensitivity of the tool makes it impossible to obtain convincing findings, more sensitive tools will have to be designed, and reliability could become a problem.

Therefore, the findings of this study will strengthen the case for the tightrope hypothesis and thus lend some support to the Effort Models as a conceptual tool to explain cognitive-constraints-based limitations not only in oral interpreters but also in TSL interpreters; in Gile's words, they may give some credibility to the idea that the usefulness of a concept or model in scientific exploration is not necessarily a function of its degree of sophistication.

In this section, I will focus on neurolinguistic research in simultaneous interpreting (SI), irrespective of the modality (oral or signed). I will first give a brief definition of SI, emphasizing the underlying reasons for its cognitive complexity (CC).

After that I will move on to an overview of the general issues encountered when dealing with SI from a cognitive point of view and will provide the reader with a quick excursus of neurolinguistic research on translation and interpreting in the bilingual brain. The first part will conclude with an analysis of Rinne et al.'s (2000) study on the translating brain. In the second part of the review, I will introduce two ERP studies on language processing in SI. The discussion will end with brief remarks on language switching, on neuronal adaptation, and on the development of expertise in SI, along with the possibility of searching for its neural correlates. This section is related to the rest of the thesis because the same neurobiological mechanisms underlie the interpreting task, whether it is oral, signed, or even bimodal. It therefore seems opportune to give a general picture of the kinds of neurolinguistic studies that have been carried out so far in interpreting studies, which can then be extended to sign language interpreting, just as in the initial part of the thesis we traced a picture of the neurolinguistic studies that have irrefutably proven that sign language is a language in every respect, the only difference being the channel of transmission.

SI is one of several interpreting modes, along with consecutive, whispered, relay and liaison interpreting. It can occur in two different modalities, namely oral or signed. In the oral modality, interpreters sit in a booth where they listen to a source language and simultaneously interpret the speech into a target language. Ideally, they should have a clear view of the meeting room and the speaker, and their booth (fixed or mobile) should meet ISO31 standards of acoustic isolation, dimensions, air quality and accessibility, as well as provide appropriate equipment.

In the signed modality, interpreters face the (deaf) audience and sign whatever the speaker is saying, and they should ideally be located in a position where everybody can see them clearly.

SI is a complex cognitive task, with a high cognitive load, thus it may also represent an interesting field of research for neurolinguists (Fabbro and Gran 1997).

In order to provide experimental support for the hypothesis that SI is a complex cognitive task, Darò and Fabbro (1994) carried out a study with 24 student interpreters who were asked to perform a digit-span task under four different conditions: listening, shadowing, articulatory suppression, and SI. A digit-span task is often used to measure the storage capacity of working memory.

Participants are read a series of digits (e.g. 8, 3, 4) and must immediately repeat the numbers back. If they do this successfully, they are given a slightly longer list (e.g. 9, 2, 4, 0), and so forth. The length of the longest list a person can remember in this fashion is that person's digit span. According to the results, performance was poorer after SI. This was interpreted as suggesting that SI was, indeed, the most cognitively complex of the four tasks. However, experimental studies of SI raise some basic methodological issues that have to be tackled when designing experiments.

31 International Organization for Standardization.
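The digit-span procedure just described can be sketched in a few lines; the simulated participant and the parameter values below are my own invention, not the Darò and Fabbro protocol.

```python
# Minimal sketch of a digit-span measure: present ever-longer digit
# lists and record the longest list reproduced correctly.
import random

def digit_span(recall, start_len=3, max_len=12, seed=0):
    """`recall` is a callable standing in for the participant: it takes
    the presented list and returns the participant's reproduction."""
    rng = random.Random(seed)
    span = 0
    for length in range(start_len, max_len + 1):
        digits = [rng.randint(0, 9) for _ in range(length)]
        if recall(digits) != digits:
            break            # first failure ends the test
        span = length        # longest list reproduced so far
    return span

# A toy participant who can hold at most 6 digits at a time.
toy = lambda digits: digits if len(digits) <= 6 else []
print(digit_span(toy))  # 6
```

In the actual study, the same measure would be taken after each of the four conditions, so that the post-SI drop in span can be compared against listening, shadowing and articulatory suppression.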

A critical issue, for example, is that professional interpreters do not abound, let alone professional sign language interpreters, which means that it is not always possible to obtain an adequate sample for a given study, as was the case for the present one. This may result in studies lacking statistical power. Other methodological problems concern the limited ecological validity of the experimental setting and of the stimulus material. As far as technological equipment is concerned, scholars traditionally believed it was impossible for interpreters to speak inside fMRI machines because muscle movement would have affected the results and the objectivity of the data. This is also one of the main reasons why, as of now, no neurolinguistic study has been carried out on simultaneous interpreting in the signed modality. Recently, however, technological developments have made it possible to speak in a moderate tone while reducing the volume of the linguistic output. In this way, interpreting researchers can analyze in more detail the cognitive processing leading to the final linguistic output (Chan, 2011, personal communication). For a further discussion of methodological issues, which is beyond the scope of the present section, readers can refer to Frauenfelder and Schriefers (1997) and Gile (2000).

SI is a type of oral or signed translation and as such it is directional, i.e. it proceeds from one language (source) into another (target), irrespective of the modality. According to Jakobson (1971), translation skills are related to posterior language areas (temporo-parietal regions), as shown by the fact that a lesion in these areas impairs the ability to translate from one language into the other.

However, already in 1951, the Italian scholar Gastaldi had reported that some polyglot aphasics had lost the ability to translate in both directions (Gastaldi, 1951).

This seems to be in line with the hypothesis postulated by Paradis, according to which there are two different translation components, one for L1 to L2 and another for L2 to L1 (Paradis 1985, 1993). These two components are under constant control in SI, because the language being used has to be activated while the other is inhibited. This is also in line with the Activation Threshold Hypothesis (ATH), initially proposed by Paradis (1985, 1993) to account for differential recovery in polyglot aphasia and more recently also applied to the study of language attrition (Köpke, 2002). The ATH specifies the relation between the frequency of use of a linguistic item and its activation and availability to the language user. Accordingly, it is assumed that linguistic items have thresholds that change on the basis of frequency and recency of use. As a general rule of thumb, low activation thresholds yield faster and easier access than higher thresholds.
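The frequency/recency logic of the ATH can be illustrated with a toy model. The class, parameter values and update rules below are my own invention for illustration, not Paradis' formalism: each use lowers an item's threshold, and disuse lets it drift back up.

```python
# Toy illustration of the Activation Threshold Hypothesis: frequent or
# recent use lowers an item's activation threshold (easier access);
# disuse raises it again (attrition-like effect).
class Item:
    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def use(self, boost=0.3):
        # Each activation lowers the threshold, down to a floor.
        self.threshold = max(0.1, self.threshold - boost)

    def decay(self, rate=0.05):
        # Each interval of disuse raises the threshold, up to a ceiling.
        self.threshold = min(1.0, self.threshold + rate)

    def access_cost(self):
        # Lower threshold -> fewer activating impulses needed to retrieve.
        return self.threshold

word = Item()
word.use(); word.use()          # two recent activations
print(round(word.access_cost(), 2))  # 0.4
for _ in range(5):              # five intervals of disuse
    word.decay()
print(round(word.access_cost(), 2))  # 0.65
```

On this reading, the SI activation/inhibition pattern amounts to keeping the working language's items at low thresholds while the inhibited language's thresholds are transiently raised.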

Activation and inhibition mechanisms appear to account for the control of multiple languages in the brain (Green, 1986; Paradis, 1993) as well as for changing dominance patterns. The ATH assumes that items (or languages) that are more frequently activated need less stimulation to be reactivated than items (or languages) that are less frequently activated (Paradis, 1985, 1993). In other words, when a particular linguistic item has a high activation threshold, more activating impulses are needed to reactivate it (Paradis, 1997; Gürel, 2004). In the literature, there are many neurolinguistic reports further establishing the validity of the directional translation components. For example, Aglioti and Fabbro (1993) reported the case of a patient who, due to a vascular lesion to the basal ganglia (a subcortical structure of the frontal-lobe system of the left hemisphere), was no longer able to translate passively.32

However, in interpreting studies (IS), there is little experimental evidence in support of any directional effect. In one of the few neurolinguistic studies available, Rinne et al. (2000) compared interpreting from and into the native language. They found more extensive activation during active translation, i.e. into L2, possibly reflecting differences in difficulty between the two translation directions. In the same study, Rinne et al. (2000) also contrasted SI and shadowing using PET. The brain areas selectively activated during SI (after subtraction of the areas activated in shadowing) were those typically associated with lexical retrieval, working memory, and semantic processing. Similarly, another PET study (Price et al. 1999) reported that during SI interpreters exhibited brain activation patterns modulated by the direction of translation: there was more extensive activation in the left dorso-lateral frontal cortex during L1-to-L2 than during passive translation. To the best of my knowledge, only two electrophysiological studies have investigated SI, and neither used auditorily presented material as stimuli or compared SI with matched controls to check for training-related neural adaptation. The first study was carried out by Proverbio et al. (2004), who examined native Italian simultaneous interpreters and monolingual control subjects during a semantic congruency processing task: 11 right-handed professional interpreters whose L1 was Italian and whose average age of acquisition (AOA) of L2 was 9.6. In this study, code-switching was treated as SI-specific and analyzed by way of ERP. The interpreters were presented with four conditions in a block design, i.e. Italian unmixed, English unmixed, English mixed (English final words), and Italian mixed (Italian final words). Each block included semantically congruent and incongruent trials. The interpreters' N400 responses were significantly larger to L2 than to L1, due to the differences between mixed and unmixed conditions. There also seemed to be a different functional organization of semantic integration, due to the later age of acquisition of L2. Later, in Proverbio et al. (2008), the research team also investigated the temporal dynamics of interpreters' brain responses while they processed a visually presented letter-detection task (i.e. word vs. non-word discrimination). According to the results, there seemed to be faster and more efficient access to the lexicon for L1, regardless of L2 proficiency.

32 For further examples, see Fabbro and Paradis (1995).

The study by Rinne et al. (2000) is, to the best of my knowledge, the only study using PET that focuses not on the single-word level (like Klein et al. 1995 or Price et al. 1999) but on SI as a global complex cognitive task. As previously mentioned, the brain areas selectively activated during SI (after subtraction of the areas activated in shadowing) were those typically associated with lexical retrieval, working memory, and semantic processing. In this study, the brain activation patterns of eight right-handed, healthy professional Finnish-English interpreters (32-56 years, 4 women and 4 men) were measured with PET while they simultaneously interpreted auditorily presented texts (in both directions), shadowed (in both languages), or rested in silence. According to the results, passive translation recruited a region anterior to Broca's area as well as the left supplementary motor area. Left prefrontal activations, including area 46, are often observed during various verbal encoding and working-memory tasks.

Fig. 8


Rinne et al. (2000)

In these images, we can see significant rCBF increases related to SI: averaged PET subtraction images of statistically significant increases in rCBF when (A) English-to-Finnish translation is contrasted with shadowing of English, (B) Finnish-to-English translation is contrasted with shadowing of Finnish, and (C) (Finnish-to-English translation minus shadowing of Finnish) minus (English-to-Finnish translation minus shadowing of English) is evaluated. Thus, the brain areas selectively activated during SI (after subtraction of the areas activated in shadowing) were those typically associated with lexical retrieval, working memory, and semantic processing. Active interpretation, on the other hand, yielded more extensive activation increases in and around the above-mentioned left frontal regions. Left inferior temporal activity was also observed: this is the basal temporal language area, which is related to word-finding and semantic processing. To conclude, this study shows that in SI cerebral activation patterns vary according to the direction of interpretation and are more extensive in the language-dominant hemisphere when interpreters translate into their L2. Changes in cortical activation as a function of translation directionality were limited to the language-dominant hemisphere, probably due to the increased cognitive difficulty of active interpretation.


Interpreters are by definition polyglots, or at least bilinguals, and at times bimodal bilinguals, as in the case of sign-to-oral and oral-to-sign interpreting, insofar as they are highly proficient in at least two languages. Bilinguals are able to transfer and/or translate from one language to another at different moments of their speech, or to switch between the two even within the same sentence. Simultaneous interpreters have to deal with language-switching challenges and strategies on a daily basis. As previously mentioned, the ERP study by Proverbio et al. (2004) aimed at investigating the neurofunctional bases of the language-switching mechanism in language reception. The eleven professional interpreters who participated in the study had to judge the meaningfulness of the final words of short sentences. Their RTs (response times) along with their ERP components were analyzed by the authors. As previously mentioned, the study comprised four different conditions: Italian unmixed, English unmixed, English mixed and Italian mixed. As far as the stimuli were concerned, in the Italian unmixed condition the participants had to judge the meaningfulness of sentences such as "le perdite ammontano a circa un miliardo di dollari" (the losses amount to about one billion dollars), in which the final word was perfectly coherent with the rest of the sentence. The Italian unmixed and senseless stimuli were sentences such as "i tempi sono finalmente prodigati" (the times are finally squandered). In the English block, the unmixed inputs were sentences such as "the Lebanese government must maintain order", whilst the English unmixed and senseless stimuli were sentences like "the proposal aims to establish a chicken". As for the mixed conditions, the stimuli included English mixed "il rimedio sarà peggiore del (the cure will be worse than the) disease", English mixed and senseless "le piccole imprese hanno la possibilità di (small enterprises have the possibility to) extract", Italian mixed "I have absolute confidence in her abilità (ability)" and Italian mixed and senseless "many workers are feeling a sense of ricamo (embroidery)".

The analysis of the behavioral data (RT) shows that interpreters are slower than controls (Italian monolinguals) at responding to L1 sentences. Both groups are faster with the right hand and with congruent final words. However, interpreters are faster at responding to unmixed sentences and also faster at reading English sentences ending with Italian words (L2 -> L1) than vice versa (L1 -> L2). The ERP components (N1 and N400), for their part, showed the so-called semantic incongruence effect. Interpreters show a delayed onset of the negativity related to semantic incongruence (the so-called latency phenomenon), both at N1 and N400 levels and in both their native language and their L2. We could speculate that this has to do with interpreters' training, insofar as during their intensive training they are always encouraged to wait for more input and avoid hasty conclusions. As for the ERP amplitude of the semantic incongruence effect, both groups showed greater negativity to incongruent words. Finally, as for the code-switch effect, in the interpreters the negativity to incongruent words at the N1 level was larger when switching to English than to Italian words. The N400 was larger in the mixed condition and significantly larger in response to English words than to Italian ones in that condition. The enlarged N400 effect on English final words, an asymmetry in switching cost, is probably due to the later age of acquisition of L2, or perhaps to increased executive/cognitive control when switching to a less dominant language (Moreno et al. 2002).

Age of acquisition and level of proficiency are indeed two factors affecting bilinguals' neuronal networks in language processing tasks. Many interpreting teachers wonder whether SI training plays a role in neuronal adaptation for interpreters. Elmer et al. (2010) examined the impact of professional, long-term language training on auditory word processing and tried to disentangle its effect from that of proficiency and age of acquisition. Participants had to judge whether auditorily presented disyllabic noun pairs, both within and across German and English, were semantically congruent or not. Eleven professional interpreters participated in the study; they worked only passively, from English into German.

The study also included eleven controls matched for L2 (English) AOA, level of proficiency and exposure (statistically, there were no significant differences between the two groups). According to the results, the interpreters could not benefit from a German prime word when English was the target language. From a behavioral point of view, both groups showed significantly longer RTs in the English-English condition. However, controls committed significantly more errors than interpreters in judging the semantic relatedness of EE noun pairs. Furthermore, it seems that long-term L2-to-L1 SI training makes active translation more troublesome, which is consistent with previous studies, like Rinne et al. (2000) and Proverbio et al. (2004), and which could indicate training-related functional reorganization, although further research is warranted.

The N400 to incongruent trials showed significant group differences in GG, GE, and EE trials, but not in EG, where there was an enlarged N400 in the interpreter group between 300 and 400 ms. This might reflect training-related altered sensitivity to lexical-semantic processing, or it could also be interpreted as SI co-activating more lexical-semantic neighborhoods in interpreters' mental lexicon.

We can claim that, irrespective of the modality, SI is a complex cognitive task made up of several sub-components and sub-skills, such as language comprehension, production, output monitoring and transfer mechanisms from source to target language, plus a general coordination of these different processes (Gile 1995, 1999).

It is quite understandable, then, that SI requires a long and intensive period of training, leading to a difference between novice (effortful) and expert or professional (automatic) interpreters. It can be assumed, moreover, that changes occur in brain activity or functional structure during the period in which interpreting skills are acquired. Future research could develop this aspect further with longitudinal studies focusing on the development of expertise in interpreting, which could shed some light on the brain plasticity of oral and signed language interpreters.

All this makes us understand that what goes on in the brain of sign language interpreters is no less complicated than what happens in the minds of oral interpreters. The next section will focus on a recent study by Emmorey and McCullough (2009) analyzing the bimodal bilingual brain. Bimodal bilinguals are people who are fluent in both a signed and an oral language. I decided to review this study because it is strictly related to our topic, namely sign language interpreters, insofar as they too are bimodal bilinguals, working with two modally different languages.

In the article the two authors discuss the effects of bimodal bilingualism on both behavior and brain organization, and report an fMRI study that investigated the perception of facial expression, a domain where experience with sign language is likely to affect functional neural organization. The authors claim that sign language experience has long-term effects on mental imagery and motion processing.

While auditory deprivation leads to an enhanced ability to detect and attend to motion in the visual periphery, acquisition of a sign language leads to atypical lateralization of motion processing within the brain. Motion processing is associated with area MT/MST within the dorsal visual pathway, and processing within this region tends to be bilateral, or slightly right-lateralized. However, several studies have found that both hearing and deaf signers exhibit a left-hemisphere asymmetry for motion processing (Bavelier et al. 2001; Bosworth and Dobkins 2002; Neville and Lawson 1987).

According to Emmorey and McCullough (2009), the evidence indicates that the functional neural organization for both linguistic and non-linguistic processing can be affected by knowledge and use of a signed language. In addition, the bimodal bilingual brain appears to be uniquely organized, such that its neural organization sometimes patterns with that of deaf signers and sometimes with that of monolingual speakers; this also holds for sign language interpreters.

In the study, the two authors also carry out an experiment on face recognition and its neural correlates, which will not be reviewed in the present thesis because it goes beyond the scope of our discussion.

On a general note, we can say that bimodal bilingualism can uniquely affect brain organization for language and for non-linguistic cognitive processes. The findings of the two authors, along with those of MacSweeney et al. (2002), suggest that bimodal bilinguals (hearing signers) recruit more posterior regions within left superior temporal cortex than deaf signers when comprehending sign language. This different neural organization is hypothesized to arise from preferential processing of auditory speech within more anterior STS regions, and possibly from the need to segregate auditory speech processing from sign language processing within this region. In other words, the neural substrate that supports sign language comprehension in bimodal bilinguals is identical neither to that of deaf signers nor to that of hearing speakers.

This is further proof of the plasticity and reorganizational abilities of the brain.

5.2 Qualitative and quantitative experiments

5.2.1 Qualitative pilot study: quality assessment

As we will see in the following chapter, there are many factors involved in determining the quality and the success, or lack thereof, of sign language interpreting.

Before bringing together all the factors analyzed so far into a somewhat simplified evaluation grid, I decided to analyze a speech interpreted into TSL together with a bilingual participant (fluent in both Mandarin and TSL), who prefers to remain anonymous and whom I will refer to as Mark, in order to address the issue of quality from the perspective of the user (a Deaf evaluator33).

The interpreted speech is a question and answer (Q&A) session taken from the Taiwan Presidential Election debate (臺灣總統大選辯論). Together with my deaf participant, I aimed to analyze the simultaneous interpretation into TSL which was provided live during the debate.

First of all, I would like to underline the reason why I chose this type of speech and not, for example, TV news, which also happens to have TSL interpretation.

The reason is simple yet fundamental for any further discussion on the topic.

There are different types of simultaneous interpretation, both in oral and in signed languages, among which is interpretation with or without a script.

What happens with TV news interpreting is that the sign language interpreter has either previously read or prepared the news s/he is going to interpret, or does it with a script, just like the anchor. This does not happen at a presidential election debate, where everything happens on the spot, live, with no previous preparation (apart, it goes without saying, from the required command of the sign language). It is therefore more interesting to see the strategies that the interpreter uses to cope with the difficulties that might come up during the interpretation task.

I divided the analysis into three steps. First, I carried out an overall analysis of the interpretation. Second, I recruited a native signer to help me with a more detailed analysis of the TSL part. Finally, I recruited a hearing sign language interpreter who, in turn, provided me with some overall comments. Both participants were duly paid for their participation in this analysis.

The most interesting aspect emerging from this study is that the direct target of TSL interpreting, that is to say the Deaf community, is not always able to benefit from it the way they should and could, for various reasons that will be discussed later. On the contrary, hearing TSL interpreters seem to have a more thorough understanding of the message, both because they are helped by the aural input and because of their broader background knowledge and different logical mechanisms.

33 The informant is fluent in Chinese thanks to hearing aids. However, his native language is TSL.

The question which remains to be explored is how to transform or improve the interpreting service so that those who need it most, i.e. the Deaf community, may benefit from it fully and without restraint.

I will first report the transcription of the spoken part (no subtitles were available, so I listened to it and transcribed it myself) together with the (not always correct) gloss of the TSL interpreted version as grasped by the deaf participant. After this, I will report all the corrections that I made with the help of the hearing interpreter.

After that, some final comments will conclude this analysis.

Following is the transcription (in Chinese).

爲了讓今天的這個,提醒各位貴賓,為了讓今天這整個的流程更順暢,等一下

I would like to remind everyone, in order to let today’s debate flow smoothly

配合/今天/ 說(講)//今天/講座/會/講/順利/等待//

COOPERATE/TODAY/SPEAK/TODAY/LECTURE/CAN/SMOOTH/WAIT//

每個參選人發言之後大家就不要鼓掌了,那麼讓我們的流程很順利地進行,

After each candidate’s speech, please do not applaud, so let our process go smoothly

每個/參選人/講/完後/來/自己/拍手/不//時間/可以/即時//

EVERY/CANDIDATE/SPEAK/AFTER/SELF/APPLAUD/NO/TIME/CAN/SIMULTANEOUS//


謝謝大家。

Thank you.

謝謝/順利//

THANK YOU/SMOOTH//

不好意思

Excuse me.

我想/

I think

李先生, 剛剛開始的時候,我們讓大家鼓掌, 沒有問題吼,但是之後因爲媒

Mr. Lee, at the very beginning applauding is not a problem, but afterwards, to enable the media

李/先生//剛才(當初)/拍手/沒問題//是++/接下來/記者員

LEE/MISTER/JUST/APPLAUD/NO PROBLEM/YES++/FOLLOWING/JOURNALISTS//

發言的時候,我們爭取一下時間。。。

to ask their questions without being interrupted

訪問/把握時間//

ASK/CONTROL TIME//

好,現在開始進入辯論的第二階段,我們現在開始由媒體提問。這個階段總共

Ok, let us enter the second phase of the debate, we open the floor to the media for any question. In total there will be ten questions.

今天/第二段/期間/開始/擔任/記者員/十/問題//

TODAY/SECOND PHASE/TIME/BEGIN/JOURNALIST/TEN/QUESTIONS//

十個問題,請五位媒體代表輪流提問。參選人依照事前抽籤的結果依序回答,

The five representatives of the media may take turns in asking questions. The candidates will answer following the order previously given to them.

五個/代表/排序/提問/參選人//剛才/抽/完/一定/要/跟/回答/排序

FIVE/REPRESENTATIVE/ORDER/ASK/CANDIDATE/JUST/DRAW/FINISH/MUST/ANSWER/ORDER//

每位媒體發問的時間是四十秒,每位參選人回答的時間是一分三十秒。

The time allotted for each question is forty seconds and for the answer one minute and thirty seconds.

一個/代表/回答/四十分/參選人/回答/一小時半//

ONE/REPRESENTATIVE/ANSWER/FORTY/CANDIDATE/ANSWER/ONE HOUR HALF//

首先我們請中央通訊社的總編輯呂志翔先生提出他的第一個問題,

First of all, we invite the editor-in-chief of the Central News Agency, Mr. Lü, to ask his first question.

當初/第一個/紅/或/綠/公佈/關於/社會/我/發生//李/先生/第一個/問題

謝謝。

Thank you.

謝謝//

THANK YOU//

馬先生,您一直在宣揚在外交上的成就,但在參與政府間與國際組織方面,

Mr. Ma, you have been emphasizing your achievements in external affairs, but as for the international organizations

馬/先生/自己/傳宣/外交/自己/很好/再/政府/參加/國際組織//

MA/SIR/SELF/DECLARE/DIPLOMACY/SELF/GOOD/AGAIN/GOVERNMENT/PARTICIPATE/INTERNATIONAL ORGANIZATIONS//

除了世界衛生大會以外,到目前為止還沒有重大的突破,這是否意味了活絡外交

apart from the World Health Assembly, there has been no other major breakthrough; does this mean that active diplomacy

世界/衛生/大會//到/現在/外面/是/新/拼命/沒有(零)是否/自己/有//

WORLD/HEALTH/ASSEMBLY/TO/NOW/OUTSIDE/BE/NEW/HAZARD/ZERO/SELF/HAVE//

與急卻性要特別仰賴中國大陸的善意?對此是否有新的思考及作法?

will rely on the goodwill of Mainland China? Do you have any new thinking or approach you would like to share with us?

(下巴?34)/ 挫折/是/否/靠/大陸/幫助/對++/有/問題/新/創意/計劃/

XXX/DIFFICULTY/BE/NOT/RELY/MAINLAND CHINA/HELP/CORRECT++/HAVE/PROBLEM/NEW/ORIGINAL/PLAN/

蔡女士,民進黨執政八年中在外交領域上面,大家印象最深刻,最深刻可能是

Ms. Tsai, in the eight years that the DPP was in government, everybody’s most profound impression in the field of diplomacy

第二位/蔡/女士/民進黨/出席/八年//了解/外面/外交/印象/很/深刻//

SECOND/TSAI/MS/DPP/GOVERN/EIGHT YEARS//UNDERSTAND/OUTSIDE/DIPLOMACY/IMPRESSION/VERY/PROFOUND//

外交甚至不惜與美國翻臉,妳要如何建立國際對民進黨處理外交事務上的向心

is that in diplomacy it did not hesitate even to fall out with the United States; how do you plan to build international support for the way the DPP handles foreign affairs?

發生/在/美國//恨/自己/可以/國際/社會/對/我國際/問題/相信/增加

HAPPEN/IN/AMERICA/HATE/SELF/CAN/INTERNATIONAL/SOCIETY/TOWARDS/INTERNATIONAL/PROBLEM/BELIEVE/INCREASE//

34 The Deaf signer does not recognize this sign.

最後請問宋先生,藍綠在外交政治上經常是對立的,阻礙台灣在推展外交的關

Finally, I’d like to ask Mr. Sung: the Blue and Green camps are often opposed in foreign affairs, hindering Taiwan’s diplomatic outreach;

第三位/宋/問/綠/普通/外面/一群/恨/陷害/挫折/可以/自己/想一致//

THIRD/SUNG/ASK/GREEN/COMMON/OUTSIDE/A GROUP/HATE/FRAME/DIFFICULTY/CAN/SELF/THINK/CONSISTENT//

係,你又如何建立一個以共識為基礎,超越黨派的外交政策?謝謝

how do you plan to build a consensus-based foreign policy that transcends party lines? Thank you

作基礎/黨/取消/可以嗎//

FOUNDATIONS/PARTY/CANCEL/CAN?//

Once again, I would like to emphasize that the gloss of the interpreted version is not always correct; however, I decided to report it as the Deaf signer perceived it, with no further editing, in order to analyze the issue of quality from the deaf user’s end, because it is always interesting to know how Deaf people evaluate the quality of an interpretation. Their viewpoints may differ from what we would logically think or assume, as demonstrated by the next section.

Following is a transcription of the interview that I carried out with the first (Deaf) participant:

Q: What is the main difficulty that you experienced while watching this video?

A: The interpreter signs too fast; it is hard to follow him at times, even though I am a native signer.


Q: As a native signer, what is your opinion of the TSL used by the interpreter in the video in terms of naturalness, lexicon and grammar?

A: As a deaf person, while I was watching the video, I thought to myself that he signs too fast. But then again, he can do nothing about it, because he is a professional interpreter constrained by time limits, since it is a simultaneous interpretation. In sign language, we often omit many parts to save time, but in a setting like this the interpreter cannot omit anything; he has to be faithful to the original, so he ends up signing too fast even for us native signers.

Q: Visual expressions are a fundamental grammatical component in any sign language, and also in TSL, but apparently in the video that we just saw the interpreter did not have any visual expression at all. Do you have any opinion or remark about that?

A: Probably it is because you are used to the signed languages of other countries. In TSL, it seems to me that visual expressions can also be found, but not necessarily; some people might have them while others might not. Maybe it is a cultural difference.

Following is the transcription of the more articulated reply provided by the hearing interpreter. All GLOSSES are capitalized.

I think the word order he uses is not strictly the same as a native signer would use; that is most probably due to the fact that he was influenced by the speech, which is unavoidable, to a certain extent, when doing simultaneous interpreting (both in oral and in signed languages). Also, most Deaf people, at least in Taiwan, do not have the command of TSL required to master a conversation or monologue on more abstract topics. More abstruse topics are often omitted, as a linguistic strategy. As for the TSL interpreted transcript, the first mistake I notice is when the speaker said bu hao yisi 不好意思 (excuse me), and the gloss for the TSL interpreted version is wo xiang 我想 (I think): in the video the interpreter actually scratches his head, which he would not do if he wanted to sign the verb THINK. So, it is more a way of expressing the EXCUSE ME equivalent. Also, when the speaker says ganggang kaishi de shihou 剛剛開始的時候 (at the beginning), the deaf participant glossed it as dangchu 當初; I think it is a very wise choice, though you could also have glossed it as di yi ci 第一次. Where you wrote jintian 今天 (today), it is actually wrong, because in the video the sign is NOW and not TODAY; TODAY is actually a compound in TSL, made up of NOW plus DAY. Finally, when they are speaking about how much time they have to reply, you wrote one hour and a half where it is actually one minute. The only difference between these two signs is that HOUR is signed with a complete circle around an imaginary watch on your wrist, whereas MINUTE is only henghua guoqu 橫畫過去 (just one small horizontal line). So, once again, the problem is that for native signers either the image is too small or the interpreter signs too fast. If I watch it myself after turning off the volume, I do not understand what he is signing, but if I compare it with the speech, then I can understand better.

So, it is not unusual for a deaf person to confuse MINUTE with HOUR; you have to remember that they have no aural connection to the speech. You might say that, by logic, one could assume that in a televised debate you do not have one hour to reply, but you might be surprised to find out that their logic is often different from ours.

There is also one last mistake: 發生在美國 (HAPPEN IN AMERICA)/恨 (HATE)/自己 (SELF) is not what the interpreter actually signs. He signs 討論 (DISCUSS)/美國 (AMERICA)/敵對 (ENEMY).

The interpreter proceeded by telling me that, in the past few years, he has interviewed many native signers, and some of them have told him that they do not actually understand TV news sign language interpreting very well, because some of the interpreters do not use natural sign grammar. They tend to use the same syntax and grammatical structure as the oral language they are translating from.

A major criticism of this interpreted version is that sign language is, as a communicative principle, direct, concise and simple. Also, throughout the whole speech the interpreter has no expression whatsoever on his face. However, according to the interpreter, most of the time these interpreters cannot use facial expressions because their clients, in this case the TV station, do not want them to. They even have written rules about it, because they think the interpreter has to resemble the host or, in the case of TV news, the anchor as much as possible. So, the fact that he shows no expression cannot be blamed on him. Finally, the only problem he could pinpoint in the interpretation is a tendency to translate too literally rather than following a more natural sign language word order; other than that, the interpreter seems to be doing fine.

From this brief report, we can see the importance of certain evaluation parameters which appear both in the comments of the deaf participant and in the more detailed analysis of the hearing interpreter.

First of all, the significance of facial expressions cannot be denied, even though the Deaf participant I interviewed downplayed it, attributing the interpreter’s lack of visual expression to a possible cultural difference rather than to a shortcoming (see the interview reported above).

5.2.2 Quantitative pilot study

The quantitative study herein presented was originally intended as a replication of Gile’s ‘tightrope hypothesis’, which is crucial in explaining the high frequency of errors and omissions that can be observed in interpreting even when no particular technical or other difficulties can be identified in the source speech (Gile 1989). The results did not conclusively prove the original hypothesis, as will be explained later, but they irrefutably demonstrated the intrinsic difficulty of sign language interpreting.

5.2.2.1 Participants

Ten interpreters participated in the experiment: five sign language interpreters and five oral interpreters, whose ages ranged from twenty-five to fifty. All of the participants were licensed interpreters with at least three years of experience. The samples of interpreters and Deaf people were in line with the principle according to which, in qualitative research, smaller but focused samples are more often needed than large ones (Denzin and Lincoln 2005). At the same time, quantitative methods were used to seek empirical support for the research hypotheses.

5.2.2.2 Materials

The participants had to interpret a short excerpt from a question and answer session taken from a conference on telecommunications. I chose the speech fragment myself, and Interpreter C confirmed its feasibility: it contains no technical lexemes and deals with everyday objects (such as cell phones and electronic reading devices). In other words, it is an extract of a general nature and requires no previous knowledge of the subject.

The only expected difficulty was how to transfer some fairly new concepts like “portable music” or “electronic reading”, for which there is no established sign in TSL.

5.2.2.3 Tasks

The interpreters were asked to listen to the first question and answer, but to interpret only the answer. In this way, they had enough time to become familiar with the topic and get ready for the interpretation task. The video was taken from a link on YouTube.

The five sign language interpreters were videotaped with a digital camera (in one case the interpreter was taped with an iPhone because the camcorder was not available). The participants had to interpret twice consecutively, after which the video was uploaded to my computer and the interpreted version transcribed together with the interpreters themselves. The same procedure was repeated with the oral interpreters.

In this sample of ten professionals interpreting the same source speech in the simultaneous mode (in two different modalities), errors and omissions (e/o’s) were found to affect different source-speech segments, just as in the study carried out by Gile (1989). In the second, repeat performance, there were some new e/o’s absent from the first version. The extract used in this experiment and the transcriptions of all interpreted versions can be found in Appendix II.

The interpreted versions, both signed and oral, were transcribed and scanned for errors and omissions. As Gile (1989) himself duly pointed out, this method is not without pitfalls, notably because of high inter-rater variability in the perception of what is and what is not an error or omission. Therefore, only flagrant errors or omissions were included in the analysis, and further opinions from other certified interpreters were requested to confirm that the e/o’s identified herein were considered e/o’s by them as well.

5.2.2.4 Results

List of e/o’s:

(1) 電子閲讀 (diànzǐ yuèdú, electronic reading)

Subject A: “iPad 讀”. Type of e/o: error.

Subject B: “閲讀”. Type of e/o: omission.

Subject C: “電子書”. Type of e/o: error.

Subject E: “閲讀”. Type of e/o: omission. Corrected in the subject’s second version.

(2) 手機支付 (shǒujī zhīfù, mobile payment)

Subject A: “手機”. Type of e/o: omission.

Subject B: “手機”. Type of e/o: omission.

Subject E: “手機”. Type of e/o: omission.

Subject H: Type of e/o: omission. Corrected in the subject’s second version.

Subject I: “online payment”. Type of e/o: error. Corrected in the subject’s second version.


(3) 固定資產 (gǔdìng zīchǎn, fixed assets)

Subject B: “投資”. Type of e/o: omission.

Subject E: “股票”. Type of e/o: omission. Corrected in the subject’s second version.

Subject I: Type of e/o: omission. Corrected in the subject’s second version.

(4) 移動音樂 (yídòng yīnyuè, portable music)

Subject A: type of e/o: omission.

Subject B: “音樂”. Type of e/o: omission.

Subject C: “音樂”. Type of e/o: omission.

Subject D: “音樂”. Type of e/o: omission.

Subject E: type of e/o: omission.

Subject G: “music”. Type of e/o: omission. Corrected in the subject’s second version.

(5) 企業的一種行爲 (qìyè de yīzhǒng xíngwéi, business-like behavior)

Subject B: “消費各式各樣行爲”. Type of e/o: error. Corrected in the subject’s second version.

New e/o’s in the second version

The following is a list of e/o’s found in the second version of the target speech

(both signed and oral) whereas the relevant speech segments had been correctly interpreted in the first version.

(6) 電子閲讀 (diànzǐ yuèdú, electronic reading)

Subject I: omission. Correctly interpreted in the subject’s first version.

(7) 手機支付 (shǒujī zhīfù, mobile payment)

Subject L: omission. Correctly interpreted in the subject’s first version.

(8) 固定資產 (gǔdìng zīchǎn, fixed assets)

Subject L: omission. Correctly interpreted in the subject’s first version.

(9) 移動音樂 (yídòng yīnyuè, portable music)

Subject L: omission. Correctly interpreted in the subject’s first version.

(10) 消費者的使用 (xiāofèizhe de shǐyòng, consumers’ uses)

Subject I: “consumers’ needs”. Type of e/o: error.

(11) 企業的一種行爲 (qìyè de yīzhǒng xíngwéi, business-like behavior)

Subject I: “risky attitude”. Type of e/o: error.

Subject H: “it is a business.” Type of e/o: error.

Quantitative analysis

Subject              A    B    C    D    E    F    G    H    I    L    e/o’s in 1st rendition
電子閲讀             1-1  1-1  1-1  0-0  1-0  0-0  0-0  0-0  0-1  0-0  4
手機支付             1-1  1-1  0-0  0-0  1-1  0-0  0-0  1-0  1-0  0-1  5
固定資產             0-0  1-1  0-0  0-0  1-0  0-0  0-0  0-0  1-0  0-1  3
移動音樂             1-1  1-1  1-1  1-1  1-1  0-0  1-0  0-0  0-0  0-1  6
企業的一種行爲       0-0  1-0  0-0  0-0  0-0  0-0  0-0  0-1  0-1  0-0  1
消費者的使用         0-0  0-0  0-0  0-0  0-0  0-0  0-0  0-0  0-1  0-0  0
(2nd version only)
Total e/o’s in 1st
and 2nd renditions   3-3  5-4  2-2  1-1  4-2  0-0  1-0  1-1  2-3  0-3
“New” e/o’s (2nd
rendition only)      0    0    0    0    0    0    0    1    3    3

Table 3: Errors and omissions in the first and second renditions.
(Each cell reports the first–second rendition codes; 0: correct, 1: error or omission. Participants F, G, H, I and L are oral interpreters; the others are sign language interpreters.)
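Since Table 3 packs several marginal counts into one grid, the tallies can be cross-checked mechanically. The following short script is an illustrative sketch only (the variable names are mine and the data are simply transcribed from Table 3); it re-derives the per-segment e/o counts of the first rendition, the per-subject totals, and the “new” e/o’s of the second rendition:

```python
# Re-tally the error/omission (e/o) counts of Table 3 from the raw
# first/second-rendition codes (0 = correct, 1 = error or omission).

subjects = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "L"]

# Per source-speech segment: (first, second) rendition codes per subject.
table = {
    "電子閲讀":       [(1,1),(1,1),(1,1),(0,0),(1,0),(0,0),(0,0),(0,0),(0,1),(0,0)],
    "手機支付":       [(1,1),(1,1),(0,0),(0,0),(1,1),(0,0),(0,0),(1,0),(1,0),(0,1)],
    "固定資產":       [(0,0),(1,1),(0,0),(0,0),(1,0),(0,0),(0,0),(0,0),(1,0),(0,1)],
    "移動音樂":       [(1,1),(1,1),(1,1),(1,1),(1,1),(0,0),(1,0),(0,0),(0,0),(0,1)],
    "企業的一種行爲": [(0,0),(1,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,1),(0,1),(0,0)],
    "消費者的使用":   [(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,1),(0,0)],
}

# e/o's per segment in the first rendition (right-most column of Table 3)
first_rendition_eos = {seg: sum(f for f, s in codes) for seg, codes in table.items()}

# Per-subject totals over the first and second renditions ("3-3", "5-4", ...)
totals = {}
for i, subj in enumerate(subjects):
    first = sum(table[seg][i][0] for seg in table)
    second = sum(table[seg][i][1] for seg in table)
    totals[subj] = (first, second)

# "New" e/o's: wrong in the second rendition but correct in the first
new_eos = {
    subj: sum(1 for seg in table
              if table[seg][i][0] == 0 and table[seg][i][1] == 1)
    for i, subj in enumerate(subjects)
}

print(first_rendition_eos)
print(totals)
print(new_eos)
```

Running it reproduces the marginals reported in Table 3: 4, 5, 3, 6, 1 and 0 e/o’s per segment in the first rendition, and 1, 3 and 3 new e/o’s for subjects H, I and L respectively.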

Table 3 summarizes the quantitative aspects of the analysis. It is interesting to see the different types of errors and omissions that are made depending on the interpreting modality. However, as the interpreters themselves confirmed, the errors and omissions were due not to an intrinsic difficulty of the text but to the difficulty of rendering simple expressions in a different modality under time constraints, which strengthens the tightrope hypothesis, as will be further explained in the next section. Furthermore, no major difference was found in terms of the efforts directly related to the modality of interpreting.

In other words, according to our experiment, it does not seem that oral interpreters exert more effort while interpreting, or vice versa; the concomitant efforts act differently on different interpreters. As a matter of fact, the results show that signed language interpreters exert even more effort in rendering certain expressions in a different target culture and language, as reflected in their higher number of errors. This is not due to their being less professional (as previously mentioned, all participants are certified interpreters); it is simply due to time restrictions and other concomitant factors, as illustrated by the tightrope hypothesis and further clarified in the next section.

5.2.2.5 Discussion


The aim of this study was to establish, in a sample of certified interpreters interpreting a speech, the presence of errors and omissions affecting segments that bear no intrinsic difficulty (as confirmed by Interpreter C, the participant with the most years of experience). If such e/o’s are present, they can be explained in terms of processing capacity deficits, as predicted by Gile’s Effort Models.

The “tightrope hypothesis” is crucial in explaining the high frequency of errors and omissions that can be observed in interpreting even when no particular technical or other difficulties can be identified in the source speech (Gile 1989).

The findings of the present study strengthen the Effort Models’ “tightrope hypothesis” that many e/o’s are due not to the intrinsic difficulty of the corresponding source-speech segments but to the interpreters working close to processing capacity saturation, which, in Gile’s (1989) words, “makes them vulnerable to even small variations in the available processing capacity for each interpreting component”.

Another interesting aspect that emerged from this study is the greater difficulty of rendering certain expressions in sign language in the simultaneous mode because of the intrinsic need for explanation. For example, according to one of my sources, the expression dianzi yuedu (電子閲讀, electronic reading), which has an immediate translation equivalent in English, has to be rendered in signed language by way of a paraphrase, i.e. zhihui xing/shouji/shualai/shuaqu/de yangzi/+ jian 智慧型/手機/刷來/刷去/的樣子/+見 (閲讀的動作) (smartphone/cell/brush back and forth/+ see (the action of reading)). Given the time restrictions imposed by the simultaneous mode, it appears that certain purportedly simple or everyday concepts are not that immediate in signed language, which is why expressions like electronic reading or portable music show a higher error rate in the signed versions than in the oral ones. Other difficulties that emerged from my discussion with the participants concerned certain lexemes like xiaofei (消費, to consume), which has to be rendered with different signs in different contexts: sometimes it is BUY, at other times USE+++.

Moreover, the concept of portable music is hard to convey because music is an abstract concept; to make sure that Deaf people understand what the speaker is talking about, the signer should add the sign DOWNLOAD to make it a concrete object. Another interesting example was mobile payment. According to Interpreter D, if that concept is signed CELL + PAY, some older Deaf people might interpret it as purchasing a cellphone. Once again, the concept should be explained in signed language by way of a periphrasis, for example: shouji/shuru/chuanda/yinhang/daiti/fuqian (手機/輸入/傳達/銀行/代替/付錢//).

We must note, though, that, as we can see from Table 3, most of the new errors in the second version were committed by the oral interpreters. This was an interesting phenomenon which deserved to be explored further, so I interviewed the oral interpreters and asked them why they had made more errors in the second version; otherwise, this could also have been interpreted as a greater cognitive effort on the part of oral interpreters. However, the participants who made more mistakes in the second version unanimously told me that this happened because, after hearing the speech once, they thought they were ready to embellish or enhance it with better expressions, and this process took time away from the normal flow of interpreting. In other words, they themselves confirmed that this was not due to an intrinsic difficulty of the text or to biased materials, let alone to a greater effort compared with that of their signing colleagues.

5.3 Concluding remarks


In spite of what has been said so far, we must point out that some differences do exist between the networks supporting signed and spoken languages. Some reflect differences in the early stages of sensory processing (MacSweeney et al. 2002; Sakai 2005), whereas others are likely to reflect higher-level language differences made possible by the modality of communication.

Part of this chapter was dedicated to a literature review whose purpose was to highlight the great overlap in the neural organization underlying the processing of signed and spoken languages, since our aim was to focus on the neurobiological studies which have irrefutably shown that sign languages are natural languages in every respect and not artificial constructs.

So far we have focused on several aspects of TSL interpreting, such as the history of TSL interpreting and its challenging areas. In the next chapter we will focus on issues of assessment and evaluation in TSL interpreting. All the peculiarities analyzed thus far have to be taken into consideration in the evaluation process.

As a consequence, this should be reflected in the profession’s best practices: the rules and regulations which govern the interpreting profession and guide its policies.

The initial part of this chapter aimed at providing the reader with a short excursus on some of the main studies concerning neurolinguistics research in SI.

The PET study I briefly reviewed identified brain correlates of SI, namely the fact that the left dorsolateral frontal cortex is implicated in lexical search, verbal working memory and semantic analysis tasks. The ERP studies, on the other hand, investigated the time course of semantic processing, the mechanisms of switching control in interpreters, and training-induced plasticity in language processing.

In future studies, some of the possible research questions that scholars might want to focus their attention on are:

(a) Why is scientific research important in the field of SI and SI pedagogy?

(b) What are the main challenges, in terms of equipment, that scholars face in neurolinguistics research when it comes to SI and especially to SI production analysis?

(c) How could researchers overcome these technical difficulties?

(d) Finally, what might be the differences between novice interpreters and expert interpreters in terms of their brain functions and cognitive structures?

(e) And how to apply these results to sign language interpreting?

As previously mentioned, future longitudinal studies are warranted, focusing on the development of expertise in interpreting and thus shedding further light on the brain plasticity of interpreters.

In the experimental part, a behavioral study was carried out to test the tightrope hypothesis. If all subjects in the sample failed to reproduce adequately the same ideas or pieces of information, this would suggest the existence of an intrinsic ‘interpreting difficulty’ of the relevant segments (too specialized, poorly pronounced, delivered too rapidly, too difficult to render in the target language, etc.). Another indication could come from an exercise in which each subject is asked to interpret the same speech twice in a row. Having become familiar with the source speech during their first interpretation, subjects can be expected to correct in their second version many errors and omissions (e/o’s) committed in their first version. If, notwithstanding this general improvement from the first to the second target-language version, new e/o’s were found in the second version in speech segments that had been interpreted correctly the first time, this would be an even stronger indication that processing capacity deficits are involved.
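The comparison logic of this twice-in-a-row exercise can be sketched as follows; the segment labels and e/o sets in the snippet are invented purely for illustration and are not taken from the actual experiment.

```python
# Hypothetical sketch of the twice-in-a-row error/omission (e/o) analysis.
# Segment IDs and e/o sets are invented for illustration only.

def analyze_renditions(first_eos, second_eos):
    """Compare e/o's across two renditions of the same source speech."""
    return {
        "corrected": first_eos - second_eos,   # improved on the second attempt
        "new": second_eos - first_eos,         # correct first time, wrong second time
        "persistent": first_eos & second_eos,  # wrong both times: intrinsic difficulty?
    }

first = {"s1", "s2", "s4"}   # segments with e/o's in the first rendition
second = {"s2", "s3"}        # segments with e/o's in the second rendition

result = analyze_renditions(first, second)
print(result["new"])         # prints {'s3'}: a candidate processing-capacity deficit
```

Segments wrong in both renditions would instead point to an intrinsic interpreting difficulty (too specialized, delivered too rapidly, etc.), mirroring the distinction drawn above.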

The findings of this study have strengthened the case for the tightrope hypothesis and thus lend some support to the Effort Models as a conceptual tool to explain not only oral interpreters’ cognitive-constraints-based limitations but also those of TSL interpreters; in Gile’s words, they may give some credibility to the idea that the usefulness of a concept or model in scientific exploration is not necessarily a function of its degree of sophistication. As previously mentioned, according to the results, hearing interpreters seem to make more errors in the new version than signing interpreters. I decided to examine this phenomenon further by orally interviewing the hearing interpreters, who almost unanimously agreed that the new errors were due to their trying to improve the second rendition and to their being more demanding of themselves, and had nothing to do with the intrinsic difficulty of the text.

In this chapter, I have also reviewed a recent study by Emmorey and McCullough (2009) focused on the bimodal bilingual brain. Bimodal bilinguals are hearing individuals who know both a signed and a spoken language. The fMRI results from this study reveal separate effects of sign language and spoken language experience on activation patterns within the superior temporal sulcus. In addition, the strong left-lateralized activation for facial expression recognition previously observed for deaf signers was not observed for hearing signers. Therefore, the authors conclude that both sign language experience and deafness can affect the neural organization for recognizing facial expressions, and argue that bimodal bilinguals provide a unique window into the neurocognitive changes that occur with the acquisition of two languages.

The chapter on challenging areas of TSL interpreting aimed at showing that the efforts underlying sign language interpreting are at the basis of the need for turn-taking on stage while interpreting at a sign language event. This relates, on a more general basis, to the different aspects of TSL interpreting analyzed in the present dissertation, which together show that the signed interpreting task involves neurobiological effort as much as the oral one.

Moreover, the results of the experiment on interpreters’ errors and omissions are interesting both for interpreters, who can reflect on their task, and for outsiders, as they underline the fact that sign language interpreting is not as easy as most people believe. The hardships and difficulties that interpreters face are mainly due to the modality of their task. The best course of action for interpreters is therefore twofold: acquire a linguistic background that enables them to coin new signs in new fields according to the word-formation rules that Deaf people are accustomed to, and always request the speaker(s) to provide the content of their speech in advance, so as to deliver a professionally impeccable service.

In the next chapter, I will propose how TSL interpreting should be assessed and evaluated, based on the interpreting challenges, the experiments and the other reflections presented thus far.

CHAPTER SIX Assessment and Evaluation in TSL Interpreting

6.1 Introduction

In the last chapter, we reviewed the neurobiological studies which prove that sign languages are indeed languages and that the nature of a language cannot be defined on the basis of its modality.

The focus, therefore, was on the great overlap in the neural organization underlying signed and spoken language processing, and on the need for caution with linguistic definitions: sign languages are not a visual rendition or a spatial representation of an oral language. They have complex grammars and syntactic rules of their own and can be used to discuss any topic, being as rich and complex as any oral language, despite the common misconception that they are not “real languages”.

Professional linguists have studied many different sign languages and found that they exhibit the fundamental properties that exist in all oral languages (Klima and Bellugi 1989; Sandler and Lillo-Martin 2006).

This means that signed and spoken languages share similar linguistic rules. The visual-spatial mode enables signed languages to make use of spatial locations, the motion of the hands, and the configuration of the hands (handshapes) to encode linguistic information, demonstrating the language’s phonology (Stokoe 1960).

The mouth articulator in spoken languages and the hands in signed languages both activate Broca’s area; thus there does not seem to be any obvious difference between the two modally-different languages in this area.

As previously mentioned, the neurological processing of sign language differs from the processing of action/gesture, which proves that sign language is not merely made up of physical gestures and pantomime.

As we read in Corina et al. (1992) and Marshall et al. (2004), deaf aphasics with left hemisphere lesions had difficulty comprehending and producing signed languages but not pantomime.

In an fMRI study carried out by MacSweeney et al. (2004), participants were shown a natural sign language and a conventionalized gesture system which shared similar manual movements but did not form a linguistic system. For signers, the signed language showed a left-hemisphere lateralized pattern, activating the left posterior perisylvian cortex and the left posterior superior temporal gyrus. These studies show a dissociation between sign languages and gestures, suggesting that signed languages share some of the properties of natural languages.

In conclusion, although modality does affect language processing in some respects, the language system of signed languages displays many of the same characteristics as spoken languages.

Given the current lack of appropriate or commonly shared assessment and evaluation tools for Taiwan Sign Language (TSL) interpreting, this chapter explores the possibility of establishing common parameters which can be used in three different situations: in the classroom, where students are learning TSL interpreting; in real situational contexts, as a tool to evaluate professional interpreting; and in pedagogical settings, since sign language interpreting is mainly used as an educational support system for deaf children in schools and universities.

In other words, we are trying to explore the theoretical possibility of establishing a model delineating the tools that could evaluate TSL adult interpreters in real situational contexts, assess the proficiency of TSL interpreting students’ performances and assess the proficiency of educational interpreters, by developing tools suitable for TSL based on the American Educational Interpreter Performance Assessment (EIPA), on Taiwan’s current assessment parameters and on the precious help of a number of Deaf and hearing signers and sign language interpreters, without whom this study would not have been possible.

The first part of this chapter will focus on theoretical issues such as the distinction between assessment and evaluation in the literature. We will then provide some background information on the issue of quality in interpreting and finally focus on the illustration and analysis of a tentative model for the assessment of Taiwan Sign Language Interpreting (TSLI), after briefly summing up some of the main characteristics of TSL for the benefit of the reader.

A sign language is a language which transmits information via sign patterns, thus using a different channel: it simultaneously combines hand shapes, orientation and movement of the hands, arms or body, and facial expressions to fluidly express a speaker’s thoughts. The first linguist to study signed languages and give them their called-for status of “languages” was William Stokoe, according to whom, wherever communities of Deaf people exist, sign languages naturally develop, and their complex spatial grammars are markedly different from the grammars of spoken languages (Stokoe 1960; 1976). Stokoe’s work was ground-breaking because “almost everyone, hearing and deaf alike, at first regarded Stokoe’s notions as absurd or heretical; his books when they came out as worthless or nonsensical [as is] often the way with the work of genius” (Sacks 1989:63).


Nowadays, most scholars accept the fact that “Sign35 is natural to all who learn it (as a primary language) and has an intrinsic beauty and excellence sometimes superior to speech” (Sacks 1989:29). It is “seen as fully comparable to speech (in terms of its phonology, its temporal aspects, its streams and sequences), but with a unique, additional power of a spatial and cinematic sort – at once a most complex and yet transparent expression and transformation of thought” (Sacks 1989:72).

It has been proved by Bellugi and her team that Sign is a language even at the neurolinguistic level (Bellugi et al. 1991a, 1991b, 1992, 1993, 1997a, 1997b, 2001, 2010), an aspect discussed in detail in the third chapter. We have seen that the left hemisphere in signers ‘takes over’ a realm of visual-spatial perception, modifies it and sharpens it in an unprecedented way, giving it a new, highly analytical and abstract character and making a visual language and visual conception possible (Sacks 1989), which can be perceived as a proof of the plasticity of the brain.

In the last couple of decades, scholars around the world have increasingly focused their attention on signed languages, analyzing their structure, syntax and semantics, as well as some of the strategies and difficulties underlying interpreting between oral and signed languages. Research in different countries has uncovered different aspects, because every country has its own sign language, which has developed independently of that country’s spoken language. For example, Taiwan Sign Language (TSL) is structurally more similar to Japanese Sign Language (JSL) than to Chinese Sign Language (CSL). As previously mentioned, some scholars have devoted their attention to interpreting from and into signed languages.

35 Sign, with a capital letter, refers to a language with a different mode of expression. Throughout this thesis, capitalization is used to distinguish a cultural concept. For example, whenever the word Deaf is capitalized, it refers not only to deafness as a biological condition but also to all the cultural aspects that underlie it.

In Italy, for example, a reality with which I am more familiar, there have been many interesting studies researching different aspects of interpretation from and into Italian Sign Language (LIS), which in Italian is called Lingua Italiana dei Segni (Amorini, Cortazzi and Eletto 2000; Bove and Volterra 1984; Stocchero 1991; 1995; Cameracanna and Franchi 1997a, 1997b; Carli, Folchi and Zanchetti 2000; Cokely 2003; Del Vecchio and Franchi 1997; Franchi 1992; 1993; Gran and Bidoli 2000; Sala 2005; Woll and Porcari Li Destri 1998, to name just a few). In Taiwan much less has been done, especially on Taiwan Sign Language Interpreting (TSLI), which, to the author’s best knowledge, remains a largely unexplored area of research. For American Sign Language (ASL) and Italian Sign Language (LIS), however, which have been more thoroughly explored than TSL, interpreting scholars have focused mainly on interpreting in the educational sector (Cokely 2005; Davis 2005; Forestal 2005; Lee 2005; Marschark et al. 2005a, 2005b; Monikowski and Peterson 2005; Napier 2005; Quinto-Pozos 2005; Turner 2005; Winston 2005; to name just a few). Furthermore, to the author’s best knowledge, no paper in Taiwan has ever been written on the assessment and evaluation of TSL interpreting programs. Therefore, this chapter also aims at filling a gap in the literature by exploring the possibility of establishing common parameters which could be used in the classroom, where students are learning TSL interpreting; in real situational contexts, as a tool to evaluate professional interpreting; and in pedagogical settings, since sign language interpreting is still mainly used as an educational support system for deaf children in schools and universities.

In other words, this chapter aims at investigating the theoretical possibility of establishing a model delineating tools that could evaluate TSL adult interpreters in real situational contexts, assess the proficiency of TSL interpreting students’ performances and assess the proficiency of educational interpreters, by developing tools suitable for TSL based on the American Educational Interpreter Performance Assessment (EIPA), on Taiwan’s current assessment parameters and on the precious help of a number of Deaf and hearing signers and sign language interpreters, without whom this study would not have been possible.

Before moving on to the illustration of these tools, the next paragraph will focus on the importance of assessment and evaluation criteria in interpreting programs and, by reviewing the relevant literature, will also clarify the difference between related terms such as assessment and evaluation. For the benefit of the reader, some basic information on TSL will also be summed up.

6.2 Assessment and evaluation literature review

The need for adequate and commonly shared parameters for assessment and evaluation in signed language interpreting is as pressing as, if not more pressing than, in oral language interpreting. In the West, only three monographs have been written on the issue of translation and interpreting assessment, namely Reiss (1979), House (2001) and Williams (2004). Other papers have addressed different approaches to translation evaluation: corpus-based (Bowker 2001), functional (Colina 2009; Moratto 2011b), teaching-oriented (Li 2006) and plan-based (Zhong 2005), amongst others.

However, to the author’s best knowledge, nothing has been written on TSL interpreting assessment and evaluation, which is the aim of the present chapter. Before moving on to the next section, it seems opportune to give a scholarly definition of the difference between assessment and evaluation, and other related technical terms.

Assessment, testing, measurement and evaluation are all somewhat different concepts, even if at first sight they might appear to be the same notion. Assessment is traditionally defined as “appraising or estimating the level or magnitude of some attribute of a person” (Mousavi 2009:36, as cited in Brown and Abeywickrama 2010:3).

In other words, it is the ongoing judgment that the teacher or instructor makes of the student or trainee’s feedback throughout the learning process. Testing, on the other hand, is a way to judge a person’s competence, and not the person as such, at a given moment in time, thus measuring the student’s performance.

Measurement is the process of quantifying the observed performance of classroom learners (Brown and Abeywickrama 2010), while evaluation refers to the overall language or interpreting program. Therefore, in our approach, both assessment, as a way to monitor TSL interpreting trainees’ progressive development, and evaluation, as a way to monitor the results obtained at the end of the course, are taken into consideration in the elaboration of our model of Taiwan Sign Language Interpreting Assessment and Evaluation (TSLIAE) tools.

In the present chapter, we attempt to explore the theoretical possibility of establishing a model delineating the tools that could evaluate TSL adult interpreters in real situational contexts, assess the proficiency of TSL interpreting students’ performances and assess the proficiency of educational interpreters, by developing tools suitable for TSL based on the American Educational Interpreter Performance Assessment (EIPA) and on Taiwan’s current assessment parameters. We are trying to develop a holistic set of parameters which could be applied in any context for the assessment of TSL interpreting. Prior to focusing on the tentative application of such an approach to task performance assessment and evaluation, it seems appropriate to say a few words about the issue of interpreting quality as reviewed in the traditional literature, which will be done in the next paragraph.


6.3 The issue of interpreting quality

Interpreting quality may be assessed from different points of view. In the literature, we read that quality assessment should usually begin with “customer needs and end with customer perception” (Kotler and Armstrong 1994: 568). Reflecting on interpreting quality may enable professionals to provide a satisfactory service and, at the same time, enable researchers to develop increasingly efficient training methods. According to Dejean le Féal (1990: 155; as cited in Kurz 2001: 395), “what our listeners receive through their earphones should produce the same effect on them as the original speech does on the speaker’s audience. It should have the same cognitive content and be presented with equal clarity and precision with [the] same type of language.”

In the relevant interpreting studies literature, there has been a plethora of empirical studies (Andres 2000; Buehler 1986; Collados Ais 1998; Gile 1990; Kurz 1989, 1993, 1994, 1996; Kopczynsky 1994; Meak 1990; Marrone 1993; Mack and Cattaruzza 1995; Ng 1992; Vuorikoski 1993, 1998; Moser 1995, 1996; to name just a few) differing in terms of method, scope and language combinations; however, few of them (Ng 1992) have focused on the interpreting performance quality needed in training future interpreters, and none on signed language interpreting performance assessment. In other words, most of these studies have focused on either user-oriented or colleague-oriented definitions of quality, overlooking a delicate cross-cultural issue, i.e. the importance of setting different quality assessment parameters for different language combinations, for mixed classes with students from different language and cultural backgrounds, and for languages which are expressed in different modalities, like signed languages.

Seleskovitch (1986) points out that interpretation should always be judged from the perspective of the listener and never as an end in itself: “the chain of communication does not end in the booth” (Kurz 2001:395). Hence, when we talk about interpretation quality inside the classroom, we are talking about the perspective of the teacher/trainer-listener, who is well aware of the problems found in cross-culturally mixed classes in which some of the participants interpret from a foreign language into a foreign language or do not even share the others’ working languages. The teacher-evaluator should take these aspects into consideration when assessing, for instance, the performance of non-native-speaker students or non-native signers.

Within the Skopostheorie theoretical framework (Moratto 2011), the trainer should simply make sure that the message is understandable and adequate to the skopos. Voice quality should be an element of focus without being the main assessment parameter, especially in oral-to-signed language interpreting, where voice is a null parameter. Some empirical studies have shown that pleasant voices are perceived as an important or very important factor (Chiaro and Nocella 2004); however, trainees should not be discouraged if they happen to have a not particularly pleasant voice, because interpretation efficiency is based on other factors as well. Fluency is an important element which trainers should work on, along with sense consistency, logical cohesion, completeness and text accuracy. Terminology is important, especially in the more advanced stages of training. Grammar is also important, but it is something trainees should work on in a language course or separately. Interpretation trainers ought not to waste valuable exercise time explaining grammar rules which should be taken for granted at the language level required for starting an interpreting career.

According to Nord36, in translator and interpreter training “deviations from target language norms […] are very often […] not errors but caused by insufficient proficiency in the TL (even in the mother tongue). These mistakes should be marked separately because they require special language training.” In other words, trainees must follow other linguistic and language-related enhancement courses, which are implemented in the curriculum to fill a linguistic gap that some students might still have.

36 Personal communication with Christiane Nord in Taipei on November 8th, 2010.

Cross-cultural grading policies, along with the evaluation of interpreting quality, have inevitably changed with the increasing number of deaf students (as is the case in Europe) enrolling in interpreting curricula, whose special needs have heightened the intra- and inter-cultural reflections and research interests of trainers and trainees alike in the field of signed languages. Traditionally (Buehler 1986; Kurz 1989; 1993; 1994; 1996), interpreters’ quality-assessment criteria may be summed up as follows: accent, voice, fluency, logical cohesion, sense consistency, completeness, grammar and terminology. Different users’ expectations emphasize different parameters. Empirical studies provide us with user expectation profiles, “information which will prove beneficial to both the exercise and the teaching of the profession” (Kurz 2001: 407).
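Purely by way of illustration, the traditional criteria just listed can be imagined as a weighted scoring grid whose weights shift with the user profile. All weights and scores in the sketch below are hypothetical, not empirically derived user-expectation profiles; for oral-to-signed interpreting, the voice and accent parameters are simply zeroed out, in line with the “null parameter” observation made earlier.

```python
# Hypothetical weighted grid for the traditional quality criteria.
# Weights and scores are invented for illustration only.

CRITERIA = ["accent", "voice", "fluency", "logical cohesion",
            "sense consistency", "completeness", "grammar", "terminology"]

def weighted_score(scores, weights):
    """Weighted mean over the criteria, normalized by the total weight."""
    total_weight = sum(weights[c] for c in CRITERIA)
    return sum(scores[c] * weights[c] for c in CRITERIA) / total_weight

# An invented profile for oral-to-signed interpreting: voice and accent
# are null parameters, so their weights are zero.
signed_profile = {"accent": 0, "voice": 0, "fluency": 2, "logical cohesion": 3,
                  "sense consistency": 4, "completeness": 3, "grammar": 2,
                  "terminology": 2}

scores = {c: 4 for c in CRITERIA}   # ratings on a hypothetical 0-5 scale
scores["completeness"] = 2          # a weaker performance on completeness

print(weighted_score(scores, signed_profile))   # prints 3.625
```

A different user profile (e.g. one emphasizing fluency for live media settings) would simply swap in a different weight dictionary over the same criteria.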

Unlike other researchers, who define quality as “user satisfaction” (Kurz 2001: 407), here quality is defined as “text accuracy”, in an attempt to find a commonly acceptable and viable evaluation grid to be used by trainers in cross-cultural learning environments. In a cross-cultural perspective, the most important parameters remain text accuracy and sense consistency, which are also the main evaluation parameters (along with the function and skopos of the interpretation as instructed in the “brief”) used in entrance and final exams for the assessment of students’ performances, especially those whose native language or culture of origin gives them a feeling of cultural-linguistic alienation compared to their fellow students or colleagues. The same applies to Deaf students, who at times feel alienated, if not inferior, compared to hearing students. In the next paragraph, we will try to elaborate a model which takes all these different factors into consideration.

Within the framework of the Skopostheorie, a source text (ST) may allow any translation purpose, depending on the translation brief. Hence, it is essential for trainee interpreters to be very clear on what the brief is prior to carrying out their interpretation.

However, the acceptability of translation purposes is limited by the translator’s and interpreter’s responsibility with regard to his or her partners in the co-operational activity of translation (of both written and oral texts). This is in line with the principle of loyalty as illustrated by Nord (1989), where loyalty is perceived as an interpersonal category. “Loyalty” is a key concept in Nord’s theory. It basically means that “the target-text purpose should be compatible with the original author’s intentions […], however it can be difficult to elicit the sender’s intentions in cases where we don’t have enough information about the original situation” (Nord 1997:125-126). In other words, “loyalty refers to the interpersonal relationship between the translator, the source-text sender, the target-text addressees and the initiator. Loyalty [also] limits the range of justifiable target-text functions for one particular source text and raises the need for a negotiation of the translation [or interpretation] assignment between translators [or interpreters] and their clients” (Nord 1997: 126).

“The loyalty principle takes account of the legitimate interests of the three parties involved: initiators (who want a particular type of translation), target readers (who expect a particular relationship between original and target texts) and original authors (who have a right to demand respect for their individual intentions and expect a particular kind of relationship between their text and its translation). If there is any conflict between the interests of the three partners of the translator, it is the translator [interpreter] who has to mediate and, where necessary, seek the understanding of all sides”. (Nord 1997: 128)37

Moreover, as previously mentioned, the translation purpose is defined by the translation brief which, implicitly or explicitly, describes the situation for which the target text (TT) is needed. Needless to say, this situation may be real or fictitious, as in a classroom setting. However, notwithstanding its fictitious nature, trainers should nonetheless make very clear what the skopos of the trainees’ interpretation is, so as to give them a clear-cut idea of the function, or hierarchy of functions, that the TT is expected or intended to achieve.

These are the basic principles of functional translational activity, where every task should have a brief, which should, in turn, become the only benchmark for trainee interpreters’ task performance assessment in the classroom setting. An important aspect which deserves mention before the analysis of a model specific to signed language interpreting assessment is that, in the approach adopted herein, the brief should be considered the standard for trainee interpreters’ performance assessment.

However, this does not mean that “text accuracy” and “sense consistency” are no longer valid parameters in the assessment procedure. The only difference with traditional approaches is that these two parameters should have the brief as their benchmark, and not the text per se as an independent unit. In other words, the (oral) text produced by the students should be accurate and consistent with the Übersetzungsauftrag given by the instructor before a given task. Therefore, we can state that “text accuracy” and “sense consistency” are still the most important parameters in the light of the Übersetzungsauftrag, in a functional approach.

37 Borrowing Nord’s words, “loyalty” may also be defined as “the responsibility translators [interpreters] have toward their partners in translational interaction. Loyalty commits the translator bilaterally to the source and target sides, taking account of the difference between culture-specific concepts of translation prevailing in the two cultures involved” (Nord 1997:140).
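This brief-as-benchmark idea can be sketched in a few lines. The content points and helper function below are entirely hypothetical; they serve only to contrast accuracy measured against the brief with accuracy measured against the source text as an independent unit.

```python
# Hypothetical sketch: "text accuracy" measured against the brief rather than
# against the source text as an independent unit. All content points are invented.

def brief_accuracy(rendered_points, brief_points):
    """Share of the brief-relevant content points actually rendered."""
    if not brief_points:
        return 1.0
    return len(rendered_points & brief_points) / len(brief_points)

source_points = {"greeting", "statistics", "anecdote", "conclusion"}
# The brief (Uebersetzungsauftrag) declares only part of the source relevant:
brief_points = {"statistics", "conclusion"}
rendered = {"greeting", "anecdote", "statistics"}

naive = len(rendered & source_points) / len(source_points)  # 0.75 against the full source
print(brief_accuracy(rendered, brief_points))               # prints 0.5: only "statistics" counts
```

The same rendition thus scores differently depending on the brief, which is exactly why trainees must know the Übersetzungsauftrag before interpreting.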

According to professional sign language interpreter Ginger Hsu (personal communication, 2012), most sign language interpreting organizations distribute a questionnaire amongst the Deaf people they serve38 every time they provide sign language interpreting services, in order to inquire about their degree of satisfaction.

FIG 9: Deaf users’ satisfaction questionnaire39

38 It depends on whether an individual deaf person or a group of deaf people is served; if only one deaf person is served, only one questionnaire needs to be completed.

39 The English translation is in the appendix.

In other words, in terms of the traditional literature, these organizations are trying to analyze the issue of interpreting quality from the users’ end, i.e. the Deaf community. At present, thanks to the help of the professional and socially active interpreter Ginger Hsu, who collaborates with quite a few such organizations, I am collecting these questionnaires so as to see how they approach the issue. The results will be discussed in a separate paper.

The afore-mentioned observations are derived from traditional literature on assessment, which has never focused on the development of commonly shared parameters for the assessment and evaluation of signed languages. This will be tentatively done in the next section.

6.4 Taiwan Sign Language Interpreting Assessment and Evaluation (TSLIAE), with an emphasis on the naturality issue

Before moving on to the illustration of a Taiwan Sign Language Interpreting Assessment and Evaluation (TSLIAE) model, it seems appropriate, for the benefit of the reader, to briefly sum up the main characteristics of TSL as illustrated thus far.

TSL is the language used amongst Deaf communities in Taiwan. As previously mentioned, TSL developed from Japanese Sign Language during Japanese rule, which is why TSL is considered part of the Japanese Sign Language family and has no relation to Chinese Sign Language (CSL). TSL has some mutual intelligibility with both Japanese Sign Language (JSL) and Korean Sign Language (KSL); it has about a 60% lexical similarity with JSL (Fischer et al. 2010). For detailed descriptive information on TSL, the reader can refer to the relevant literature40.

40 Ann et al. (to appear), 2000, 2007; Brentari 2010; Chan and Wang 2009; Chang 2009; Chang et al. 2005; Chang and Ke 2009; Chen and Tai 2009, 2009a, 2009b; Chiu et al. 2005; Duncan 2005;

Nowadays, TSLIAE still largely relies on intuitive judgments on the part of instructors. It therefore seems opportune to try to systematize the assessment systems within TSL interpreting training, because “[interpreting] trainers […] are faced with the task not only of enabling their trainees to acquire both the generic and the specific competences required for professional [interpreting], but also of providing their graduates with the adequate tools to ensure that they are capable of maintaining and upgrading their competences throughout their professional working lives” (Way 2008: 89). The tools that we propose for the assessment and evaluation of TSL interpreting in different contexts, namely TSL adult interpreters in real situational contexts, TSL interpreting students’ performances and educational interpreters, are based on the Educational Interpreter Performance Assessment (EIPA) and on Taiwan’s current assessment parameters. Therefore, before moving on to the explanation of our own model, the EIPA will be briefly illustrated.

6.4.1 EIPA

The Educational Interpreter Performance Assessment (EIPA) is a tool, established in 1999, designed to evaluate the voice-to-sign and sign-to-voice interpreting skills of interpreters who work in elementary and secondary school classroom settings. The EIPA evaluates the ability to expressively interpret classroom content and discourse and the ability to receptively interpret student or teen sign language. The EIPA is used to evaluate interpreters who work with students and teenagers who predominantly use American Sign Language (ASL), Manually-Coded English (MCE) and Pidgin Sign English (PSE), but, as will be shown, it can be extended to other sign languages as well.

40 (continued) Huteson 2003; Jean 2005; Lee et al. 2001; Myers et al. 2005, 2006; Myers and Tsay 2004; Myers and Tai 2005; Sasaki 2007; Shih and Ting 1999; Smith 2005; Su and Tai 2007, 2006, 2009; Tai 2005, 2006, 2007, 2008; Tai and Tsay 2009, 2010; Tai and Chen 2010; Tsai and Myers 2009; Tsay 2007, 2010; Wilbur 1987; Zhang 2007.

Manually-Coded English (MCE) differs from American Sign Language (ASL) in that it is a family of visual communication methods, expressed through the hands, which attempt to represent the English language. Unlike Deaf sign languages, which have evolved naturally in Deaf communities all around the world, the different forms of MCE were artificially created, and they generally follow the grammar of English rather than the natural grammar, with its own syntax, of signed languages. It is called manually coded precisely because it tends to be a linear, purely coded system, not to be confused with a language. I will not focus here on the major role that MCE has played, and continues to play, in education, which is still the most common setting where it is used (the reader can refer to chapters one to three of the present thesis, where it is analyzed in detail). This method is used not only with Deaf students, but also with children with other kinds of speech or language difficulties. Pidgin Sign English (PSE), on the other hand, is a mixture of the two: ASL and MCE. In the EIPA rating system, the evaluation focuses on several domains:

(a) grammatical skills: use of prosody (or intonation), grammar, and space;
(b) sign-to-voice interpreting skills: ability to understand and convey child/teen sign language;
(c) vocabulary: ability to use a wide range of vocabulary, accurate use of finger spelling and numbers;
(d) overall abilities: ability to represent a sense of the entire message, use appropriate discourse structures, and represent who is speaking.


In the EIPA system, evaluators use a Likert scale to assess specific skills. Scores for each skill range from 0 (no skills demonstrated) to 5 (advanced, native-like skills). The scores from the three evaluators are averaged for each skill area and each domain, as well as for the overall test score. An individual's EIPA score is the summary total score. For example, an interpreter could report his or her score as EIPA Secondary PSE 4.2, which represents the grade level, the language modality, and the total summary EIPA score.
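The aggregation just described can be made concrete with a short sketch. The domain and skill names are abbreviated and the ratings invented for illustration; only the 0-5 Likert range, the three evaluators, and the per-skill, per-domain, and overall averaging follow the EIPA description above.

```python
from statistics import mean

# EIPA-style aggregation sketch: three evaluators rate each skill on a
# 0-5 Likert scale; ratings are averaged per skill, per domain, and overall.
# Domain/skill names are abbreviated illustrations, not the full EIPA form.
ratings = {
    "voice_to_sign": {
        "stress_emphasis": [4, 3, 5],  # one rating per evaluator
        "register":        [3, 3, 3],
    },
    "sign_to_voice": {
        "speech_production": [5, 4, 3],
        "word_choice":       [2, 4, 3],
    },
}

def skill_scores(domain):
    """Average the three evaluators' ratings for each skill."""
    return {skill: mean(evals) for skill, evals in domain.items()}

def domain_score(domain):
    """Average of the averaged skill scores within one domain."""
    return mean(skill_scores(domain).values())

def overall_score(ratings):
    """Summary EIPA score: mean of all averaged skill scores, one decimal."""
    all_skill_means = [mean(evals)
                       for domain in ratings.values()
                       for evals in domain.values()]
    return round(mean(all_skill_means), 1)

# The reported label concatenates grade level, language modality,
# and this summary score:
print(f"EIPA Secondary PSE {overall_score(ratings)}")
```

The reported label in the example above mirrors the "EIPA Secondary PSE 4.2" format: grade level, modality, then the summary score.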

On the basis of this system, we can elaborate a preliminary evaluation sheet for TSL interpreting assessment, based on the EIPA rating form41, which is reported in Table 4.

Table 4 EIPA rating form

I. Interpreter Product - Voice-to-Sign

Prosodic Information:

A. Stress/emphasis for important words or phrases 0 1 2 3 4 5

B. Affect/emotions (interpreter appropriately uses face and body) 0 1 2 3 4 5

C. Register 0 1 2 3 4 5

D. Sentence boundaries (not run-on sentences) 0 1 2 3 4 5

Non-manual information:

E. Sentence types/clausal boundaries indicated 0 1 2 3 4 5

F. Production and use of non-manual adverbial/adj. markers 0 1 2 3 4 5

Use of signing space:

G. Use of verb directionality/pronominal system 0 1 2 3 4 5

41 The rating form can be found at http://www.classroominterpreting.org/eipa/performance/EIPARatingForm.pdf

H. Comparison/contrast, sequence and cause/effect 0 1 2 3 4 5

I. Location/relationship using ASL classifier system 0 1 2 3 4 5

Interpreter performance:

J. Follows grammar of ASL or PSE (if appropriate) 0 1 2 3 4 5

K. Use of English syntactic markers (if appropriate). 0 1 2 3 4 5

L. Clearly mouths speaker’s English (if appropriate) 0 1 2 3 4 5

II. Interpreter Product - Sign-to-Voice (i.e., fluency/pacing, clarity of speech, volume of speech)

Can read and convey signer’s:

A. Signs 0 1 2 3 4 5

B. Finger spelling42 and numbers 0 1 2 3 4 5

C. Register 0 1 2 3 4 5

D. Non-manual behaviours and ASL morphology and/or syntax 0 1 2 3 4 5

Vocal/Intonational features:

E. Speech production (rate, rhythm, fluency, volume) 0 1 2 3 4 5

F. Sentence/clausal boundaries indicated (not run-on speech) 0 1 2 3 4 5

G. Sentence types 0 1 2 3 4 5

H. Emphasize important words, phrases, affect/emotions 0 1 2 3 4 5

Word choice:

I. Correct English word selection 0 1 2 3 4 5

Interpreter performance:

J. Adds no extraneous words/sounds to message 0 1 2 3 4 5

III. Vocabulary

Signs:

42 Chinese character depiction in TSL.

A. Amount of sign vocabulary 0 1 2 3 4 5

B. Signs made correctly 0 1 2 3 4 5

C. Fluency (rhythm and rate) 0 1 2 3 4 5

D. Vocabulary consistent with the sign language or system 0 1 2 3 4 5

E. Key vocabulary represented 0 1 2 3 4 5

Finger spelling:

F. Production of finger spelling and Chinese characters depiction 0 1 2 3 4 5

G. Spelled correctly 0 1 2 3 4 5

H. Appropriate use of finger spelling 0 1 2 3 4 5

I. Production of numbers 0 1 2 3 4 5

IV. Overall Factors

Message processing:

A. Appropriate eye contact/movement 0 1 2 3 4 5

B. Developed a sense of the whole message V-S 0 1 2 3 4 5

C. Developed a sense of the whole message S-V 0 1 2 3 4 5

D. Demonstrated process lag time appropriately V-S 0 1 2 3 4 5

E. Demonstrated process lag time appropriately S-V 0 1 2 3 4 5

Message clarity:

F. Follow principles of discourse mapping 0 1 2 3 4 5

Environment:

G. Indicates who is speaking 0 1 2 3 4 5

So, according to the scale reported in Table 4, TSL interpreting assessment can be divided into five levels: beginner, advanced beginner, intermediate, advanced intermediate, and advanced or professional.
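This banding of the 0-5 EIPA-style scale onto the five levels can be sketched as follows. Note that the numeric cutoffs below are my own illustrative assumptions: neither the EIPA description nor the present proposal fixes exact boundaries, and any accrediting body would need to operationalize them.

```python
# Illustrative mapping from an averaged EIPA-style score (0-5) to the five
# proficiency levels discussed in this section. The band boundaries are
# ASSUMPTIONS for illustration only; the thesis does not fix numeric
# cutoffs, and `level_for` is a hypothetical helper, not an official rule.
LEVEL_BANDS = [
    (1.5, "beginner"),
    (2.5, "advanced beginner"),
    (3.5, "intermediate"),
    (4.5, "advanced intermediate"),
    (5.0, "advanced/professional"),
]

def level_for(score: float) -> str:
    """Return the proficiency level for an averaged 0-5 score."""
    if not 0.0 <= score <= 5.0:
        raise ValueError("EIPA-style scores range from 0 to 5")
    for upper, name in LEVEL_BANDS:
        if score < upper:
            return name
    return "advanced/professional"  # score == 5.0 exactly

print(level_for(4.2))  # advanced intermediate
print(level_for(1.0))  # beginner
```

With cutoffs like these, the "EIPA Secondary PSE 4.2" example mentioned earlier would fall in the advanced-intermediate band.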

Starting from the highest, the professional or proficient level is accredited when the student demonstrates broad and fluent use of Taiwan Sign Language (TSL) vocabulary, with a wide range of strategies for communicating new words and concepts, including figurative or iconic language coined by signers that has no immediate equivalent in the target oral language. This is strictly related to the issue of metaphorical expressions in TSL, which will be analyzed in the seventh chapter. Sign production errors, moreover, are minimal and never interfere with comprehension.

Prosody is correct for grammatical non-manual markers and for affective purposes. Complex grammatical constructions are typically not a problem, and comprehension of signed messages is very good, with all details of the original message conveyed. This level allows interpreters to function proficiently in all the different contexts that are the object of analysis in the present thesis. From a pedagogical point of view, an individual at this level is capable of clearly and accurately conveying the majority of interactions within the classroom, though this remains somewhat utopian, as this level is not yet attested in Taiwan.

In professional, authentic situational contexts, the aforementioned criteria should form the basis for evaluating the Chinese-TSL interpreter; at the level of student performance assessment, teachers and interpreting trainers could likewise base their evaluation on these criteria.

The other levels are not professional and can therefore only be used to assess, and later evaluate, the level of a student. Professional sign language interpreters, irrespective of whether they work in educational contexts or in conferences, should be assessed only on the basis of the level just illustrated, which can also be used for sign language interpreting trainees' final exam, prior to obtaining their interpreting license.

Students who have reached the advanced-intermediate level should be able to use a broad TSL/Chinese vocabulary (even if the definition of "broad" remains to be operationalized), with sign production that is generally correct or that at least does not interfere with conveying the message. They should also have good strategies for conveying information when a specific sign is not yet established in their acquired vocabulary. Syntactic constructions, both oral and signed, are generally clear and consistent; however, unlike for professional interpreters, complex information may still pose problems. Prosody is good, with appropriate facial expression most of the time, although students at this level may still have difficulty using facial expression in complex sentences, i.e. syntactically difficult signed structures, and with adverbial non-manual markers. Fluency may deteriorate as the rate or complexity of communication increases. When signing, they are able to use space consistently most of the time, although complex constructions or extended use of discourse cohesion may still pose problems. Passive understanding of the vast majority of signed messages is good, provided they are not signed at full speed, but the translation may lack some of the complexity and nuances intrinsic to the original message. A student at this level has no problem with passive understanding, though s/he might have some difficulty in active production, especially in conveying complex topics or in handling rapid turn-taking.

Moving down the scale, students who have reached the intermediate level in TSL interpreting have a fairly good knowledge of basic TSL vocabulary, but lack vocabulary for more technical, complex, or academic topics. Hearing trainees are able to sign fairly fluently with some consistent prosody, but pacing is still slow, with occasional pauses for vocabulary or complex structures. Sign production may show some errors, but these generally do not interfere with communication. Grammatical production may still be incorrect, especially for complex structures, but is generally intact for everyday language. Hearing students are normally able to understand signed messages but may sometimes need repetition. The voiced translation often lacks the depth and subtleties of the original message; in other words, they are not able to grasp the semantic nuances intrinsic to some signed messages. An individual at this level should be able to communicate very basic classroom content, but may interpret complex information incorrectly, resulting in a message that is not always clear and accurate.

Beginners are further divided into advanced beginners and pure beginners. They differ in the amount of vocabulary at their disposal: advanced beginners command a basic vocabulary, whereas pure beginners possess a very limited repertoire.

In pure beginners, production is most of the time incomprehensible, and TSL syntax is almost nonexistent, remaining very close to Mandarin Chinese structure; that is, the active production of students at this level resembles Manually-Coded Chinese (MCC) or Pidgin Sign English (PSE). Sign production lacks prosody and use of space for the vast majority of the interpreted message. An individual at this level is not recommended for interpreting, and neither are advanced beginners, who often hesitate when signing, as if searching for the right word. They make many grammatical errors, although basic signed sentences appear intact; more complex syntactic structures seem to create many problems. Such an individual is able to read signs at the word level and at the simple sentence level, but complete or complex sentences often require repetitions and repairs. Although advanced beginners are able to use signed prosody and space to a certain extent, their use is inconsistent and often incorrect. The afore-mentioned levels have been structured according to the American EIPA. However, it seems opportune to contextualize this model and see whether it meets the needs of the specific situational context in Taiwan. In what follows, I will give some non-exhaustive43 information on the TSL interpreting exam in Taiwan.

First of all, the technical name of the exam for TSL interpreters is Level C technician for sign language translation, and the certificate is issued by the Central Region Office, Council of Labor Affairs.44 Its validity is unlimited: irrespective of how often the examinee works as a sign language interpreter, once the certificate is obtained, it is valid for the rest of the examinee's life.

According to official sources45, from the very first year this accreditation exam took place, i.e. 1974, up until 2009, 190 certificates had been issued in Taiwan.

The candidate must be a citizen of the Republic of China, over fifteen years old or holding a junior high school diploma. The exam is divided into three parts. The first part, named "sign language interpreter professional knowledge", tests the candidate's knowledge of Deaf culture, the deontological norms and regulations that TSL interpreters should abide by, related laws, and current affairs. This is a fundamental component of sign language interpreting, as confirmed by Ginger Hsu, one of the leading TSL interpreters in Taiwan. Unfortunately, though, as can be seen in Table 4, knowledge of Deaf culture is not one of the assessment parameters, at least in the EIPA model. The second part of the exam is called "sign language interpreting for general public services" and consists of interpreting from sign language into oral language and vice versa. In the last part of the exam, the candidate must prove his or her ability to interpret bidirectionally. Exams are held once a year, in winter.

43 For an exhaustive presentation of the interpreting exams and their content, the reader can refer to http://ir.chna.edu.tw/bitstream/310902800/9160/2/177003%E8%A1%93%E7%A7%91.pdf (only in Chinese).
44 http://www.labor.gov.tw/
45 www.104learn.com.tw/cfdocs/edu/certify/certify.cfm?cert_no=4024001018&fb_source=message

From the characteristics of the exam, we can infer that theoretical and practical knowledge of Deaf culture is a fundamental component that a good interpreter cannot do without and that should, therefore, be included in the assessment parameters of professional exams.

6.4.2 TSLIAE

In this section, I will briefly analyze Taiwan's current assessment tables and parameters, before formulating my own assessment model. I would like to extend particular gratitude to TSL interpreter Ginger Hsu for providing me with the TSLIAE (Taiwan Sign Language Interpreting Assessment and Evaluation) grids used by evaluators in Taiwan (available only in Chinese; laborious and meticulous, though not necessarily effective), and for kindly granting me an interview.46

Table 5 Oral language to sign language interpretation evaluation sheet (first part of the exam)

Candidate's name / Exam date (y/m/d/h) / Exam number / Session
Exam part: Oral Language to Sign Language Interpretation Evaluation grid

(1) If any of the following applies, the candidate automatically fails (tick the relevant box):
□ 1. Absent
□ 2. Not completed
□ 3. Taking the exam on behalf of a third party
□ 4. Failure to comply with the examination requirements (exam invalid)

(2) If none of the above applies, the evaluation is as follows (evaluation component; percentage weight; reference standard; highest score):46

1. Sign Language Interpretation Skills (25%) — Fluency: Excellent (21-25), Good (16-20), Acceptable (11-15), Sufficient (6-10), Insufficient (1-5). Highest score: 25.
2. Content Expression Correctness (25%) — Correctness: Excellent (21-25), Good (16-20), Acceptable (11-15), Sufficient (6-10), Insufficient (1-5). Highest score: 25.
3. Morphology and Application of Sign Language Vocabulary (25%) — Minus 0.5 for every sign language vocabulary omission. Highest score: 25.
4. Facial Expressions, Deportment, Dress (20%) — Expression, deportment, dress: Excellent (16-20), Good (11-15), Acceptable (6-10), Insufficient (1-5). Highest score: 20.
5. Sign-Oral Cued Speech47 (5%) — Time control: Excellent (21-25), Good (16-20), Acceptable (11-15), Sufficient (6-10), Insufficient (1-5). Highest score: 5.
Total: 100. Final mark: ___
Note: the evaluation is out of 100; 60 is the pass threshold.
Exam result: 1. □ Pass 2. □ Fail
Evaluation Committee signature (do not sign before the end of the test)

46 The material provided was in Chinese; the English translation is mine.
47 Cued speech is a system of communication used with and among deaf or hard-of-hearing people. It is a phonemic-based system which makes traditionally spoken languages accessible by using a small number of handshapes (representing consonants, or characters in TSL) in different locations near the mouth (representing vowels), as a supplement to lipreading. It is now used with people with a variety of language, speech, communication, and learning needs.

Table 6 Sign language to oral language interpretation evaluation sheet (second part of the exam)

Candidate's name / Exam date (y/m/d/h) / Exam number / Session
Exam part: Sign Language to Oral Language Interpretation Evaluation grid

(1) If any of the following applies, the candidate automatically fails (tick the relevant box):
□ 1. Absent
□ 2. Not completed
□ 3. Taking the exam on behalf of a third party
□ 4. Failure to comply with the examination requirements (exam invalid)

(2) If none of the above applies, the evaluation is as follows (evaluation component; percentage weight; reference standard; highest score):

1. Oral Language Interpretation Skills (25%) — Fluency: Excellent (21-25), Good (16-20), Acceptable (11-15), Sufficient (6-10), Insufficient (1-5). Highest score: 25.
2. Content Expression Correctness (25%) — Correctness: Excellent (21-25), Good (16-20), Acceptable (11-15), Sufficient (6-10), Insufficient (1-5). Highest score: 25.
3. Morphology and Application of Oral Language Vocabulary (25%) — Minus 0.5 for every vocabulary omission. Highest score: 25.
4. Tone (20%) — Tone (cadence): Excellent (16-20), Good (11-15), Acceptable (6-10), Insufficient (1-5). Highest score: 20.
5. Oral-Sign Cued Speech (5%) — Time control: Excellent (21-25), Good (16-20), Acceptable (11-15), Sufficient (6-10), Insufficient (1-5). Highest score: 5.
Total: 100. Final mark: ___
Note: the evaluation is out of 100; 60 is the pass threshold.
Exam result: 1. □ Pass 2. □ Fail
Evaluation Committee signature (do not sign before the end of the test)

Table 7 Oral sign language bidirectional interpretation evaluation sheet (Hearing Evaluator Only) (third part of the exam, 1)

Candidate's name / Exam date (y/m/d/h) / Exam number / Session
Exam part: Oral Sign Language Bidirectional Interpretation Evaluation grid

(1) If any of the following applies, the candidate automatically fails (tick the relevant box):
□ 1. Absent
□ 2. Not completed
□ 3. Taking the exam on behalf of a third party
□ 4. Failure to comply with the examination requirements (exam invalid)

(2) If none of the above applies, the evaluation is as follows (evaluation component; percentage weight; reference standard; highest score):

1. Oral Language Interpretation Skills (20%) — Fluency: Excellent (21-25), Good (16-20), Acceptable (11-15), Sufficient (6-10), Insufficient (1-5). Highest score: 20.
2. Tone (20%) — Correctness: Excellent (21-25), Good (16-20), Acceptable (11-15), Sufficient (6-10), Insufficient (1-5). Highest score: 20.
3. Sign Language Interpretation Skills (20%) — Minus 0.5 for every oral language vocabulary omission. Highest score: 20.
4. Facial Expression, Deportment (20%) — Tone (cadence): Excellent (16-20), Good (11-15), Acceptable (6-10), Insufficient (1-5). Highest score: 20.
5. Morphology and Applied Use of Vocabulary (20%) — Time control: Excellent (21-25), Good (16-20), Acceptable (11-15), Sufficient (6-10), Insufficient (1-5). Highest score: 20.
Total: 100. Final mark: ___
Note: the evaluation is out of 100; 60 is the pass threshold.
Exam result: 1. □ Pass 2. □ Fail
Evaluation Committee signature (do not sign before the end of the test)


Table 8 Oral sign language bidirectional interpretation evaluation sheet (Deaf evaluator only) (third part of the exam, 2)

Candidate's name / Exam date (y/m/d/h) / Exam number / Session
Exam part: Oral to Sign Language Interpretation Evaluation grid

(1) If any of the following applies, the candidate automatically fails (tick the relevant box):
□ 1. Absent
□ 2. Not completed
□ 3. Taking the exam on behalf of a third party
□ 4. Failure to comply with the examination requirements (exam invalid)

(2) If none of the above applies, the evaluation is as follows (evaluation component; percentage weight; reference standard; highest score):

1. Sign Language Interpretation Skills (25%) — Fluency: Excellent (21-25), Good (16-20), Acceptable (11-15), Sufficient (6-10), Insufficient (1-5). Highest score: 25.
2. Content Expression Correctness (25%) — Correctness: Excellent (21-25), Good (16-20), Acceptable (11-15), Sufficient (6-10), Insufficient (1-5). Highest score: 25.
3. Morphology and Applied Use of Sign Language Vocabulary (25%) — Minus 0.5 for every sign language vocabulary omission. Highest score: 25.
4. Facial Expression, Deportment (20%) — Facial expression, deportment: Excellent (16-20), Good (11-15), Acceptable (6-10), Insufficient (1-5). Highest score: 20.
5. Sign-Oral Cued Speech (5%) — Time control: Excellent (21-25), Good (16-20), Acceptable (11-15), Sufficient (6-10), Insufficient (1-5). Highest score: 5.
Total: 100. Final mark: ___
Note: the evaluation is out of 100; 60 is the pass threshold.
Exam result: 1. □ Pass 2. □ Fail
Evaluation Committee signature (do not sign before the end of the test)

Table 9 Sign language interpreter certification on technical subjects evaluation sheet (Hearing Evaluator Only)

Score ratio by sector and skill:
First part: no rating.
Second part (sign to oral language interpretation; total out of 100):
- Oral Language Interpretation Skills: 25%
- Content Expression Correctness: 25%
- Morphology and Applied Use of Oral Language Vocabulary: 25%
- Tone: 20%
- Oral-Sign Cued Speech: 5%
Third part (oral to sign language bidirectional interpretation; total out of 100):
- Oral Language Interpretation Skills: 20%
- Tone: 20%
- Sign Language Interpretation Skills: 20%
- Facial expression, deportment: 20%
- Morphology and Applied Use of Vocabulary: 20%

Table 10 Sign language interpreter certification on technical subjects evaluation sheet (Deaf evaluator only)

Score ratio by sector and skill:
First part (oral to sign language interpreting; total out of 100):
- Sign Language Interpretation Skills: 25%
- Content Expression Correctness: 25%
- Morphology and Applied Use of Sign Language Vocabulary: 25%
- Facial expression, deportment, dress: 20%
- Sign-Oral Cued Speech: 5%
Second part: no rating.
Third part (oral to sign language bidirectional interpreting; total out of 100):
- Sign Language Interpretation Skills: 25%
- Content Expression Correctness: 25%
- Morphology and Applied Use of Sign Language Vocabulary: 25%
- Facial expression, deportment: 20%
- Sign-Oral Cued Speech: 5%

Table 11 Sign language interpreter certification on technical subjects (final general evaluation sheet)

Candidate's name / Exam date (y/m/d/h) / Exam number
Exam Supervisor's signature (do not sign before the end of the test)
Overall results: □ Pass □ Fail □ Absent

Result for each part (components; points; pass/fail/absent; evaluators' signature; notes):
First part: Oral Language to Sign Language Interpretation
Second part: Sign Language to Oral Language Interpretation
Third part: Oral Sign Language Bidirectional Interpretation
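Abstracting from the grids just presented, the arithmetic that Tables 5 through 8 encode — weighted components summing to 100, a deduction of 0.5 points per vocabulary omission, and a pass threshold of 60 — can be sketched as follows. Component names and maxima follow Table 5; the function names and the sample scores are illustrative only, not part of the official exam.

```python
# Sketch of the TSLIAE grid arithmetic (Tables 5-8): components totalling
# 100 points, a per-omission deduction on the vocabulary component, and a
# pass threshold of 60. Names/maxima follow Table 5 (oral-to-sign grid);
# the helper functions are illustrative, not the official procedure.
MAX_POINTS = {
    "sign_interpretation_skills": 25,
    "content_expression_correctness": 25,
    "sign_vocabulary": 25,
    "facial_expression_deportment_dress": 20,
    "sign_oral_cued_speech": 5,
}
PASS_THRESHOLD = 60
OMISSION_PENALTY = 0.5  # minus 0.5 per omitted vocabulary item

def final_mark(scores: dict, vocabulary_omissions: int) -> float:
    """Total the component scores, applying the omission deduction
    to the vocabulary component (floored at zero)."""
    total = 0.0
    for component, awarded in scores.items():
        awarded = min(awarded, MAX_POINTS[component])  # cap at the maximum
        if component == "sign_vocabulary":
            awarded = max(0.0, awarded - OMISSION_PENALTY * vocabulary_omissions)
        total += awarded
    return total

def passed(scores: dict, vocabulary_omissions: int) -> bool:
    return final_mark(scores, vocabulary_omissions) >= PASS_THRESHOLD

# Example: a candidate rated roughly "Good" on most components,
# with four vocabulary omissions (25 - 4 * 0.5 = 23 on vocabulary).
scores = {
    "sign_interpretation_skills": 18,
    "content_expression_correctness": 17,
    "sign_vocabulary": 25,
    "facial_expression_deportment_dress": 14,
    "sign_oral_cued_speech": 4,
}
print(final_mark(scores, vocabulary_omissions=4))  # 18+17+23+14+4 = 76.0
print(passed(scores, vocabulary_omissions=4))      # True
```

The same skeleton covers Tables 6 through 8 by swapping in their component names and weights; only Table 7 departs from this pattern, weighting all five components equally at 20%.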

In the aforementioned tables (Tables 5 through 11), the reader can see some differences compared to the EIPA model. The most striking is probably the importance attributed to facial expressions and to proxemic48 features, such as deportment and even dress code.

Indeed, facial expressions are an indispensable linguistic element found in every sign language, because they convey fundamental grammatical traits of the given language. For example, a yes/no question requires raising the eyebrows and widening the eyes while leaning the head forward, whereas a wh-question (e.g., "What?", "Where?") requires lowering the eyebrows and leaning the head forward. Eye gazes, eye shifts, clenched teeth, and head tilts are examples of gestures and facial expressions used to convey ideas of distance and direction.

48 Proxemics can be defined as the interrelated observations and theories of man's use of space as a specialized elaboration of culture (Hall, 1966).

Comparisons between people, places, and things are often expressed through head and body shifts, which thus convey grammatical traits. Likewise, raising the eyebrows while signing about a person who is present is also used for someone who is absent; only the sign changes. It is therefore very important that the signer pair the correct facial expression with the particular sign.

Facial expressions are a fundamental component of what is known as Natural Sign Language (NSL) as opposed to Manual Sign Language (MSL). It seems opportune to further clarify the distinction between these two types of sign language, which will be explored in the next section, before summing up the main traits of both the EIPA evaluation grid and the currently used TSLIAE evaluation grid in an attempt to draft a new model, more concise yet equally effective.

6.4.3. The issue of “naturality”: natural sign language (NSL) vs. manual sign language (MSL)

In this section, I address the issue of "the nature of naturality" in TSL, or more precisely in NTSL (Natural Taiwan Sign Language). What can be defined as NTSL? What is the defining and limiting border between natural and manual sign language? Which one should be examined and tested in professional certifications?

In a recent international handbook edited by Pfau et al. (2011), there is an interesting historical perspective on sign language linguistics. According to the handbook, research over the past 50 years has shown that sign languages are indeed independent natural languages with well-formed and complex grammatical systems, no less compact and developed than those of spoken languages. Simply put, natural languages exist in two distinct modalities: the visual-manual modality of sign languages and the auditory-oral modality of spoken languages.

From a diachronic point of view, the history of sign linguistic research can be divided into three periods. In the first, researchers focused on the underlying identity between spoken and signed languages. Determined to prove the linguistic status of sign languages against the widespread belief that they were only rough, underdeveloped pantomime and gesture, early sign linguists (over)de-emphasized the role of iconicity in sign languages. The sign language most investigated in this period was ASL; as a consequence, there was little typological research.

The second phase started in the 1980s, when linguists began investigating the issue of modality along with the similarities and differences between sign(ed) and spoken languages. Their aim was mainly to analyze the influence of modality on linguistic structure, the modality-specific properties of signed and spoken languages, and modality-independent linguistic universals, as well as psycho- and neurolinguistic processes and representations. Starting from the observation that sign languages seem typologically more homogeneous than spoken languages, many grammatical properties of sign languages were related to specific properties of the visual-manual modality.

However, in the first two phases, i.e. the early and modern periods, research was mainly focused on the comparison of sign languages and spoken languages, whilst cross-linguistic studies on sign languages were quite rare.

Only once non-Western sign languages began to be studied did it become clear that sign languages show more variation than originally predicted. This third phase, which started in the late 1990s and is known as the post-modern period, approached sign language typology more seriously. Today, we can observe an increasing interest in comparative and experimental studies on sign languages at all linguistic levels, and in less-studied (Western and non-Western) sign languages, such as the one that is the object of our study, namely TSL (Taiwan Sign Language).

TSL, just like any other sign language, can be divided into manual sign language, a.k.a. manually coded language, and natural sign language. While manual sign language can be defined quite accurately, with several renowned major approaches, it seems more problematic to define the properties of natural sign languages with the same accuracy.

Manual sign languages are representations of spoken languages in a gestural-visual form. In other words, they can be defined as a "sign language" version of spoken languages. These languages are not natural, insofar as they were invented by hearing people and strictly follow the grammar of the written form of oral languages; they have not evolved naturally in Deaf communities. In the past, manual sign languages were mainly used in Deaf education, thus causing a major trauma in the development of deaf children's native language, and by past generations of sign language interpreters. It goes beyond the purpose of this thesis to provide a historical excursus of the genesis and development of manually coded languages, or to delve into the controversies between the French oralist school of Épée's time and the manually coded language system. Suffice it to say that the emerging recognition of sign languages in recent times has curbed the growth of manually coded languages, and in many places interpreting and educational services now favor the use of the natural sign languages of the Deaf community. As far as TSL is concerned, the situation is slightly different for historical and geographic reasons.

Languages, both signed and oral, are alive; thus, they change over time. As previously mentioned, TSL is closer to JSL (Japanese Sign Language) than to CSL (Chinese Sign Language) for historical reasons. When Taiwan was occupied by the Japanese, they brought along their language, both oral and signed. In Taiwan Mandarin Chinese, especially among the older generations, these traits are still present, and the same happened with signed language. Later, in 1973, according to local sign language interpreter Ginger Hsu (personal communication, 2012), the national government wanted to unify the different varieties of signed language present on the island of Taiwan. At the time, the two most renowned schools were in Taipei and Tainan. Officials and linguists decided to preserve characteristics from both schools and to fuse them together with manually coded language traits. In other words, the two systems, natural and manual sign language, started merging after 1973, so that it now seems hard, if not impossible, to separate one from the other in a linguistically purist way.

According to Ginger Hsu (personal communication, 2012), this fusion is irreversible, to the extent that some first-generation interpreters are "too" natural, i.e., they use a sign language with which younger Deaf generations are no longer familiar.

According to the communicative principle, what matters is that the message comes across to the Deaf interlocutor/evaluator flawlessly, irrespective of whether, during an exam, a candidate uses only natural signs or a mixture of natural and manual signs.

If one had to identify factors which together could define the main properties of natural sign language as opposed to manual sign language, I would say that the following could be taken into consideration: word order, linear vs. simultaneous structure, and the prominence of facial expressions. The list is not meant to be exhaustive.

The aforementioned factors can be defined as the conditio sine qua non for the "naturality of sign".

In linguistic typology, word order defines the sentence structure. If the language is S-V-O, it means that the subject comes first, the verb second, and the object third.

Languages may be classified according to the dominant sequence of these elements.

Though important for classification, word order does not seem to be the most important factor in defining natural Taiwan Sign Language: it appears flexible and elastic, varying according to which element of the sentence the signer emphasizes, and where there is verb agreement, word order does not seem to be rigid.

On the other hand, the linear versus simultaneous structure seems to be much more defining. Natural sign languages, including TSL, make ample use of the space around the body and surrounding the hands, especially for locative verbs and comparison structures. For instance, signing the sentence "the cat eats the mouse" word by word (sign by sign) is an obviously manual representation of the concept to be conveyed, because in natural sign language both elements (subject and object) would be signed in different locations in the space surrounding the signer, and the verb would proceed from the agent towards the patient in a "3D pattern". This seems quite natural because, from a cognitive point of view, grammar may be defined as the attempt to describe the interaction of the participants, so it seems rational to first describe who the participants are, so that the verb ends up in the last place.

The third factor is facial expressions, which, as previously mentioned, convey the grammatical parts of speech. All these elements, which are to be found in every natural sign language, should also somehow be present in the evaluation grid used for TSL interpreting evaluation.

In the next section, I will try to reconcile all the different factors analyzed so far.

6.5 Tentative new TSLIAE (nTSLIAE) model

However, as mentioned in the previous chapter, facial expressions are indeed a fundamental grammatical component of any sign language, as confirmed by the other interpreter I interviewed (Ginger Hsu, personal communication, 2012). As a consequence, facial expressions should be present in any TSLIAE grid, and they are, as we have seen in Tables 5 through 11.

Another important factor is the candidate's knowledge of Deaf culture, which is not explicitly present in any of the aforementioned tables. According to Ginger Hsu (personal communication, 2012), cultural knowledge is a fundamental asset for translating.

Furthermore, deontological parameters are also important. For instance, students and candidates should be aware of the fact that in Taiwan, sign language interpreters do not enjoy the same rights and equal status as oral language interpreters, and that they can earn at most 1,600 dollars per day (Ginger Hsu, personal communication, 2012), even though linguists have long proven that sign languages are indeed languages.

As previously mentioned, research in the past 50 years has proven beyond a doubt that sign languages are complete and independent natural languages with complex grammatical systems fully comparable to those of spoken languages. In other words, natural languages exist in two different modalities: the visual-manual modality of sign languages and the auditory-oral modality of spoken languages.

Given these premises, it seems opportune to have an evaluation grid that is equally effective but less complex than the ones previously mentioned. As one of the interpreters on the evaluation committee admitted to me, they do not always follow the available grids; rather, they evaluate candidates according to the communicative principle: if the candidate is interpreting into sign language, it is essential for the Deaf evaluator to understand what s/he is signing; vice versa, if the candidate is interpreting into an oral language, it is important for the hearing evaluator to understand what s/he is saying. Only then does the interpreter on the evaluation committee proceed to compile the evaluation grid.

We think it necessary to have a more concise evaluation grid which enables both

Deaf and hearing evaluators to assess the candidate as objectively as possible.

Table 12 New TSLIAE (nTSLIAE) Model

Candidate’s name: ______  Exam date (y/m/d/h): ______  Exam number: ______  Session: ______
Exam part: Oral Language to Sign Language Interpretation
General Evaluation Grid

(1) Before the interpreting exam, candidates should answer a series of questions aimed at testing their knowledge of Deaf culture, their awareness of working conditions, and their approach to professional ethics. (Tick the □)


□ Pass □ Fail

(2) Is the communicative principle satisfied? Is the message conveyed without any major problem and/or comprehension obstacle to both the Deaf and hearing evaluators? If this condition is not met for even one evaluator, the candidate automatically fails. (Tick the □)

□ Pass □ Fail

(3) If both (1) and (2) are passed, the evaluation shall be as follows:

Evaluation components (percentage); Reference standard; Highest evaluation; Actual evaluation; Notes

1. Interpreter performance (25%) [49]
   Fluency: Excellent (21 to 25); Good (16 to 20); Acceptable (11 to 15); Sufficient (6 to 10); Insufficient (1 to 5). Highest evaluation: 25.

2. Clarity and completeness of the message (25%)
   Correctness: Excellent (21 to 25); Good (16 to 20); Acceptable (11 to 15); Sufficient (6 to 10); Insufficient (1 to 5). Highest evaluation: 25.

3. Key vocabulary represented (20%)
   Minus 0.5 for every vocabulary omission (oral or signed). Highest evaluation: 20.

4. Prosodic and proxemic awareness (15%) [50]
   Excellent (14 to 15); Good (11 to 13); Acceptable (6 to 10); Insufficient (1 to 5). Highest evaluation: 15.

5. Production of numbers and Chinese characters spelling (15%)
   Excellent (14 to 15); Good (11 to 13); Acceptable (6 to 10); Insufficient (1 to 5). Highest evaluation: 15.

Total: 100%. Note: the evaluation is out of 100; 60 is the pass threshold. Final mark: ______

49 Including oral and signed language interpretation skills, whether the grammar used by the candidate pertains to Natural Taiwan Sign Language (NTSL) or Manual Signed Chinese (MSC), use of Mandarin Chinese syntactic markers, correct word selection, no omissions and/or additions, and amount of sign vocabulary.

Exam result 1. □ Pass 2. □Fail

Evaluation Committee (do not sign before the end of the test) Signature
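To make the arithmetic of Table 12 concrete, the following is a minimal sketch in Python of how a committee's marks could be totalled. The component weights, the 0.5-point deduction per vocabulary omission, and the pass threshold of 60 are taken from the grid above; the function and variable names (`total_score`, `key_vocabulary_score`, etc.) are hypothetical illustrations, not part of any official procedure.

```python
# Illustrative sketch of the nTSLIAE scoring arithmetic (hypothetical code).
# Weights and the pass threshold follow Table 12.

WEIGHTS = {
    "interpreter_performance": 25,
    "clarity_completeness": 25,
    "key_vocabulary": 20,
    "prosodic_proxemic": 15,
    "numbers_characters": 15,
}
PASS_THRESHOLD = 60

def key_vocabulary_score(omissions, highest=20, penalty=0.5):
    """Item 3: start from the highest mark, deduct 0.5 per omission."""
    return max(0.0, highest - penalty * omissions)

def total_score(scores):
    """Sum the component scores; no component may exceed its weight."""
    for component, score in scores.items():
        if score > WEIGHTS[component]:
            raise ValueError(f"{component}: {score} exceeds {WEIGHTS[component]}")
    return sum(scores.values())

# A hypothetical candidate's marks, one per component:
scores = {
    "interpreter_performance": 18,                        # Good (16 to 20)
    "clarity_completeness": 17,                           # Good (16 to 20)
    "key_vocabulary": key_vocabulary_score(omissions=4),  # 20 - 2.0 = 18.0
    "prosodic_proxemic": 12,                              # Good (11 to 13)
    "numbers_characters": 9,                              # Acceptable (6 to 10)
}
total = total_score(scores)
print(total, "Pass" if total >= PASS_THRESHOLD else "Fail")  # 74.0 Pass
```

The two checkbox items (Deaf culture questions and the communicative principle) act as gates: the numeric total is computed only if both are passed.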

As the reader can see, the above table presents several benefits compared with the other aforementioned tables. First of all, it gives evaluators the opportunity to carry out an overall, yet specific, evaluation of the candidate with no need for sub-tables for the three different parts of the exam.

Another aspect that the reader might notice at first glance is the threshold for taking the exam. Candidates must meet the requirements in (1) before taking the exam, and those in (2) before being evaluated by the experts' committee.

It is a model aimed at delineating tools that could be used to evaluate adult TSL interpreters in real situational contexts, to assess the proficiency of TSL interpreting students' performances, and to assess the proficiency of educational interpreters, thereby developing suitable tools for TSLI.

This new TSLIAE (nTSLIAE) model puts more emphasis on aspects such as knowledge of Deaf people's culture, which is a necessary prerequisite for any interpreter to do a decent job, and on candidates' awareness of their own future job under local rules and regulations. The different evaluation components are also made simpler, for the benefit of the evaluating committee, and some additional features, like the "production of numbers and Chinese characters spelling" item, are added for reasons of completeness. Finally, the significance of prosodic and proxemic elements (which, according to Hall (1966), can be defined as the interrelated observations and theories of man's use of space as a specialized elaboration of culture), such as facial expressions (conveying both grammar and emotions), deportment, use of space around the body and surrounding the hands, appropriate eye contact/movement, and dress, is highly emphasized for all the reasons illustrated throughout the chapter.

50 Including facial expressions (conveying both grammar and emotions), deportment, use of space around the body, appropriate eye contact/movement, and dress.

In the next section, I am going to illustrate some limitations intrinsic to the present chapter.

6.6 Conclusion and limitations of this chapter

This attempt at establishing common parameters for Taiwan Sign Language Interpreting Assessment and Evaluation (TSLIAE) of trainee interpreters' performance, both in educational interpreting and in real situational contexts, is by definition preliminary and tentative, thus introductory in nature, with no pretense to exhaustiveness whatsoever.

Furthermore, many concepts could be further developed both in terms of literature exploration and practical application. This may be considered a pilot study, an approximate model of what tools could be used in the evaluation and assessment of TSL interpreting, making reference to the relevant literature, to the American Educational Interpreter Performance Assessment (EIPA), to Taiwan's current assessment parameters, and to the precious help of a series of deaf and hearing signers and sign language interpreters. It could, however, be further explored and refined by future research.

This chapter is just a first step towards exploring what parameters could be commonly shared in TSLIAE, with an emphasis on its tools; applying it in practical contexts, by providing examples or lesson plans, will be done separately. Further issues, such as the specifics of cross-culturally mixed classes, how to deal with specific TSL interpreting strategies and how to evaluate them, or how to evaluate the way specific linguistic structures like proverbs or idiomatic expressions are rendered in sign language, will be discussed elsewhere. This chapter was merely an attempt to bring a fresh perspective to interpreter assessment and evaluation techniques in a domain which has not been much researched by interpreting scholars.

The point of departure for the present conceptual chapter was the pedagogical and professional need to find commonly shareable parameters for the assessment and evaluation of Taiwan Sign Language (TSL) interpreters’ performance, irrespective of their degree of professionalism, both in the classroom setting and in real, authentic situational contexts.

Hence the theoretical and speculative idea of proposing a model, called the TSLIAE, to establish a set of tools which could be used both in TSL interpreting training programs and outside the classroom. This chapter is unique of its kind, because it is the first preliminary attempt to discuss the assessment of TSL interpreting from an academic point of view. As previously mentioned, the debate on TSL interpreting in general has been very modest and related papers few and far between. Hence the need to find a pedagogical standard to be adopted as an evaluation parameter, namely the TSLIAE model based on the American EIPA, i.e. the Educational Interpreter Performance Assessment tool.

As already mentioned, according to the TSLIAE model, performances can be divided into five different levels, namely beginner, advanced beginner, intermediate, advanced intermediate, and advanced, which corresponds to the professional level. In the TSLIAE model, just as in the EIPA system, evaluators use a Likert scale to assess specific skills, as illustrated in the third section. Scores for each skill range from 0 (no skills demonstrated) to 5 (advanced, native-like skills). The scores from all the different evaluators are averaged for each skill area and each domain, as well as for the overall test score.

An individual's final score is the summary total score. For example, an interpreter could report her/his score as "TSLIAE Secondary PCE 4.2", which represents the grade level, the language modality, and the total summary TSLIAE score. Errors should also be carefully subdivided because, according to Nord's classification of errors51, pragmatic errors may put the global functionality of the communicative interaction at high risk.
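The Likert-scale averaging described above can be sketched as follows, assuming a simple mean across evaluators and then across skill areas; the skill names and the helper functions are hypothetical illustrations, not the official EIPA computation.

```python
# Hypothetical sketch of EIPA/TSLIAE-style score aggregation: each evaluator
# rates each skill on a 0 (no skills demonstrated) to 5 (advanced,
# native-like skills) Likert scale; scores are averaged per skill,
# then summarized into one overall score.

def average_per_skill(ratings):
    """ratings: {skill: [one score per evaluator]} -> {skill: mean score}"""
    return {skill: sum(vals) / len(vals) for skill, vals in ratings.items()}

def summary_score(ratings):
    """Overall score: mean of the per-skill averages, rounded to one decimal."""
    per_skill = average_per_skill(ratings)
    return round(sum(per_skill.values()) / len(per_skill), 1)

# Three evaluators rating four hypothetical skill areas:
ratings = {
    "grammar": [4, 4, 5],
    "prosody": [4, 4, 4],
    "vocabulary": [5, 4, 4],
    "discourse_mapping": [4, 5, 4],
}
print(summary_score(ratings))  # a single figure of the kind reported as "4.2"
```

The rounded single figure mirrors the way a result such as "TSLIAE Secondary PCE 4.2" condenses many individual ratings into one reportable score.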

Cultural errors do not obstruct communication per se, but they make it more difficult and may shed a negative light on the speaker's social image. Finally, linguistic errors or mistakes may turn into pragmatic ones if they lead to misunderstandings, with more or less serious consequences for the overall understanding of the message.

Apart from this model, this chapter has also benefited from Taiwan's current assessment parameters, from the help of local Taiwan Sign Language interpreter Ginger Hsu, and from the precious help of a series of deaf and hearing signers and sign language interpreters, without whom this study would not have been possible.

Our newly postulated TSLIAE (nTSLIAE) model puts more emphasis on aspects such as knowledge of Deaf people's culture, which is a necessary prerequisite for any interpreter to do a decent job, and on candidates' awareness of

51 Nord (1997), presented at the seminar held in NTNU in November 2010. In the present study, it is not further analyzed for reasons of space. For an insightful discussion on error classification in Nord’s theory, please refer to Nord (1997).

their own future job according to local rules and regulations. Last but not least, we should also consider that there may be confounding factors in the evaluation process, namely those analyzed in the previous chapters. Also, the signed text used during the exams is sometimes not natural signed language but Signed Chinese (Tai, personal communication, 2012); candidates should therefore be thoroughly prepared to tackle whichever kind of signed language is used during the exam session.

As previously mentioned, the different evaluation components that we propose are also made simpler, for the benefit of the evaluating committee, and some additional features, like the "production of numbers and Chinese characters spelling" item, are added for reasons of completeness. Finally, the significance of prosodic and proxemic elements, such as facial expressions (conveying both grammar and emotions), deportment, use of space around the body and surrounding the hands, appropriate eye contact/movement, and dress, is highly emphasized for all the various reasons illustrated throughout the chapter.

It is up to future research to put this tentative exploration to the test and carry out some experiments in a classroom setting to establish the validity and effectiveness, or lack thereof, of this assessment method in TSL interpreter training programs. Reflecting on alternative training models and evaluation criteria will help trainers become more aware of what they are doing in terms of task assessment and will also provide them with further ideas to facilitate not only the evaluation process but also the design of the examinations and the way students should be classified according to their performances.

Also, as previously mentioned, at present, thanks to the help of the professional and socially active interpreter Ginger Hsu, who collaborates with quite a few similar organizations, I am collecting questionnaires aimed at analyzing the quality aspect from the users' end, i.e. from the Deaf community's point of view, so as to see how they approach the issue.

CHAPTER SEVEN CONCLUSION

7.1 A review of the chapters

This thesis was originally intended to fill a gap in the literature, as it is the first academic contribution dealing with issues related to Taiwan Sign Language (TSL) interpreting.

Before beginning this journey, my main motivation was to prove that the neurobiological efforts underlying the interpreting task are independent of the modality, by means of a behavioral study illustrated in chapter five, a replication of an adaptation of the original research design conceived by Gile (1989), namely the tightrope experiment.

However, while investigating TSL, I decided to extend my discussion to other issues related to TSL in general, such as training and performance quality, and some challenging areas which increase the neurobiological efforts, for example figurative speech and metaphors.

More specifically, chapters one and two focused on TSL history, while chapters three to seven focus on TSL interpreting. Considering that not every reader has background knowledge of sign language(s), it seemed opportune to introduce TSL and its history before delving into interpreting per se.

Some of the issues that motivated this research were problems and inadequacies within the TSL interpreting system, such as the importance of raising the dignity of TSL interpreters, which I also tried to do by way of a behavioral study, and the need to improve the quality of the interpretation itself.


The second chapter focused on issues such as the historical development of TSL, as well as its diatopic and diachronic variation. These topics are covered not only as an introduction to TSL but also because they all directly influence the interpreting performance, insofar as interpreters should adopt certain strategies to deal with these segments of speech and should be very flexible in dealing with different varieties of TSL. This chapter also contains a linguistic description of language evolution theories and of how, in the 19th century, some linguists believed there was an evolution from the hands to the mouth, thus relegating signed languages to an inferior status. Given this purported inferior status, linguists have tried to artificially create linguistic systems which might enable Deaf people to grasp written grammatical language properly. These systems include cued speech, manually coded language, lip reading, oralism, and grammar sign language. Once again, these systems are described because TSL interpreters should be aware of them in order to tackle any difficulty that might arise from an erroneous use of these mechanisms by Deaf people themselves, as illustrated in the second chapter.

In the third chapter, the history of TSL interpreting is introduced. A group of TSL interpreters was surveyed to ascertain whether the precarious, unprofessional conditions dictated by the government are indeed so. Under the hypothesis that they are, the rest of the research aims at proving my thesis, i.e. that bimodal interpreters should share the same professional dignity as oral interpreters. The second section of the third chapter is an analysis of TSL interpreting history. The fourth section is titled "Professional Volunteers". This title is a pun: it reflects the almost voluntary nature of professional TSL interpreters' work nowadays, considering the straitened conditions in which they operate, and it is also a window of reflection on many other sectors, with which I have personally come into contact, where professionals are really volunteers. In other words, with the third chapter the attention shifted directly to TSL interpreting and related issues, focusing on interpreting history and on the status quo of TSL interpreters in Taiwan. As previously mentioned, I also focused on important issues such as performance quality and assessment in training, which is the conditio sine qua non for a good interpreter-to-be. The final part of the third chapter underlines the importance given to professional evaluation after many years of sign language interpreting history, not only in Taiwan but also abroad

(cf. Malcolm Williams, 2004) and will be further emphasized in the chapter dedicated to the issue of TSL interpreting assessment and evaluation.

Chapter four further explores some challenging areas of TSL interpreting, namely figurative speech and metaphors, which have to be taken into consideration in the evaluation process. This chapter is aimed at showing that the efforts underlying sign language interpreting are the reason why turn-taking on stage is necessary while interpreting at a sign language event.

Chapter five was an exhaustive review of the neurolinguistic studies which irrefutably prove that sign language is indeed a language in every respect and should be treated as such, because signers show exactly the same cerebral lateralization as oral speakers, irrespective of the modality through which their language is conveyed and even though their articulators are visuo-spatial in nature. Therefore, according to neurolinguistic studies, sign language is a language at all levels, as proven by the neurobiological mechanisms underlying its processing, even though it is visual rather than auditory and spatially rather than sequentially organized; as Sacks (1989: 76) points out, "as a language, it is processed by the left hemisphere of the brain which is biologically specialized for just this function". This is also proof of the plasticity of the human brain, because it is as if the left hemisphere in signers modified the visual-spatial characteristics into a whole new analytical concept, making it a language of its own, with its own rules, developing the potential intrinsically present in the neurobiological mechanisms of the human brain. Apart from the neurolinguistic literature, chapter five was also a study aimed at strengthening the tightrope hypothesis by replicating an experiment originally designed by Gile (1989). The findings of the study illustrated in chapter five also strengthen the Effort Models' "tightrope hypothesis": many errors and omissions (e/o's) are due not to the intrinsic difficulty of the corresponding source-speech segments, but to the interpreters working close to processing capacity saturation, which in Gile (1989)'s words "makes them vulnerable to even small variations in the available processing capacity for each interpreting component".

Also, another interesting aspect which emerged from the aforementioned study is the greater difficulty of rendering certain expressions in sign language in the simultaneous mode because of the intrinsic need for explanation. These expressions belong to the challenging areas of the interpreting task, of which the hardest to convey in the target (Deaf) culture and language is probably figurative speech and metaphors, analyzed in chapter four. Also, as we saw in Table 3, most of the new errors in the second version were committed by the oral interpreters. This was an interesting phenomenon which deserved to be further explored, and I did so by interviewing the oral interpreters and asking them why they had made more errors in the second version, because otherwise this could also have been interpreted as greater neurobiological effort on the part of the oral interpreters.

However, all the participants who made more mistakes in the second version unanimously told me that this happened because, after hearing the speech once, they thought they were ready to embellish or enhance it with better expressions. This process took time away from the normal flow of interpreting. In other words, they themselves confirmed that this was not due to an intrinsic difficulty of the text, to biased materials, let alone to a greater effort compared to their signing colleagues.

7.2 Concluding remarks and future research

As claimed by the title of this dissertation, this study has been a journey towards the professional equality of TSL interpreters. This study hopes to have proven that signed languages are fully fledged languages from a neurobiological point of view, and that this is the reason why TSL interpreting is by no means inferior to oral interpreting and should not be treated with a different benchmark, even from the point of view of professionals' remuneration, irrespective of budget issues.

Future research could develop in several directions. From the point of view of quality issues, future research could put my tentative exploration to the test and carry out experiments in a classroom setting to establish the validity and effectiveness, or lack thereof, of the assessment method elaborated herein in TSL interpreter training programs. Reflecting on alternative training models and evaluation criteria will help trainers become more aware of what they are doing in terms of task assessment and will also provide them with further ideas to facilitate not only the evaluation process but also the design of the examinations and the way students should be classified according to their performances.

In future studies, some of the possible research questions that scholars might want to focus their attention on are:

(f) Why is scientific research important in the field of SI and SI pedagogy?

(g) What are the main challenges, in terms of equipment, that scholars face in neurolinguistic research when it comes to SI, and most especially to SI production analysis?

(h) How could researchers overcome these technical difficulties?

(i) What might be the differences between novice interpreters and expert interpreters in terms of their brain functions and cognitive structures?

(j) Finally, how can this research be extended to sign interpreting?

As previously mentioned, future research is warranted to develop longitudinal studies which might possibly focus on the development of expertise in interpreting, thus shedding further light on the brain plasticity of interpreters.

Finally, from the point of view of figurative speech and metaphors, it would also be interesting to investigate the acquisition and recall of metaphorical versus non-metaphorical TSL signs by native and non-native signers. TSL signs could also be used to investigate the explanatory power of the structural similarity alternative versus the conceptual metaphor analysis. Thus, empirical research is warranted to shed light on other rhetorical aspects of TSL, including empirical data on the use of metaphor by signers, both native and non-native. It is quite an unexplored field, and it opens the way for a better understanding of the cognitive mechanisms underlying sign language, more specifically TSL, where research leaves much to be desired, and for a more thorough analysis, in a postmodern approach, of the same cognitive mechanisms in languages of all other modalities, in a comparative way.

Indeed, studying speech-sign bimodal bilinguals can elucidate language code-switching mechanisms and cognitive inhibition mechanisms. The question of how a bilingual subject monitors two languages within one brain has been investigated by neuropsychological studies.

Neuroimaging studies have found that interference between two languages, irrespective of modality, is associated with specific regions of the brain, particularly the left IFG, the dorsolateral prefrontal cortex, the anterior cingulate cortex, and the STG (Abutalebi et al. 2011). Indeed, the experience of signed language might influence our cognitive processing and shape our brains (Chiu 2006), and in this respect sign language interpreters provide a very good research opportunity.

More professional interpreters will be needed in the future in specialized sectors, like the medical or the legislative one. One of the main challenges for the future is to come up with lexicon that does not yet exist in sign language; it is therefore necessary for interpreters to have a linguistic background in order to coin new lexemes according to semantic word-formation rules.

7.3 Limitations of the study

It seems opportune to point out one limitation of this study before concluding.

The behavioral study carried out in chapter five, which proved our tightrope hypothesis, was conducted with a sample of ten professional interpreters, of whom five were sign language interpreters and five oral interpreters. The reason for this was to have a control group (the oral interpreters) and compare the efforts of the two groups. However, we think that the sample of signed language interpreters, though representative, is not ideally large. The reason for this lies in most interpreters' reticence to accept interviews or take part in experiments. Although all participants were duly paid for their contribution, it was still very laborious to find people willing to take part in the study. This trend is not exclusive to sign language; the same thing happens with research on oral interpreting, because what most interpreters do not understand is that their contribution is essential for the research, and the results remain anonymous. Although this was clearly stated to each and every one of the people who were contacted and invited to take part, most of them were still afraid of losing face in case of a poor performance, without understanding that our research is not aimed at evaluating how good their translational skills are but at analyzing and investigating the underlying neurobiological mechanisms. Also, the samples of interpreters and Deaf people used were in line with the principle according to which, in qualitative research, smaller but focused samples are often preferable to large samples (Denzin and Lincoln 2005).

REFERENCES

ABUTALEBI, J., BRAMBATI, S. M., ANNONI, J. M., MORO, A., CAPPA, S. F.

and PERANI, D. (2007): The neural cost of the auditory perception of

language switches: an event-related functional magnetic resonance imaging

study in bilinguals. J. Neurosci., 27(50), 13762-9.

ABUTALEBI, J. DELLA ROSA, P.A., GREEN, D.W., HERNANDEZ, M. SCIFO,

P., KEIM, R., CAPPA, S.F., COSTA, A. (2011): Bilingualism Tunes the

Anterior Cingulate Cortex for Conflict Monitoring. Cereb Cortex.

AGLIOTI S., FABBRO F., (1993): “Paradoxical selective recovery in a bilingual

following subcortical lesions”, Neuroreport 4, pp. 1359-1362.

AKAMATSU, C.T. andARMOUR, V.A. (1987): Developing written literacy in Deaf

children through analyzing Sign language.American Annals of the Deaf,

132, 46-51.

AMORINI, Giuseppe, CORTAZZI, Maria Carmela and ELETTO, Gian Maria (2000):

Idiomatismi e metafore nell'interpretazione in e dalla LIS. In: Caterina

BAGNARA, Gianpaolo CHIAPPINI, Maria Pia CONTE and Michela OTT,

eds. Viaggio nella città invisibile. Atti del 2° Convegno nazionale sulla Lingua

Italiana dei Segni. Genova, 25-27 settembre 1998. Pisa: Edizioni del Cerro,

181-194.

ANN, J., MYERS, J. and TSAY, J. (2007): "Lexical and articulatory influences on

the perception and production of words in Taiwan Sign Language." Paper

presented at the 12th International Conference on the Processing of East

Asia Related Languages. December 28-29, 2007. National Cheng Kung

University. Tainan, Taiwan.

207

ANN, Jean, MYERS, James and TSAY, Jane (in press): Lexical and articulatory

influences on phonological processing in Taiwan Sign Language. In: Rachel

CHANNON and Harry VAN DER HULST ,eds. Formational units in sign

language. Nijmegen, The Netherlands: Ishara Press.

ANN, Jean and LONG, Peng (2000): Optimality and Opposed Handshapes in Taiwan

Sign Language. In: Katherine CROSSWHITE and Joyce MCDONOUGH

MAGNUSON, eds. University of Rochester Working Papers in the Language

Sciences, 1 (2), 173-194Language Sciences, Vol.1, 2.

ANN, Jean, MYERS, James, and TSAY, Jane (2007): Lexical and articulatory

influences on the perception and production of words in Taiwan Sign

Language. Paper presented at the 12th International Conference on the

Processing of East Asia Related Languages. December 28-29, 2007.

National Cheng Kung University. Tainan, Taiwan.

ARISTOTLE (1932): Vol. 23, translated by W.H. Fyfe. Cambridge, MA: Harvard University Press.

ARISTOTLE (1984): Poetics. Trans. I. Bywater. In: Jonathan BARNES, ed. The Complete Works of Aristotle, Vol. 2. Princeton: Princeton University Press, 1457b.

ARISTOTLE (1996): Poetics. Translated by Stephen Halliwell; Longinus: On the Sublime; Demetrius: On Style (Loeb Classical Library No. 199), 1457b.

ATKINSON, J. et al. (2005): Testing comprehension abilities in users of British Sign Language following CVA. Brain and Language, 94, 233-248.

BATTERBURY, Sarah (2012): Language Policy, 11, 253-272.

BAUMAN, Dirksen (2008): Open your eyes: Deaf studies talking. University of Minnesota Press.

BAVELIER, D., BROZINSKY, C., TOMANN, A., MITCHELL, T., NEVILLE, H. and LIU, G. (2001): Impact of early deafness and early exposure to sign language on the cerebral organization for motion processing. Journal of Neuroscience, 21(22), 8931-8942.

BELLUGI, Ursula (1980): "Clues from the Similarities between Signed and Spoken Languages". In: U. BELLUGI and M. STUDDERT-KENNEDY, eds. Signed and Spoken Language: Biological Constraints on Linguistic Form. Weinheim and Deerfield Beach, FL: Verlag Chemie.

BELLUGI, Ursula and STUDDERT-KENNEDY, M., eds. (1980): Signed and Spoken Language: Biological Constraints on Linguistic Form. Weinheim and Deerfield Beach, FL: Verlag Chemie.

BELLUGI, Ursula and HICKOK, Gregory (1995): Clues to the neurobiology of language. In: BROADWELL, ed. Neuroscience, memory, and language: Decade of the brain, volume 1. Washington, DC: Library of Congress, 89-107.

BELLUGI, Ursula and KLIMA, Edward (1997b): Language, spatial cognition and the brain. In: ITO, MIYASHITA and ROLLS, eds. Cognition, computation and consciousness. Oxford: Oxford University Press, 177-189.

BELLUGI, Ursula (1991a): The link between hand and brain: Implications from a visual language. In: David MARTIN, ed. Advances in Cognition, Education and Deafness. Washington, DC: Gallaudet University Press, 11-35.

BELLUGI, Ursula (1991b): Language and cognition: What the hands reveal about the brain. Abstract, Society for Neuroscience, 17(2), 581.

BELLUGI, Ursula (1992): Language, spatial cognition and brain organization. In: The Neuronal Basis of Cognitive Function. New York, NY: Thieme Medical Publishers, 207-222.

BELLUGI, Ursula (1993): Language, spatial cognition, and neuronal plasticity. Proceedings of the American Psychiatric Association.

BELLUGI, Ursula (2001): Sign Language. In: Neil SMELSER and Paul BALTES, eds. International Encyclopedia of the Social and Behavioral Sciences, Vol. 21. Oxford, United Kingdom: Elsevier Science Publishers, 14066-14071.

BELLUGI, Ursula, HICKOK, Gregory and KLIMA, Edward (1997a): Sign language aphasia: A window on the neural basis of language. Scientific American.

BELLUGI, Ursula, KLIMA, Edward and HICKOK, Gregory (2010): Brain organization: Clues from deaf signers with left or right hemisphere lesions. In: Luis CLARA, ed. Gesture and Word. Lisbon, Portugal.

BONTEMPO, Karen (2012): Interpreting by Design: A Study of Aptitude, Ability and Achievements in ASL Interpretation. Unpublished doctoral dissertation, Macquarie University, Australia.

BOSWORTH, R.G. and DOBKINS, K.R. (2002): Visual field asymmetries for motion processing in deaf and hearing signers. Brain and Cognition, 49(1), 170-181.

BOYES BRAEM, P. (1981): Features of the handshape in ASL. Doctoral dissertation, Department of Linguistics, University of California, Berkeley.

BOVE, M.G. and VOLTERRA, Virginia, eds. (1984): La Lingua Italiana dei Segni: Insegnamento ed Interpretariato. Roma: Regione Lazio.

BOWKER, Lynne (2001): Towards a methodology for a corpus-based approach to translation evaluation. Meta, 46(2), 345-364.

BRAUN, A.R. et al. (2001): The neural organization of discourse: an H₂¹⁵O-PET study of narrative production in English and American Sign Language. Brain, 124, 2028-2044.

BRENTARI, Diane (2010): Sign Languages. Cambridge: Cambridge University Press.

BROWN, Douglas and ABEYWICKRAMA, Pryanvada (2010): Language assessment: principles and classroom practices (2nd ed.). White Plains, NY: Longman.

BUCCINO, G., BINKOFSKI, F., FINK, G.R., FADIGA, L., FOGASSI, L., GALLESE, V. et al. (2001): Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study. European Journal of Neuroscience, 13, 400-404.

BÜHLER, Hildegund (1986): Linguistic (semantic) and extra-linguistic (pragmatic) criteria for the evaluation of conference interpretation and interpreters. Multilingua, 5(4), 231-235.

CAMERACANNA, Emanuela and FRANCHI, Maria Luisa (1997a): Considerazioni sull'interpretariato al termine di un corso per interpreti di LIS. In: Maria Cristina CASELLI and Serena CORAZZA, eds. Studi, esperienze e ricerche sulla lingua dei Segni in Italia. Atti del 1° Convegno Nazionale sulla Lingua dei Segni. Trieste, 13-15 ottobre 1995, 281-285.

CAMERACANNA, Emanuela and FRANCHI, Maria Luisa (1997b): Difficoltà di traduzione in contesti diversi. In: Maria Cristina CASELLI and Serena CORAZZA, eds. Studi, esperienze e ricerche sulla lingua dei Segni in Italia. Atti del 1° Convegno Nazionale sulla Lingua dei Segni. Trieste, 13-15 ottobre 1995, 228-232.

CANLAS, Loida (2006): "Laurent Clerc: Apostle to the Deaf People of the New World." The Laurent Clerc National Deaf Education Center, Gallaudet University.

CAPEK, C.M. et al. (2008): Hand and mouth: cortical correlates of lexical processing in British Sign Language and speechreading English. Journal of Cognitive Neuroscience, 20, 1220-1234.

CARLI, Francesca, FOLCHI, Anna and ZANCHETTI, Rosanna (2000): Processi di Interpretazione dei Proverbi. In: Caterina BAGNARA, CHIAPPINI, CONTE and OTT, eds. Viaggio nella Città Invisibile. Atti del 2° Convegno Nazionale sulla LIS. Genova, 25-27 settembre 1998. Pisa: Edizioni del Cerro, 67-71.

CHAN, Marjorie and WANG, Xu (2009): Modality Effects Revisited: Iconicity in Chinese Sign Language. In: James TAI and Jane TSAY, eds. Taiwan Sign Language and Beyond. Taiwan Institute of Humanities, National Chung Cheng University.

CHANG, Jung-hsing, SU, Shiou-fen and TAI, James (2005): Classifier Predicates Reanalyzed, with Special Reference to Taiwan Sign Language. In: James MYERS and James TAI, eds. Taiwan Sign Language. Special Issue of Language and Linguistics, 6(2), 247-278.

CHANG, Jung-hsing (2009): 語言類型差異與聽障生語言教學之關聯 (The relation of typological differences between Taiwan Sign Language and Chinese to language teaching of deaf students). 教育資料與研究, 90, 53-76.

CHANG, Jung-hsing and KE, Xiu-ling (2009): 漢語對於臺灣手語地名造詞的影響 (The influence of Chinese on the formation of place names in Taiwan Sign Language). In: James TAI and Jane TSAY, eds. Taiwan Sign Language and Beyond. Taiwan Institute of Humanities, National Chung Cheng University.

CHAO, Yuping, ed. (1997a): Shouyu Da Shi, Vol. 1. Taipei: Xiandai Jingdian Wenhua.

CHAO, Yuping, ed. (1997b): Shouyu Da Shi, Vol. 2. Taipei: Xiandai Jingdian Wenhua.

CHAO, Yuping, ed. (1997c): Shouyu Da Shi, Vol. 3. Taipei: Xiandai Jingdian Wenhua.

CHAO, Yuping, ed. (1999): Shouyu Da Shi, Vol. 4. Taipei: Xiandai Jingdian Wenhua.

CHEE, M.W.L., SOON, C.S. and LEE, H.L. (2003): Common and segregated neuronal networks for different languages revealed using functional magnetic resonance adaptation. Journal of Cognitive Neuroscience, 15(1), 85-97.

CHEN, J. (2001): Zhongguo Wenhua Xiucixue (Chinese Rhetoric Culture). Nanjing: Jiangsu Guji Chubanshe.

CHEN, Yijun and TAI, James (2009a): Lexical Variation and Change in Taiwan Sign Language. In: James TAI and Jane TSAY, eds. Taiwan Sign Language and Beyond. Taiwan Institute of Humanities, National Chung Cheng University.

CHEN, Yijun and TAI, James (2009b): Lexical variation and change in Taiwan Sign Language. In: James TAI and Jane TSAY, eds. Taiwan Sign Language and Beyond. Chia-Yi, Taiwan: The Taiwan Institute for the Humanities, National Chung Cheng University, 131-148.

CHIARO, Delia and NOCELLA, Giuseppe (2004): Interpreters' perception of linguistic and non-linguistic factors affecting quality: A survey through the World Wide Web. Meta, 49(2), 279-293.

CHIU, Yi-Hsuan, HSIEH, Jen-Chuen, KUO, Wen-Jui, HUNG, Daisy and TZENG, Ovid (2005): Vision- and Manipulation-based Signs in Taiwan Sign Language. In: James MYERS and James TAI, eds. Taiwan Sign Language. Special Issue of Language and Linguistics, 6(1).

CHIU, Yi-Hsuan (2006): The Role of Iconicity in Sign Language Processing: Evidence from Taiwan Sign Language. Unpublished dissertation.

COHEN, Andrew (1994): Assessing Language Ability in the Classroom. Heinle and Heinle.

COKELY, Dennis (2003): Interpretazione: un modello sociolinguistico. Roma: Edizioni Kappa.

COKELY, Dennis (2005): Shifting Positionality: A Critical Examination of the Turning Point in the Relationship of Interpreters and the Deaf Community. In: Marc MARSCHARK, Rico PETERSON and Elizabeth WINSTON, eds. Sign Language Interpreting and Interpreter Education. Directions for Research and Practice. Oxford University Press.

COLINA, Sonia (2009): Further evidence for a functional approach to translation quality evaluation. Target, 21(2), 235-264.

COLLADOS AIS, Angela (1998): La evaluación de la calidad en interpretación simultánea. La importancia de la comunicación no verbal. Granada: Editorial Comares.

COLSTON, H.L. (1995): Actions speak louder than words: Understanding figurative proverbs. Unpublished doctoral dissertation, University of California, Santa Cruz.

COOPER, J.M. and HUTCHINSON, D.S. (1997): Plato: Complete Works. Indianapolis, IN: Hackett.

CORBALLIS, M.C. (2002): Did language evolve from manual gestures? In: A. WRAY, ed. The Transition to Language. Oxford: Oxford University Press, 161-179.

CORINA, D.P., POIZNER, H., BELLUGI, U., FEINBERG, T., DOWD, D. and O'GRADY-BATCH, L. (1992): Dissociation between Linguistic and Nonlinguistic Gestural Systems: a Case for Compositionality. Brain and Language, 43(3), 414-447.

CORINA, D.P. (1998): Aphasia in users of signed languages. In: P. COPPENS et al., eds. Aphasia in atypical populations. Erlbaum, 261-309.

CORINA, D.P., MCBURNEY, S.L., DODRILL, C., HINSHAW, K., BRINKLEY, J. and OJEMANN, G. (1999): Functional roles of Broca's area and SMG: evidence from cortical stimulation mapping in a deaf signer. Neuroimage, 10(5), 570-581.

CORINA, D.P. et al. (2003): Language lateralization in a bimanual language. Journal of Cognitive Neuroscience, 15, 718-730.

CORINA, D. et al. (2007): Neural correlates of human action observation in hearing and deaf subjects. Brain Research, 1152, 111-129.

CUI, X.-L. (1997): Hanyu shouyu yu zhongguo renwen shijie (Chinese Set Phrases and the World of Chinese Culture). Beijing: Beijing Language and Culture University.

DARÒ, V. and FABBRO, F. (1994): Verbal memory during simultaneous interpretation: Effects of phonological interference. Applied Linguistics, 15(1), 365-381.

DAVIS, Jeffrey (2005): Code Choices and Consequences: Implications for Educational Interpreting. In: Marc MARSCHARK, Rico PETERSON and Elizabeth WINSTON, eds. Sign Language Interpreting and Interpreter Education. Directions for Research and Practice. Oxford University Press.

DEJEAN LE FEAL, Karla (1990): Some Thoughts on the Evaluation of Simultaneous Interpretation. In: David BOWEN and Margareta BOWEN, eds. Interpreting, Yesterday, Today and Tomorrow. Binghamton, NY: SUNY, 154-160.

DEL VECCHIO, Silvia and FRANCHI, Maria Luisa (1997): Strategie di traduzione durante l'esposizione di materiale visivo. In: Maria Cristina CASELLI and Serena CORAZZA, eds. Studi, esperienze e ricerche sulla lingua dei Segni in Italia. Atti del 1° Convegno Nazionale sulla Lingua dei Segni. Trieste, 13-15 ottobre 1995, 276-280.

DENZIN, Norman K. and LINCOLN, Yvonna S., eds. (2005): The Sage Handbook of Qualitative Research (3rd ed.). Thousand Oaks, CA: Sage.

ANDRES, Dörte (2000): Konsekutivdolmetschen und Notizen. Empirische Untersuchung mentaler Prozesse bei Anfängern in der Dolmetscherausbildung und professionellen Dolmetschern. Unpublished thesis. Vienna: University of Vienna.

DUNCAN, Susan (2005): Gesture in Signing: A Case Study from Taiwan Sign Language. In: James MYERS and James TAI, eds. Taiwan Sign Language. Special Issue of Language and Linguistics, 6(1).

ELMER, S., MEYER, M. and JANCKE, L. (2010): Simultaneous interpreters as a model for neural adaptation in the domain of language processing. Brain Research, 1317, 147-156.

EMMOREY, K. (2002): Language, cognition and the brain: Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum Associates.

EMMOREY, K., DAMASIO, H., MCCULLOUGH, S., GRABOWSKI, T., PONTO, L.L., HICHWA, R.D. and BELLUGI, U. (2002): Neural systems underlying spatial language in American Sign Language. Neuroimage, 17(2), 812-824.

EMMOREY, K., ALLEN, J.S., BRUSS, J., SCHENKER, N. and DAMASIO, H. (2003): A morphometric analysis of auditory brain regions in congenitally deaf adults. Proceedings of the National Academy of Sciences USA, 100(17), 10049-10054.

EMMOREY, K., BORINSTEIN, H.B. and THOMPSON, R.L. (2004): Bimodal bilingualism: Code-blending between English and American Sign Language. In: J. COHEN et al., eds. Proceedings of the Fourth International Symposium on Bilingualism. Somerville, MA: Cascadilla Press.

EMMOREY, K. et al. (2005): The neural correlates of spatial language in English and American Sign Language: a PET study with hearing bilinguals. Neuroimage, 24, 832-840.

EMMOREY, K. et al. (2007): The neural correlates of sign versus word production. Neuroimage, 36, 202-208.

EMMOREY, K. and MCCULLOUGH, Stephen (2009): The bimodal bilingual brain: effects of sign language experience. Brain and Language, 109(2-3), 124-132.

EVERHART, V.S. and MARSCHARK, M. (1988): Linguistic flexibility in signed and written language productions of Deaf children. Journal of Experimental Child Psychology, 46, 174-193.

FABBRO, F. and GRAN, L. (1997): "Neurolinguistic Research in Simultaneous Interpretation". In: Y. GAMBIER et al., eds. Conference Interpreting: Current Trends in Research. Proceedings of the International Conference on Interpreting: What do we know and how? John Benjamins Publishing Company.

FABBRO, F. and PARADIS, M. (1995): Differential impairments in four multilingual patients with subcortical lesions. In: M. PARADIS, ed. Aspects of bilingual aphasia. Oxford: Pergamon Press, 139-176.

FADIGA, L., FOGASSI, L., PAVESI, G. and RIZZOLATTI, G. (1995): Motor facilitation during action observation: a magnetic stimulation study. Journal of Neurophysiology, 73, 2608-2611.

FERGUSON, Charles (1959): "Diglossia". Word, 15, 325-340.

FISCHER, Susan et al. (2010): Variation in East Asian Sign Language Structures. Sign Languages, 501.

FORESTAL, Eileen (2005): The Emerging Professionals: Deaf Interpreters and Their Views and Experiences on Training. In: Marc MARSCHARK, Rico PETERSON and Elizabeth WINSTON, eds. Sign Language Interpreting and Interpreter Education. Directions for Research and Practice. Oxford University Press.

FRANCHI, Maria Luisa (1992): Il ruolo dell'interpretariato nella scuola e nella società. In: Atti del Convegno Internazionale Aspetti Sociali della Sordità. Roma: Univ. di Roma La Sapienza - Ist. di Clinica O.R.L., e in corso di stampa in Sbalordire e il Sordoudente.

FRANCHI, Maria Luisa (1993): Il ruolo dell'interpretariato nella scuola e nella società. Voci Silenzi Pensieri, a.6, n. 11, 9-12.

FRAUENFELDER, U. and SCHRIEFERS, H. (1997): "A psycholinguistic perspective on Simultaneous Interpretation." Interpreting, 2(1-2), 55-89.

FRIEDMAN, L., ed. (1977): On the other hand: New Perspectives on American Sign Language. New York: Academic Press.

FRISHBERG, N. (1986): Interpreting: An Introduction. Silver Spring, MD: Registry of Interpreters for the Deaf.

GALLAUDET, Edward Miner (1888): Life of Thomas Hopkins Gallaudet – Founder of Deaf-Mute Instruction in America. For information about Laurent Clerc, see pp. 92 and following. (Download book: http://saveourdeafschools.org/life_of_thomas_hopkins_gallaudet.pdf)

GALLESE, V., FADIGA, L., FOGASSI, L. and RIZZOLATTI, G. (1996): Action recognition in the premotor cortex. Brain, 119(2), 593-609.

GAO, G.-L., ed. (1987): Fenlei shiyong chengyu cidian (Classification Dictionary of Practical Chengyu). Jilin: Jilin Wenshi Publishing House.

GASTALDI, G. (1951): Osservazioni su un afasico bilingue. Sistema Nervoso, 2, 175-180.

GIBBS, R.W. and BEITEL, D. (1995): What proverb understanding reveals about how people think. Psychological Bulletin, 118, 133-154.

GIBBS, R.W. (1994): The poetics of mind. Cambridge, England: Cambridge University Press.

GIBBS, R.W. (1996): Metaphor as a Constraint on Text Understanding. In: B.K. BRITTON and A.C. GRAESSER, eds. Models of understanding text. Mahwah, NJ: Lawrence Erlbaum Associates, 215-240.

GIBBS, R.W., COLSTON, H.L. and JOHNSON, M.D. (1996): Proverbs and the metaphorical mind. Metaphor and Symbolic Activity, 11, 207-216.

GILE, Daniel (1989): La communication linguistique en réunion multilingue: Les difficultés de la transmission informationnelle en interprétation simultanée. Unpublished doctoral dissertation, Université Paris III.

GILE, Daniel (1990): L'évaluation de la qualité de l'interprétation par les délégués: une étude de cas. The Interpreters' Newsletter, 3, 66-71.

GILE, Daniel (1995): Basic Concepts and Models for Interpreter and Translator Training. Amsterdam/Philadelphia: John Benjamins.

GILE, Daniel (2000): Issues in interdisciplinary research into conference interpreting. In: Birgitta ENGLUND DIMITROVA and Kenneth HYLTENSTAM, eds. Language Processing and Simultaneous Interpreting: Interdisciplinary Perspectives.

GRAN, Laura and BIDOLI, Cynthia Kellet (2000): L'interpretazione nelle lingue dei segni: aspetti teorici e pratici della formazione. Trieste: Edizioni Università degli Studi di Trieste.

GREEN, D.W. (1986): Control, activation, and resource: a framework and a model for the control of speech in bilinguals. Brain and Language, 27, 210-223.

GREZES, J., ARMONY, J.L., ROWE, J. and PASSINGHAM, R.E. (2003): Activations related to "mirror" and "canonical" neurones in the human brain: an fMRI study. Neuroimage, 18, 928-937.

GÜREL, A. (2004): Selectivity in L2-induced L1 attrition: a psycholinguistic account. Journal of Neurolinguistics, 17, 53-87.

HALL, Edward (1966): The Hidden Dimension. Anchor Books.

HARRIS, Roy (1988): Language, Saussure and Wittgenstein. Routledge.

HICKOK, G. et al. (1996): The neurobiology of sign language and its implications for the neural basis of language. Nature, 381, 699-702.

HICKOK, G. et al. (1998): The neural organization of language: evidence from sign language aphasia. Trends in Cognitive Sciences, 2, 129-136.

HOUSE, Juliane (2001): Translation Quality Assessment: Linguistic Description versus Social Evaluation. Meta, 46(2), 243-257.

HU, Y.-S. (1992): Xiandai Hanyu (Modern Chinese). Taipei: Xin Wenfeng Publishing House.

HUANG, L.-L. (1982): "Dangdai changyong sizi chengyu yanjiu" (A Study on Modern Frequently-used Four-character Chengyu). Academic Journal of Taiwan Sport University, (3), 148.

HUTESON, Greg (2003): Report on Social, Educational, and Sociolinguistic Issues that Impact the Deaf and Hard of Hearing Population of Taiwan. SIL International.

JAKOBSON, R. (1971): Selected Writings, vol. 2: Word and Language. Stephen RUDY, ed. Berlin and New York: Mouton de Gruyter.

KLEIN, D., ZATORRE, R.J., MILNER, B., MEYER, E. and EVANS, A.C. (1995): The neural substrates of bilingual language processing: evidence from positron emission tomography. In: M. PARADIS, ed. Aspects of Bilingual Aphasia. Oxford, UK: Pergamon Press, 23-36.

KÖPKE, B. (2002): Activation thresholds and non-pathological first language attrition. In: F. FABBRO, ed. Advances in the neurolinguistics of bilingualism. Udine: Forum, 119-142.

INMAN, P.R. and LIAN, M.J. (1991): Conservation and metaphor performance among children with hearing impairments. Journal of the American Deafness and Rehabilitation Association, 25, 28-41.

IRAN-NEJAD, A. and ORTONY, A. (1981): The comprehension of metaphorical uses of English by Deaf children. Journal of Speech and Hearing Research, 24, 551-556.

ITTYERAH, M. and MITRA, D. (1988): Synesthetic perception in the sensorily deprived. Psychological Studies, 33, 110-115.

JANZEN, T. and SHAFFER, B. (2002): Gesture as the substrate in the process of ASL grammaticization. In: R. MEIER et al., eds. Modality and Structure in Signed and Spoken Language. Cambridge University Press, 199-223.

ANN, Jean (2005): A Functional Explanation of Taiwan Sign Language Handshape Frequency. In: James MYERS and James TAI, eds. Taiwan Sign Language. Special Issue of Language and Linguistics, 6(1).

JENKINS, R. and BOWEN, L. (1994): Facilitating development of preliterate children's phonological abilities. Topics in Language Disorders, 14(2), 26-39.

JOHNSON, M. (1987): The body in the mind: The bodily basis of meaning, imagination, and reason. Chicago: University of Chicago Press.

JOHNSTON, Trevor A. (1989): Auslan: The Sign Language of the Australian Deaf Community. Unpublished Ph.D. dissertation, The University of Sydney.

KASSUBEK, J., HICKOK, G. and ERHARD, P. (2004): Involvement of classical anterior and posterior language areas in sign language production, as investigated by 4 T functional magnetic resonance imaging. Neuroscience Letters, 364(3), 168-172.

KEGL, J., SENGHAS, A. and COPPOLA, M. (1998): Creation through Contact: Sign language emergence and sign language change in Nicaragua. In: M. DEGRAFF, ed. Language Creation and Change: Creolization, Diachrony and Development. Cambridge, MA: MIT Press.

KIMURA, Doreen (1993): Neuromotor Mechanisms in Human Communication. Oxford: Oxford University Press.

KISOR, Henry (1991): What's That Pig Outdoors?: A Memoir of Deafness. Large Print.

KLIMA, Edward S. and BELLUGI, Ursula (1979): The signs of language. Cambridge, MA: Harvard University Press.

KLIMA, Edward and BELLUGI, Ursula (1988): The Signs of Language (Paperback ed.). Cambridge, MA: Harvard University Press.

KLUWIN, T. (1981): The grammaticality of manual representation of English in classroom settings. American Annals of the Deaf, 126, 417-421.

KNIGHT, C. (1998): Ritual/speech coevolution: a solution to the problem of deception. In: J.R. HURFORD, M. STUDDERT-KENNEDY and C. KNIGHT, eds. Approaches to the Evolution of Language: Social and cognitive bases. Cambridge: Cambridge University Press, 68-91.

KNIGHT, C. (2000): Play as precursor of phonology and syntax. In: C. KNIGHT, M. STUDDERT-KENNEDY and J.R. HURFORD, eds. The Evolutionary Emergence of Language: Social function and the origins of linguistic form. Cambridge: Cambridge University Press, 99-119.

KNIGHT, C. (2008): Language co-evolved with the rule of law. Mind and Society, 7(1), 109-128.

KNIGHT, C. (2008b): 'Honest fakes' and language origins. Journal of Consciousness Studies, 15(10-11), 236-248.

KOHLER, E., KEYSERS, C., UMILTA, M.A., FOGASSI, L., GALLESE, V. and RIZZOLATTI, G. (2002): Hearing sounds, understanding actions: action representation in mirror neurons. Science, 297, 846-848.

KOLB, Bryan and WHISHAW, Ian Q. (2003): Fundamentals of Human Neuropsychology (5th ed.). Worth Publishers.

KOPCZYNSKI, Adrian (1994): Quality in Conference Interpreting: some pragmatic problems. In: Mary SNELL-HORNBY, Franz PÖCHHACKER and Klaus KAINDL, eds. Translation Studies: an Interdiscipline. Amsterdam and Philadelphia: John Benjamins, 189-198.

KOTLER, Philip and ARMSTRONG, Gary (1994): Principles of Marketing (6th ed.). Englewood Cliffs, NJ: Prentice-Hall.

KOVELMAN, I., SHALINSKY, WHITE, S., SCHMITT, M.S., BERENS, PAYMER and PETITTO, L.A. (2009): Dual language use in sign-speech bimodal bilinguals: fNIRS brain-imaging evidence. Brain and Language, 109(2-3), 112-123.

KRAMER, A. and BUCK, L.A. (1976): Poetic creativity in deaf children. American Annals of the Deaf, 121, 31-36.

KURZ, Ingrid (1989): Conference Interpreting: User Expectations. In: HAMMOND, ed. Coming of Age. Proceedings of the 30th Annual Conference of the American Translators Association. Medford, NJ: Learned Information, 143-148.

KURZ, Ingrid (1993): Conference Interpretation: Expectations of Different User Groups. The Interpreters' Newsletter, 5, 13-21.

KURZ, Ingrid (1994): What Do Different User Groups Expect from a Conference Interpreter? The Jerome Quarterly, 9(2), 3-7.

KURZ, Ingrid (1996): Simultandolmetschen als Gegenstand der interdisziplinären Forschung. Vienna: WUV-Universitätsverlag.

KURZ, Ingrid (2001): Conference Interpreting: Quality in the Ear of the Users. Meta, 46(2), 394-409.

LAKOFF, G. (1987): Women, fire, and dangerous things: What categories reveal about the mind. Chicago: University of Chicago Press.

LAKOFF, G. and JOHNSON, M. (1980): Metaphors we live by. Chicago: University of Chicago Press.

LEE, Hsin-Hsien, TSAY, Jane and MYERS, James (2001): Handshape Articulation in Taiwan Sign Language and Signed Chinese. Paper presented at the Conference on Sign Linguistics, Deaf Education and Deaf Culture in Asia, Chinese University of Hong Kong.

LEE, Robert (2005): The Research Gap: Getting Linguistic Information into the Right Hands – Implications for Deaf Education and Interpreting. In: Marc MARSCHARK, Rico PETERSON and Elizabeth WINSTON, eds. Sign Language Interpreting and Interpreter Education. Directions for Research and Practice. Oxford University Press.

LI, D. (2006): Making translation testing more teaching oriented: a case study of translation testing in China. Meta, 46(2), 311-325.

MACK, Gabriele and CATTARUZZA, Lorella (1995): User Surveys in Simultaneous Interpretation: A Means of Learning About Quality and/or Raising Some Reasonable Doubts. In: TOMMOLA, ed. Topics in Interpreting Research. Turku: University of Turku, 51-68.

MAC CORMAC, E.R. (1995): A cognitive theory of metaphor. Cambridge, MA: MIT Press.

MACSWEENEY, M. et al. (2002): Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain, 125, 1583-1593.

MACSWEENEY, M. et al. (2002b): Neural correlates of British Sign Language comprehension: spatial processing demands of topographic language. Journal of Cognitive Neuroscience, 14, 1064-1075.

MACSWEENEY, M. et al. (2004): Dissociating linguistic and nonlinguistic gestural communication in the brain. Neuroimage, 22, 1605-1618.

MACSWEENEY, M. et al. (2006): Lexical and sentential processing in British Sign Language. Human Brain Mapping, 27, 63-76.

MACSWEENEY, M., CAPEK, C.M., CAMPBELL, R. and WOLL, B. (2008): "The Signing Brain: the Neurobiology of Sign Language". Trends in Cognitive Sciences, 12(11), 432-440.

MACSWEENEY, M. et al. (2008b): Phonological processing in deaf signers and the impact of age of first language acquisition. Neuroimage, 40, 1369-1379.

MCCULLOUGH, S. et al. (2005): Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners. Cognitive Brain Research, 22, 193-203.

MCGURK, H. and MACDONALD, J. (1976): "Hearing lips and seeing voices". Nature, 264(5588), 746-748.

MCGUIRE, P.K. et al. (1997): Neural correlates of thinking in sign language. Neuroreport, 8, 695-698.

MAEDA, F., KLEINER-FISMAN, G. and PASCUAL-LEONE, A. (2002): Motor facilitation while observing hand actions: specificity of the effect and role of observer's orientation. Journal of Neurophysiology, 87, 1329-1335.

MANDEL, M. (1977): Iconic Devices in American Sign Language. In: Lynn A. FRIEDMAN, ed. On The Other Hand. London: Academic Press, 57-107.

MARMOR, G. and PETITTO, L. (1979): Simultaneous communication in the classroom: How well is English grammar represented? Sign Language Studies, 23, 99-136.

MARRONE, Stefano (1993): Quality: A Shared Objective. The Interpreters' Newsletter, 5, 35-41.

MARSHALL, J. et al. (2004): Aphasia in a user of British Sign Language: dissociation between sign and gesture. Cognitive Neuropsychology, 21, 537-554.

MARSCHARK, Marc, PETERSON, Rico and WINSTON, Elizabeth, eds. (2005a): Sign Language Interpreting and Interpreter Education. Directions for Research and Practice. Oxford University Press.

MARSCHARK, Marc, SAPERE, Patricia, CONVERTINO, Carol and SEEWAGEN, Rosemarie (2005b): Educational Interpreting: Access and Outcomes. In: Marc MARSCHARK, Rico PETERSON and Elizabeth WINSTON, eds. Sign Language Interpreting and Interpreter Education. Directions for Research and Practice. Oxford University Press.

MARSCHARK, M. and WEST, S.A. (1985): Creative language abilities of Deaf children. Journal of Speech and Hearing Research, 38, 73-78.

MARSCHARK, M., EVERHART, V.S. and DEMPSEY, P.R. (1991): Nonliteral content in language productions of Deaf, hearing, and native-signing hearing mothers. Merrill-Palmer Quarterly, 37, 305-323.

MARSCHARK, M. (2005c): Metaphors in sign language and sign language users: A window into relations of language and thought. In: Herbert L. COLSTON and Albert N. KATZ, eds. Figurative language comprehension: Social and cultural influences. Mahwah, NJ: Lawrence Erlbaum Associates, 309-334.

MARSCHARK, M., WEST, S.A., NALL, L. and EVERHART, V.S. (1986): Development of creative language devices in sign and oral production. Journal of Experimental Child Psychology, 41, 534-550.

MAYBERRY, R.I. et al. (2002): Linguistic ability and early language exposure. Nature, 417, 38.

MAYBERRY, R.I. and LOCK, E. (2003): Age constraints on first versus second language acquisition: evidence for linguistic plasticity and epigenesis. Brain and Language, 87, 369-383.

MAYBERRY, R.I. (2007): When timing is everything: age of first-language acquisition effects on second-language learning. Applied Psycholinguistics, 28, 537-549.

MEAK, Lidia (1990): Interprétation simultanée et congrès médical: attentes et commentaires. The Interpreters' Newsletter, 3, 8-13.

MEYER, M. et al. (2007): Neuroplasticity of sign language: implications from structural and functional brain imaging. Restorative Neurology and Neuroscience, 25, 335-351.

MITCHELL, S. (2004): Gilgamesh: A New English Version. New York: Free Press.

MONIKOWSKI, Christine and PETERSON, Rico (2005): Service Learning in Interpreting Education: Living and Learning. In: Marc MARSCHARK, Rico PETERSON and Elizabeth WINSTON, eds. Sign Language Interpreting and Interpreter Education. Directions for Research and Practice. Oxford University Press.

MORATTO, Riccardo (2010): Chinese to Italian Interpreting of Chengyu. Intralinea, Vol. 12.

MORATTO, Riccardo and CHEN, Sheng-Jie (2011a): The Bologna University Model of Conference Interpreter Training: a Cross-cultural Perspective. In: Proceedings of the 2010 International Conference on Cross-Cultural Studies (2010 年跨文化研究國際學術研討會).

MORATTO, Riccardo (2011b): Theory and reflection: a tentative exploration into the application of Nord's concept of adequacy in trainee interpreters' (TI) performance assessment. Studies of Translation and Interpretation, Vol. 14, 93-112.

MORENO, E.M., FEDERMEIER, K.D. and KUTAS, M. (2002): Switching languages, switching palabras (words): an electrophysiological study of code-switching. Brain and Language, 80(2), 188-207.

MOSER, Peter (1995): Simultanes Konferenzdolmetschen. Anforderungen und Erwartungen der Benutzer. Endbericht im Auftrag von AIIC. Vienna: SRZ Stadt und Regionalforschung GmbH.

MOSER-MERCER, Barbara (1996): Quality in Interpreting: Some Methodological Issues. The Interpreters' Newsletter, 7, 43-55.

MYERS, James and TSAY, Jane (2006): The Relative Efficiency of Taiwan Sign

Language and (Signed) Chinese. Paper presented at the First International

Conference of Comparative Study of East Asian Sign Languages. September

16-17, 2006. Chung Cheng University, Chiayi, Taiwan.

MYERS, James, and TSAY, Jane (2004): The Morphology and Phonology of Taiwan

Sign Language. Paper presented at the Linguistics Society of Taiwan 2004

Tutorial Workshop. Taiwan Normal University, Taipei, Taiwan.

MYERS, James and TAI, James (2005): Taiwan Sign Language. Special Issue of

Language and Linguistics (6.1).

MYERS, James, LEE, Hsin-hsien and TSAY, Jane (2005): Phonological Production in

Taiwan Sign Language. In: James MYERS and James TAI, eds. Taiwan Sign

Language. Special Issue of Language and Linguistics (6.1).

NAPIER, Jemina (2005): Linguistic Features and Strategies of Interpreting: From

Research to Education to Practice. In: Marc MARSCHARK, Rico PETERSON

and Elizabeth WINSTON, eds. Sign Language Interpreting and Interpreter

Education. Directions for Research and Practice. Oxford University Press.

NATH, A.R. and BEAUCHAMP, M.S. (2011): A neural basis for interindividual

differences in the McGurk effect, a multisensory speech illusion. NeuroImage,

59(1), 781-787.

NEVILLE, H. J., and BELLUGI, U. (1978): “Patterns of cerebral Specialization in

Congenitally Deaf Adults: a Preliminary Report”. In Understanding Language

through Sign Language Research, ed. Patricia Siple. New York: Academic

Press.

NEVILLE, H.J. (1988): “Cerebral Organization for Spatial Attention”. In Spatial

Cognition: Brain Bases and Development, ed. J. Stiles-Davis, M. Kritchevsky,

and U. Bellugi. Hillsdale, N.J., Hove, and London: Lawrence Erlbaum.

NEVILLE, H.J. (1989): “Neurobiology of Cognitive and Language Processing: Effects

of Early Experience”. In Brain Maturation and Behavioral Development, ed. K.

Gibson and A.C. Petersen. Hawthorne, N.Y.: Aldine de Gruyter.

NEVILLE, H.J. et al. (1998): Cerebral organization for language in deaf and hearing

subjects: biological constraints and effects of experience. Proc. Natl. Acad.

Sci. U. S. A. 95, 922–929.

NEVILLE, H.J. and LAWSON, D. (1987): Attention to central and peripheral visual space

in a movement detection task: An event-related potential and behavioral

study. II. Congenitally deaf adults. Brain Research, 405(2), 268-283.

NEWMAN, A. J., et al. (2002): "A Critical Period for Right Hemisphere Recruitment in

American Sign Language Processing". Nature Neuroscience 5 (1): 76–80.

NG, Bee Chin (1992): End Users’ Subjective Reaction to the Performance of Student

Interpreters. The Interpreters’ Newsletter, Special Issue I, 35-41.

NORD, Christine (1989): Loyalitaet statt Treue. Lebende Sprachen 34(3): 100-105.

NORD, Christine (1997): Translating as a Purposeful Activity: Functionalist

Approaches Explained. Shanghai Foreign Language University Press.

O’BRIEN, J. (1990): Metaphoricity in the Signs of American Sign Language.

Metaphor and Symbol, 14(3), 159-177.

O’BRIEN, J. (1993): Metaphorical knowledge in understanding caused motion

expressions. Unpublished doctoral dissertation, University of California,

Santa Cruz.

OSTRANDER, C. (1998): "The Relationship Between Phonological Coding And

Reading Achievement In Deaf Children: Is Cued Speech A Special Case?"

http://web.syr.edu/~clostran/literacy.html (accessed on August 2012)

PABLO BONET, J. de (1620): Reduction de las letras y Arte para enseñar á ablar

los Mudos. Ed. Abarca de Angulo, Madrid.

PARADIS, M. (1985): On the representation of two languages in one brain.

Language Sciences, 7:1-39.

PARADIS, M. (1993): Linguistic, psycholinguistic, and neurolinguistic aspects of

"interference" in bilingual speakers: the Activation Threshold

Hypothesis. International Journal of Psycholinguistics. 9: 133-145.

PARADIS, M. (1997): The cognitive neuropsychology of bilingualism. In de Groot,

A. M. B. and Kroll, J. F. (Eds.), Tutorials In Bilingualism: Psycholinguistic

Perspectives. (pp. 331-354). Mahwah, NJ: Lawrence Erlbaum Publishers.

PETITTO, L. A., ZATORRE, R. J., GAUNA, K., NIKELSKI, E. J., DOSTIE, D.

and EVANS, A. C. (2000): Speech-like cerebral activity in profoundly deaf

people processing signed languages: implications for the neural basis of

human language. P. Natl. Acad. Sci. USA., 97(25), 13961-6.

PFAFF, K.L., GIBBS, R.W., and JOHNSON, M.D. (1997): Metaphors in using and

understanding euphemism and dysphemism. Applied Psycholinguistics, 18,

59-83.

PFAU, Roland, STEINBACH, Markus, WOLL, Bencie, eds. (2011): Sign language: An

international handbook. Berlin and New York: Mouton de Gruyter.

POIZNER, H., KLIMA, E.S., and BELLUGI, U. (1987): What the hands reveal about

the brain, MIT Press.

PREMACK, David and PREMACK, Ann James (2000): The Mind of an Ape. W. W.

Norton and Company.

PRICE, C.J., GREEN, D.W., VON STUDNITZ, R. (1999): A functional imaging study

of translation and language switching. Brain. 122: 2221-35.

PROVERBIO, A.M., LEONI, G., ZANI, A. (2004): Language switching mechanisms in

simultaneous interpreters: an ERP study. Neuropsychologia, 42(12),

1636-1656.

PROVERBIO, A.M., ADORNI, R., ZANI, A. (2008): Inferring native language from

early bio-electrical activity. In: Biological Psychology, Volume 80, Issue 1,

52-63.

QUINTO-POZOS, David (2005): Factors that Influence the Acquisition of ASL for

Interpreting Students. In: Marc MARSCHARK, Rico PETERSON and

Elizabeth WINSTON, eds. Sign Language Interpreting and Interpreter

Education. Directions for Research and Practice. Oxford University Press.

REISS, Katharina (1979): Translation Criticism: the potentials and limitations.

Manchester: St. Jerome. Translated in 2000.

RINNE, J.O., TOMMOLA, J., LAINE, M., KRAUSE, B.J., SCHMIDT, D.,

KAASINEN, V., TERAS, M., SIPILA, H., SUNNARI, M. (2000): “The

translating brain: cerebral activation patterns during simultaneous interpreting.”

Neuroscience letters, Volume 294, Issue 2, 85-88.

RITTENHOUSE, R.K., and STEARNS, K. (1990): Figurative language and reading

comprehension in American deaf and hard-of-hearing children: Textual

interactions. British Journal of Disorders of Communication, 25, 360-374.

RITTENHOUSE, R.K., KENYON, P.L., and HEALY, S. (1994): Auditory

specialization in Deaf children. American Annals of the Deaf, 139, 80-85.

RIZZOLATTI, G. and ARBIB, M. A. (1998): Language within our grasp. Trends

Neurosci., 21, 188-194.

RUSSO, T. (1997): “Iconicità e metafora nella LIS”. In: Filosofia del Linguaggio.

Teoria e Storia. Unical, pp. 136-141.

SACKS, Oliver (1989): Seeing Voices. A Journey into the World of the Deaf. New York:

Vintage Books.

SAKAI, K.L. et al. (2005): Sign and speech: amodal commonality in left hemisphere

dominance for comprehension of sentences. Brain 128, 1407–1417.

SALA, Rita (2005): L’interprete di lingua dei segni: orecchio per i sordi e voce per gli

udenti. Unpublished Master Thesis. Padova: Università degli Studi di Padova.

SANDLER, Wendy and LILLO-MARTIN, Diane (2006): Sign Language and

Linguistic Universals. Cambridge: Cambridge University Press.

SASAKI, Daisuke (2007): Comparing the lexicons of Japanese Sign Language and

Taiwan Sign Language: a preliminary study focusing on the difference in the

handshape parameter. In: David Quinto-Pozos, ed. Sign Language in Contact:

Sociolinguistics in Deaf Communities. Washington, D.C.: Gallaudet University

Press.

SCHIRMER, Barbara R. (2000): Language and Literacy Development in Children who

are Deaf. 2nd ed. Boston: Allyn and Bacon.

SCHWARTZ, Sue (1996): Choices in Deafness: A Parent's Guide to Communication

Options. 2nd ed. MD: Woodbine House.

SELESKOVITCH, D. (1968): L’interprète dans les conférences internationales.

Problèmes de langage et de communication. Paris: Minard.

SELESKOVITCH, Danica (1986): Who Should Assess an Interpreter’s Performance?

Multilingua, 5-4, p. 236.

SHIH, Wen-han and TING, Li-fen, eds. (1999): Shou Neng Sheng Ch'iao Vol. I (13th

ed.). Taipei: National Association of the Deaf in the Republic of China.

SIPLE, P. (Ed.). (1978): Understanding language through sign language research.

New York: Academic Press.

SMITH, W. H. and TING, L.-F. (1979): Shou Neng Sheng Chyau [Your hands can

become a bridge]: Sign Language Manual. (vols. 1,2) Taipei, Taiwan: Deaf

Sign Language Research Association of the Republic of China.

SMITH, W.H. (1989): The Morphological Characteristics of Verbs in Taiwan Sign

Language. Bloomington: Indiana University Dissertation.

SMITH, Wayne (2005): Taiwan Sign Language Research: An Historical Overview. In:

James MYERS and James TAI, eds. Taiwan Sign Language. Special Issue of

Language and Linguistics (6.1).

SOEDERFELDT, B. et al. (1997): Signed and spoken language perception studied by

positron emission tomography. Neurology 49, 82–87.

STOCCHERO, Ilario (1991): Il servizio di interpretariato per i sordi. Problemi e

prospettive. Scuola e Città. Vol. 42, 7, 324-329.

STOCCHERO, Ilario (1995): L'interprete come intermediario tra sordi e udenti.

Sociologia della comunicazione, 20, 61-66.

STOKOE, William (1960): Sign language structure: An outline of the visual

communication systems of the American deaf. Studies in linguistics:

Occasional papers (No. 8). Buffalo: Dept. of Anthropology and Linguistics,

University of Buffalo.

STOKOE, William (1976): Dictionary of American Sign Language on Linguistic

Principles. Linstok Press.

STONE, Patrick. (1997): "The Art of Teaching: Children Who are Deaf and Hard of

Hearing." The Council for Exceptional Children. ERIC Clearinghouse on

Disabilities and Gifted Education #E551.

SU, Shiou-fen and TAI, James (2006): "Word Order in Taiwan Sign Language".

Proceedings of the First International Conference of Comparative Study of

East Asian Sign Languages, 153-163.

SU, Shiou-fen and TAI, James (2009): Lexical Comparison of Signs from Taiwan,

Chinese, Japanese, and American Sign Language: Taking Iconicity into

Account. In: James TAI and Jane TSAY, eds. Taiwan Sign Language and

Beyond. Taiwan Institute of Humanities. National Chung Cheng University.

SU, Shiou-fen and TAI, James (2007): Encoding Motion Events in Taiwan Sign

Language and Mandarin Chinese: Some Typological Implications. Paper

presented at the Second International Conference of the French Association for

Cognitive Linguistics, University of Lille, France. May 10-12, 2007.

SUTTON-SPENCE, R. and WOLL, B. (1999): The Linguistics of British Sign

Language: An Introduction. Cambridge: Cambridge University Press.

TAI, J.H.Y. (1993): "Iconicity: Motivations in Chinese Grammar." Principles and

Prediction: The Analysis of Natural Language, Mushira Eid and Gregory

Iverson, eds., Amsterdam: John Benjamins Publishing Company, pp.

153-174.

TAI, James (2005): Modality Effects: Iconicity in Taiwan Sign Language. In: POLA

FOREVER: Festschrift in Honor of Professor William S-Y. Wang on his 70TH

Birthday. Edited by Dah-an Ho and Ovid J. L. Tzeng. Institute of Linguistics,

Academia Sinica, Taipei, 19-36.

TAI, J.H.Y. (2005b): "Space use and iconicity in Taiwan Sign Language"

Invited Speech. The 13th Annual Meeting of the International Association

of Chinese Linguistics. Leiden University, Leiden, Netherlands. June 9-11.

TAI, James (2006): On Modality Effects and Relative Uniformity of Sign Languages.

Pre-Conference Proceedings of the 14th Annual Conference of the International

Association of Chinese Linguistics and 10th International Symposium on

Chinese Languages and Linguistics, 222-240. Taipei, May 25-28, 2006.

TAI, James (2007): Modality Effects and Syntactic Structures of Sign Language.

International Symposium on Language, Culture and Cognition, March 9-10,

2007.

TAI, James (2008): The Nature of Chinese Grammar: Perspectives from Sign Language.

Proceedings of the 20th North American Conference on Chinese Linguistics,

21-40. Ohio, Columbus: The Ohio State University.

TAI, James and TSAY, Jane, eds. (2009): Taiwan Sign Language and Beyond. Taiwan

Institute of Humanities. National Chung Cheng University.

TAI, James and TSAY, Jane (2010): Taiwan Sign Language Corpus: Digital Dictionary

and Database. 2010 TELDAP International Conference (數位典藏與數位學習國際會議),

41-47. Academia Sinica (中央研究院).

TAI, James and CHEN, Yijun (2010): Modality and Variation in Sign Languages. In: The

Joys of Research: A Festschrift in Honor of Professor William S-Y. Wang on His

Seventy-fifth Birthday (研究之樂-慶祝王士元先生七十五壽辰學術論文集), 330-348.

Shanghai Education Press (上海教育出版社).

TAMMASAENG, M. (1985): The effects of cued speech upon tonal perception of the

Thai language by hearing impaired children. Unpublished Doctoral

Dissertation. Gallaudet University, Washington, DC.

TAUB, S. (2001): Language from the Body: Iconicity and Metaphor in American Sign Language.

Cambridge: Cambridge University Press.

TOMASELLO, Michael (2008): Origins of Human Communication. Cambridge,

MA/London: The MIT Press. XIII, 393.

TSAY, Jane and MYERS, James (2009): The Morphology and Phonology of Taiwan

Sign Language. In James TAI and Jane TSAY, eds. Taiwan Sign Language and

Beyond. Taiwan Institute of Humanities. National Chung Cheng University.

TSAY, Jane (2007): The Syllable in Taiwan Sign Language. Paper presented at

International Symposium on Language, Culture, and Cognition (ISLCC).

March 9-10, 2007. National Taiwan University, Taipei, Taiwan.

TSAY, Jane (2010): Sonority and Syllable Structure in Taiwan Sign Language. Paper

presented at The Conference on Sign Linguistics and Deaf Education in Asia

2010, January 28-30, 2010. The Chinese University of Hong Kong.

TURNER, Graham (2005): Toward Real Interpreting. In Marc MARSCHARK, Rico

PETERSON and Elizabeth WINSTON, eds. Sign Language Interpreting and

Interpreter Education. Directions for Research and Practice. Oxford University

Press.

VUORIKOSKI, Anna-Riitta (1993): Simultaneous Interpretation: User Experience and

Expectation. In: PICKEN, ed. Translation – the Vital Link. Proceedings of the

13th World Congress of FIT (vol. 1, 317-327), London, Institute of Translation

and Interpreting.

VUORIKOSKI, Anna-Riitta (1998): User Responses to Simultaneous Interpreting. In: L.

BOWKER, M. CRONIN, D. KENNY and J. PEARSON, eds. Unity in

Diversity? Current Trends in Translation Studies. Manchester, St. Jerome

Publishing, 184-187.

WATERS, D. et al. (2007): Fingerspelling, signed language, text and picture

processing in deaf native signers: the role of the midfusiform gyrus.

NeuroImage 25, 1287–1302.

WAY, Catherine (2008): Systematic Assessment of Translator Competence: in search

of Achilles’ Heel. In: John KEARNS, ed. Translator and Interpreter Training:

issues, methods and debates. New York: Continuum, 88-103.

WILBUR, Ronnie Bring (1987): American Sign Language. Linguistic and Applied

Dimensions. A College-Hill Publication. Little, Brown and Company:

Boston/Toronto/San Diego.

WILCOX, P.P. (2000): Metaphors in American Sign Language. Washington, DC:

Gallaudet University Press.

WILCOX, S. (2004): Conceptual spaces and embodied actions: Cognitive iconicity

and signed languages. Cognitive Linguistics, 15(2), 119–147.

WILLIAMS, Malcolm (2004): Translation quality assessment: an

argumentation-centered approach. Ottawa: University of Ottawa Press.

WINEFIELD, Richard (1987): Never the Twain Shall Meet. Washington, D.C:

Gallaudet University Press, 4.

WINSTON, Elizabeth (2005): Designing a Curriculum for American Sign

Language/English Interpreting Educators. In Marc MARSCHARK, Rico

PETERSON and Elizabeth WINSTON, eds. Sign Language Interpreting and

Interpreter Education. Directions for Research and Practice. Oxford University

Press.

WISE, Steven M. (2003): Drawing the Line: Science and the Case for Animal Rights.

Basic Books.

WOLL, B. and PORCARI LI DESTRI, Giulia (1998): Higher Education Interpreting. In:

A. WEISEL, ed. Proceedings of the 18th International Congress on Education of

the Deaf - 1995. Tel Aviv: Ramot Publications - Tel Aviv University.

WOLL, B. (1985): The Ubiquity of Metaphor: Metaphor in Language and Thought.

Ed. by Wolf Paprotté and René Dirven. Amsterdam: Benjamins.

WOODWARD, J. and ALLEN, T. (1988): Classroom use of artificial sign systems by

teachers. Sign Language Studies, 61, 405-418.

XU, G.-Q. (1999): Xiandai Hanyu Cihui Xitong Lun (Lexicology System in Modern

Chinese). Beijing: Beijing University Press.

YANG, Yi-An (2008): Cong tingzhong fanying tantao hanyu chengyu dui kouyi

pinzhi zhi yingxiang (From the response of the audience analysis of the

influence of chengyu on the quality of interpreting). Fu-jen University

GITIS unpublished Master thesis.

ZHANG, Z.-Z. (2004): Lilun Xiucixue hongguan shiyexia de da xiuci (Rhetoric

Theory: a comprehensive examination of theory). Beijing: China Social

Sciences Press.

ZHANG, Niina Ning (2007): Universal 20 and Taiwan Sign Language. Sign

Language and Linguistics 10 (1): 55-81.

ZHENG, P.-X. (2005): Chengyu Jufa Fenxi jiqi Jiaoxue Celue Yanjiu (Study on the

syntax and the pedagogy of Chinese idioms). Unpublished Master Thesis,

National Sun Yat-Sen University, Kaohsiung, Taiwan.

ZHONG, Yong (2005): Plan-based translation assessment. Meta, 50(4).

TAI, James H-Y. and TSAY, Jane (2009): "The Nature of Sign Language: The Case of

Taiwan Sign Language" (手語的本質:以臺灣手語為例). In: SU, Lily I-wen and

BIQ, Yung-O, eds. Language and Cognition (語言與認知), 125-176.

ZHU, J.-N. (1999): Hanyu Cihuixue (Chinese Lexicology). Taipei: Wunan Culture Enterprise.


Dictionaries of Taiwan Sign Language Used

Ministry of Education, Special Education Work Group (2000): Changyong Cihui

Shouyu Huace (Sign Language Dictionary of Commonly Used Vocabulary.)

Vol.2. Taipei: Ministry of Education.

Ministry of Education, Special Education Work Group (2000): Changyong Cihui

Shouyu Huace (Sign Language Dictionary of Commonly Used Vocabulary.)

Vol.1. Taipei: Ministry of Education.

Chinese YMCA Hong Kong, Taipei YMCA, Kuala Lumpur YMCA, Osaka YMCA

eds. (1989): Speaking with Signs. (Fourth Version) Osaka: Osaka YMCA. [A

dictionary of Hong Kong Sign Language, Taiwan Sign Language, Japanese Sign

Language and Malaysian Sign Language, with a page of fingerspelling of Korean

Sign Language.]

Ministry of Education (1987): Shouyu Huace (Sign Language Dictionary.) Vol. 2.

Taipei: Ministry of Education.

Chinese YMCA Hong Kong, Taipei YMCA, The Society for the Deaf in Selangor

and the Federal Territory, Osaka YMCA eds. (1984): Speaking with Signs. (Third

Version) Osaka: Osaka YMCA. [A dictionary of Hong Kong Sign Language,

Taiwan Sign Language, Japanese Sign Language and Malaysian Sign Language,

with a page of fingerspelling of Korean Sign Language.]

Yang, Chiung-huang and Lu Nan-Chou (1984): Piaochun shouyu shout'se. Taipei:

Lung ya fu li hsieh hui.

Chinese YMCA Hong Kong, Taipei YMCA, The Society for the Deaf in Selangor

and the Federal Territory, Osaka YMCA eds. (1980): Speaking with Signs.

(Second Version) Osaka: Osaka YMCA. [A dictionary of Hong Kong Sign

Language, Taiwan Sign Language and Japanese Sign Language.]

Chinese YMCA Hong Kong, Taipei YMCA, The Society for the Deaf in Selangor

and the Federal Territory, Osaka YMCA eds. (1979): Speaking with Signs. (Book

One) Hong Kong: Chinese YMCA. [A dictionary of Hong Kong Sign Language,

Taiwan Sign Language and Japanese Sign Language.]

Li, Junyu (1978): Shouyu Huace (Sign Language Dictionary). Taipei: Ministry of

Education, Department of Social Education.

APPENDIX I

In this appendix, I will provide the reader with the original Chinese version52 of the

tables that I translated throughout the thesis.

Table 1 Chinese version

提供手語翻譯及視力協助服務人員資格及補助標準表

類別 服務內容 應具備資格 補助標準 備註

一 般 性會 議 、 課第一類:符合下列資格之一並可3. 符合第一類資格者,每1. 同等級 程: 提供證明文件者: 小時補助新臺幣(以 指曾擔 下同)一千元;符合 任手語 1.會議或研討 5. 有手語翻譯技術士證(含同等 第二類資格者,每小 翻譯員 級)或領有手語翻譯員資格 2.工作訓練 時 補 助 新 臺 幣 五 百 命題委 證明(含同等級)後,擔任 元。 員暨評 3. 涉 及技 術 操 作 手語翻譯服務滿二百小時以 4. 申請手語翻譯服務之 審委員 及 測 驗 較 複 雜 上可提供證明文件者。 個案,每人每月最高 者或取 之面試 6. 經政府認可、補助或委辦手語 以補助十小時、每年 得手語 翻譯專業訓練滿二百小時並 4.其他 不超過一百二十小時 翻譯技 手 擔任手語翻譯服務滿二百小 為原則。如個案有特 術士監 語 時以上。 翻 殊需求,可依實際狀 評資格 7. 擔任手語翻譯服務滿四百小 譯 況 酌 予 增 加 補 助 時 人員 服 時以上。 務 數。 者。 8. 有手語翻譯技術士或手語翻 2. 具備第 譯員資格證明(含同等級) 一類資 後翻譯服務未滿二百小時。 格之人 1.簡易面談 第二類:符合下列資格之一並可 員,得 提供證明文件者: 2. 職 場溝 通 及 輔 提供第 導 3. 經政府認可、補助或委辦手語 二類服 翻譯專業訓練滿二百小時擔 務。 任手語翻譯服務滿一百小時 以上。 4. 手語翻譯服務滿二百小時。

52 Source: http://ir.chna.edu.tw/bitstream/310902800/9160/2/177003%E8%A1%93%E7%A7%91.pdf

職業訓練 符合第一、二類資格者 公私立職業訓練機構或接 受政府委託辦理職業訓練 之單位辦理職業訓練招收 聽、語障學員之班次,每 班 得 編 列 手 語 翻 譯 員 一 名,其酬勞每日最高以一 千五百元編列。

Table 5 Chinese version 口語翻譯成手語

應檢人姓名 准考證號碼 第 □ 場 試題名稱 手 語 翻 手 語 評 審 項 目 (一) 凡有下列情事之一者,為不及格。(於該項□內打勾) □ 1.缺考。 □ 3.冒名代人檢 定者。 □ 2.未完成。(含中途棄權) □ 4.不遵守考場規定,經勸導無效 者。 (二) 凡無上項任一情事者,即作下列各項評分: 比 最高給 實給分 備 評審項目 給分參考標準 重 分 數 註 流暢度分五等級: 優(給21~25 分) 25 良(給16~20 分) 1. 手語翻譯技巧 25 分 % 可(給11~15 分) 尚可(給6~10 分) 差(給 1~5 分) 正確度分五等級: 優(給21~25 分) 良(給16~20 分) 2. 內容表達正確性 可(給11~15 分) 尚可(給6~10 分) 差(給 1~5 分) 3. 手語詞彙變化與 遺漏一個口語詞彙扣 0.5

應用 分 語氣(抑揚頓挫)分四等 級: 優(給16~20 分) 4. 語氣 良(給11~15 分) 可(給6~10 分) 差(給 1~5 分) 時間控制分五等級: 優(給21~25 分) 5. 口語配合手語時 良(給16~20 分)

間 可(給11~15 分) 尚可(給6~10 分) 差(給 1~5 分) 合 100 100 分 計 % 註:本項目滿分為 100 分,實得分數達 60 分以上者為及格 共得分數: 評 審 1. □ 及格 2. □ 不及 結 果 格 分 監 評 人 員 (請勿於測試結束前先行簽名) 簽 名

Table 6 Chinese version 手語翻譯成口語

應檢人姓名 准考證號碼 第 □ 場 試題名稱 手 語 翻 手 語 評 審 項 目 (一) 凡有下列情事之一者,為不及格。(於該項□內打勾) □ 1.缺考。 □ 3.冒名代人檢 定者。 □ 2.未完成。(含中途棄權) □ 4.不遵守考場規定,經勸導無效 者。 (二) 凡無上項任一情事者,即作下列各項評分: 比 最高給 實給分 備 評審項目 給分參考標準 重 分 數 註


流暢度分五等級: 優(給21~25 分) 25 良(給16~20 分) 1. 口語翻譯技巧 25 分 % 可(給11~15 分) 尚可(給6~10 分) 差(給 1~5 分) 正確度分五等級: 優(給21~25 分) 良(給16~20 分) 2. 內容表達正確性 可(給11~15 分) 尚可(給6~10 分) 差(給 1~5 分) 3. 口語詞彙變化與 遺漏一個口語詞彙扣 0.5

應用 分 語氣(抑揚頓挫)分四等 級: 優(給16~20 分) 4. 語氣 良(給11~15 分) 可(給6~10 分) 差(給 1~5 分) 時間控制分五等級: 優(給21~25 分) 5. 口語配合手語時 良(給16~20 分)

間 可(給11~15 分) 尚可(給6~10 分) 差(給 1~5 分) 合 100 100 分 計 % 註:本項目滿分為 100 分,實得分數達 60 分以上者為及格 共得分數: 評 審 1. □ 及格 2. □ 不及 結 果 格 分 監 評 人 員 (請勿於測試結束前先行簽名) 簽 名

Table 7 and 8 Chinese version 口手語雙向翻譯

應檢人姓名

准考證號碼 第 □ 場 試題名稱 手 語 翻 手 語 評 審 項 目 (一) 凡有下列情事之一者,為不及格。(於該項□內打勾) □ 1.缺考。 □ 3.冒名代人檢 定者。 □ 2.未完成。(含中途棄權) □ 4.不遵守考場規定,經勸導無效 者。 (二) 凡無上項任一情事者,即作下列各項評分: 比 最高給 實給分 備 評審項目 給分參考標準 重 分 數 註 流暢度分五等級: 優(給21~25 分) 1. 口語再加手語翻 25 良(給16~20 分) 25 分 譯技巧 % 可(給11~15 分) 尚可(給6~10 分) 差(給 1~5 分) 正確度分五等級: 優(給21~25 分) 良(給16~20 分) 2. 內容表達正確性 可(給11~15 分) 尚可(給6~10 分) 差(給 1~5 分) 3. 口語詞彙變化與 遺漏一個口語詞彙扣 0.5

應用 分 語氣(抑揚頓挫)分四等 級: 優(給16~20 分) 4. 語氣 良(給11~15 分) 可(給6~10 分) 差(給 1~5 分) 時間控制分五等級: 優(給21~25 分) 5. 口語配合手語時 良(給16~20 分)

間 可(給11~15 分) 尚可(給6~10 分) 差(給 1~5 分)


合 100 100 分 計 % 註:本項目滿分為 100 分,實得分數達 60 分以上者為及格 共得分數: 評 審 1. □ 及格 2. □ 不及 結 果 格 分 監 評 人 員 (請勿於測試結束前先行簽名) 簽 名

Table 9 Chinese version 手語翻譯丙級技術士技能檢定術科測試評分標準表(聽人監評用)

科 別 科 術

第 三 第 第 ︵ 站 ︵ 二 一 種 技 總 : 總 站 站 分 分

100 口 100 : : 手 手 不 語 語 評 分 雙 分 翻 類 能 ︶ ︶ 向 口 翻 語 譯

詞 表 手 語 口 口 語 口 內 口 彙 情 語 氣 語 語 氣 語 容 語 評 變 、 翻 翻 配 詞 表 翻 化 儀 譯 譯 合 彙 達 譯 與 態 技 技 手 變 正 技 應 巧 巧 語 化 確 巧 用 時 與 性 間 應 準 分 用

數

例 20 20 20 20 20 20 25 25 25 5

% % % % % % % % %

Table 10 Chinese version 手語翻譯丙級技術士技能檢定術科測試評分標準表(聽障監評用)

科 別 科 術

100 第 第 三 一 分 第 ︵ 站 站 二 種 技 ︶ 總 : : 站 分

100 口 不 : 手 評 手 語 語 分 雙 翻 ︵ 類 能 ︶ 向 口 翻 總 語 譯 分

手 表 手 內 手 手 表 手 內 手 語 情 語 容 語 語 情 語 容 語 評 配 、 詞 表 翻 配 、 詞 表 翻 合 儀 彙 達 譯 合 儀 彙 達 譯 口 態 變 正 技 口 態 變 正 技 語 化 確 巧 語 、 化 確 巧 時 與 性 時 裝 與 性 間 應 間 扮 應 準 分 用 用


例 20 25 25 25 20 25 25 25 5 5

% %

% % % % % % % %

Table 11 Chinese version

手語翻譯丙級技術士技能檢定術科測試監評總表

應檢人 年 月 檢定日期 姓名 日

准考證 監評長簽名 (請勿於測試結束前先行簽名) 號碼

及格 不及格 缺考 總評結 果

應 各站評審結果 檢

人 監評人員 缺考 項目 分數 到 簽名 缺 及格 不及格 檢 考 註 記

第一站 口語翻手語

第二站 手語翻口語


口手語雙向 第三站 翻譯

Fig 9

1. Basic information

Applicant Telephone

(Unit) E-mail

2. Information concerning today’s service

Date

Name of Interpreter

Hour From… to…

Have you paid the interpreting fee yet? Yes,….. NTD

No

Where did you learn about the interpreting service? Governmental organization / Deaf organization

Friends

Website (which one)

Other

3. Interpreting service satisfaction

Very satisfied / Satisfied / No comment / Not satisfied / Extremely not satisfied

Administrative staff

explanation of the service

Service attitude

Service efficiency

Interpreters’ service attitude

Interpreters’ punctuality

Sign language ability

Demeanor and appearance

APPENDIX II

In this appendix, I will present the transcript of the source speech which was played to the experiment subjects in chapter six.

After the transcription of the source speech, I will also provide the reader with the transcripts of the two interpreted versions of all subjects.

The speech was transcribed from the following YouTube link: http://www.youtube.com/watch?v=3V5tg4SvGks&feature=relmfu

Source speech53:

商周兩岸電信論壇領袖頂峰會-王建宙、呂學錦、焦威文、王文靜對談

Question:

最後兩個問題,是我代表企業界,來作的提問。這個問題是英華達的副經理張雪玲

她所問的。她問了王董事長, 她問說如何刺激中國消費者在 3G 平臺上消費,我們

談了這麽多,到底要怎麽去刺激中國的消費者在 3G 平臺上消費? 中國移動做法是

什麽?

Answer: 其實剛才已經談到了這方面的事情, 就是最終的還是要看消費,因爲投資的是

一種固定資產的投資,是一種(英文??)消耗吧。那麽這個都是一個企業的一

種行爲。 但是作爲電信行業, 它本身是一個服務行業。 它最終要靠消費者的

使用,纔是我們能夠可以說投資的。所以其實我們最關心的也是怎麽來啓動消

費。那麽我想這個消費本身已經存在著巨大的需求了,那麽籍助於 3G 呢我覺

53 Subjects were instructed to listen to the question and interpret the answer only.

得除了前面所說的速率更高速度更快等等以外, 最重要的是籍助於 3G 能夠創

造出更多的新的消費。再創造出幾個像移動音樂一樣新的消費。 那麽前面所說

的物聯網,手機支付,電子閲讀,我覺得在 2G 時代也能做,但是在 3G 時代會

做得更好; 所以我覺得創造一些新的消費的項目,這是最重要的。

Subject A:

First version: 真的(其實)/剛剛/談/完了/有// REALLY/JUST/TALK/FINISHED/HAVE// 真的/看/買人// REALLY/LOOK/BUYER// 投資/是/固定// INVEST/BE/FIXED// 這個/是/公司/做// THIS/BE/COMPANY/DO// 服務/做/靠/買人/用 (花錢)++// SERVICE/DO/RELY/BUYER/USE(SPEND)// 公司/回/賺錢/可以// COMPANY/BACK/EARN/MONEY/CAN 關心/什麽// CARE/WHAT// 鼓勵/買++// SPUR/BUY// 買/要/一樣// BUY/MUST/THE SAME// 3G/我覺得/剛剛/談/完了/有// 3G/THINK/JUST/TALK/FINISHED/HAVE// 速度/快/除此以外// SPEED/FAST/OTHER// 重要/什麽// IMPORTANT/WHAT// 靠/3G/鼓勵/買++// RELY/3G/SPUR/BUY//

剛剛/說/網/ 手機/買++/IPAD/閲讀// JUST/SPEAK/NET/CELL/BUY/IPAD/READ// 2G/說/一樣/可以// 2G/SAY/SAME/CAN// 但/3G/做/越// BUT/3G/DO/EXCEED// 鼓勵/ 新/買/可以// SPUR/NEW/BUY/CAN//

Second version: 真的(其實)/剛剛/談/完了/有// REALLY/JUST/TALK/FINISH/HAVE// 做/真的/看/買人// DO/REALLY/LOOK/BUYER// 投資/是/固定// INVEST/BE/FIXED// 公司/做/最近/服務/事情/靠/買人/用/買人// COMPANY/DO/RECENT/SERVICE/THINGS/RELY/BUYER/USE/BUYER// 公司/賺錢/可以// COMPANY/EARN/CAN// 重/關心/什麽 // HEAVY/CARE/WHAT// 鼓勵/買// SPUR/BUY// 我/覺得/買// I/THINK/BUY// 3G/速度/快// 3G/SPEED/FAST// 除此以外/重要/什麽// OTHER/IMPORTANT/WHAT// 靠/3G/幫忙/買// RELAY/3G/HELP/BUY// 像/情形/剛剛/說// LIKE/SITUATION/JUST/SAY// 網/ IPAD/讀// NET/IPAD/READ// 我/覺得/2G/做/一樣//


I/THINK/2G/DO/SAME// 3G/一樣/可以// 3G/SAME/CAN// 勝過// WIN// xxx

Subject B:

First version: 剛/全部/都/講/完了// JUST/COMPLETE/ALL/TALK/FINISHED// 最終/看/情形// EVENTUALLY/LOOK/SITUATION// 其實/投資/目的/什麽//ACTUALLY/INVEST/PURPOSE/WHAT// 是/消費/各式各樣/行爲 BE/CONSUME/DIFFERENT TYPES/BEHAVIOR// 像/電信// LIKE/TELECOMMUNICATION// 什麽/靠// WHAT/RELY// 你們/用/幫/我們/投資/回收// YOU/USE/HELP/WE/INVEST/RETRIEVE// 其實/我們/重視/什麽// ACTUALLY/WE/RESPECT/WHAT// 關心/開始+建制/啓動// CARE/START+ORGANIZE/INITIATE// 我/想/消費/沒有/發生/事// I/THINK/CONSUME/NOT HAVE/HAPPEN/THINGS// 每個人/要// EVERYBODY/MUST// 3C/速度/快// 3C/SPEED/FAST// 還有/可以/借有/什麽// FURTHEMORE/CAN/BORROW+USE/WHAT// 開始/促進/消費// START/BOOST/CONSUME//

像/音樂/可以/開始// LIKE/MUSIC/CAN/START// 還有/手機/閲讀/等等// FURTHERMORE/CELL/READ/ETC// 我/覺得/之後/可以/做/(進步)// I/THINK/AFTER/CAN/DO/(PROGRESS)// 我/想/開始/發明/新// I/THINK/START/INVENT/NEW//

Second version: 其實/我/剛剛/說/完/這些事// ACTUALLY/I/JUST/SAY/FINISHED/THESE THINGS// 最終/要/看/什麽// EVENTUALLY/MUST/LOOK/WHAT// 你們/是/以後/能夠/投資/目的/是/什麽// YOU/BE/AFTER/CAN/INVEST/PURPOSE/BE/WHAT// 是/靠/每個人/用// BE/RELY/EVERYBODY/USE// 像/商業/行爲/電信/最終/要/靠/每一個人/用// LIKE/BUSINESS/BEHAVIOR/TELECOMMUNICATION/EVENTUALLY/MUST /RELY/EVERYBODY/USE// 幫/我們/全部/投資/回收// HELP/WE/COMPLETE/INVEST/RETRIEVE// 我們/關心/什麽// WE/CARE/WHAT// 就是/開始+建置// BE/START+ORGANIZE// 我/想/使用/每個人/需求// I/THINK/USE/EVERYBODY/NEED// 除/3C/之外// EXCEPT/3C/ELSE// 速度/可以/加快// SPEED/CAN/QUICKEN// 我們/開始/發明/新的// WE/START/INVENT/NEW// 還有/音樂/手機/閲讀/等等// FURTHERMORE/MUSIC.CELL/READ/ETC// 我/覺得/3C/要/開始//


I/THINK/3C/MUST/BEGIN// 想/發明/新的/使用/重要// THINK/INVENT/NEW/USE/IMPORTANT//

Subject C:

First version: 剛/討論/完了// JUST/DISCUSS/FINISHED// 投資/目的/是/什麽// INVEST/PURPOSE/BE/WHAT// 像/固定/資產/本來/就是/一種/消費型// LIKE/FIXED/CAPITAL/ORIGINALLY/BE/A TYPE/CONSUME MODE// 公司/本來/有/這種/習性// COMPANY/ORIGINALLY/HAVE/THIS TYPE/USE// 電信/公司/本來/是/服務業// TELECOMMUNICATIONS/COMPANY/ORIGINALLY/BE/SERVICE SECTOR// 如何/幫忙/消費者/刺激/他們/來/使用// HOW/HELP/CONSUMER/BOOST/THEY/COME/USE// 我/比較/關心/是/如何/刺激/吸引/他們/來/消費// I/RATHER/CARE/BE/HOW/BOOST/LURE/THEM/COME/CONSUME// 那些/消費/需求//多存在// THOSE/CONSUME/NEED/EXIST// 剛/提到/速度/快/速率/高/之外// JUST/MENTION/SPEED/FAST.EFFICIENCY/HIGH/APART// 最終/目的/要/利用/3g/創造/新/消費// EVENTUALLY/PURPOSE/MUST/USE/3G/INNOVATE/NEW/CONSUME// 我們/提到/音樂/物聯網/電子書/手機/付錢/等等// WE/MENTION/MUSIC/NETWORK/ELECTRONIC BOOKS/CELL/PAY/ETC 而/2G/時代/做/可以// AND/2G/ERA/DO/CAN// 3g/一樣/可以// 3G/SAME/CAN// 像/這些/都/是/新的/消費/都/很/重要// LIKE/THESE/ALL/BE/NEW/CONSUME/ALL/VERY/IMPORTANT//

Second version:

剛/討論/完了// JUST/DISCUSS/FINISHED// 最終/目的/是/要/讓/消費者/願意/來/使用// EVENTUALLY/PURPOSE/BE/MUST/MAKE/CONSUMERS/WILL/COME/USE/ / 這些/都/是/公司/習慣// THESE/ALL/BE/COMPANY/HABIT// 本來/電信/公司/也/是/服務業// ORIGINALLY/TELECOMMUNICATIONS/ALSO/BE/SERVICE SECTOR// 要/消費者/有/使用/投資/可以/獲利// MUST/CONSUMER/HAVE/USE/INVEST/CAN/WIN// 所以/要/刺激/消費者/使用// SO/MUST/STIMULATE/CONSUMERS/USE// 這些/需求/都/存在// THESE/NEEDS/ALL/EXIST// 像/我們/剛/提到/速率/高/速度/快// LIKE/WE/JUST/MENTION/EFFICIENCY/HIGH/SPEED/FAST// 這些/需求/2g/可以/做得到// THESE/NEEDS/2G/CAN/DO// 可以/刺激到/更多/信心/消費// CAN/BOOST/MORE/CONFIDENCE/CONSUME// 比如説/物聯網//手機/服務費//電子書/等等// FOR EXAMPLE/INTERNET/CELL/SERVICES SECTOR/ELECTRONIC BOOKS/ETC// 以前/2g/可以/做// BEFORE/2G/CAN/DO// 現在/升級到/3g/也/可以/做// NOW/INCREASE/3G/ALSO/CAN/DO// 所以/要/創造/新的/消費/最/重要的// THEREFORE/MUST/CREATE/NEW/CONSUME/IMPORTANT//

Subject D:

First version: 真的/剛剛/討論過// REALLY/JUST/DISCUSS//


這/事情// THIS/THING// 再/看/消費// AGAIN/LOOK/CONSUME// 股票/他們/是/固定/是// STOCK/THEY/BE/FIXED/BE// 消費/針對/這個// CONSUME/RESPECT TO/THIS// 企業/公司/是/針對/這個/用// ENTERPRISE/COMPANY/BE/RESPECT TO/THIS/MONTH// 但/電信/服務/靠/什麽// BUT/TELECOMMUNICATIONS/SERVICES/RELY/WHAT// 消費/族區// CONSUME/GROUP// 他們/是/用/這個// THEY/BE/USE/THIS// 賺錢/會// EARN/CAN// 真的/關心/什麽// REALLY/CARE/WHAT// 希望/消費者/他們// HOPE/CONSUMERS/THEY// 想/購買/要/有// WANT/BUY/MUST/HAVE// 3g/靠/這個// 3G/RELY/THIS// 剛/說過/速度/快// JUST/SAY/SPEED/FAST// 品質/提升// QUALITY/INCREASE// 再/靠/什麽// AGAIN/RELY/WHAT// 3g/刺激/其它/鏈接// 3G/BOOST/OTHER/LINK// 就像/音樂/物聯網/第三/手機/付錢/電子閲讀// LIKE/MUSIC/INTERNET/THIRD/CELL/PAY/ELECTRONIC READING/ 之前/2G/做過/可以// BEFORE/2G/DO/CAN//

但是/3g/程度/更好// BUT/3G/LEVEL/BETTER//

Second version: 剛剛/討論/過/這事// JUST/DISCUSS/PAST/THIS// 最後/看/什麽// EVENTUALLY/LOOK/WHAT// 消費/族區/他們// CONSUME/GROUP/THEY// 爲什麽/股票/固定// WHY/STOCK MARKET/FIXED// 花費// SPEND// 公司/企業/它們/大部分//(都是用股票) COMPANY/ENTERPRISE/THEY/MOST PART// 但/電信/服務/是/靠/什麽// BUT/TELECOMMUNICATIONS/BE/RELY/WHAT// 消費/族區/他們// CONSUME/GROUP/THEY// 但/花費/要// BUT/SPEND/MUST// 賺錢/會// EARN/CAN// 真的/關心/什麽// REALLY/CARE/WHAT// 希望/消費/族區// HOPE/CONSUME/GROUP// 他們/刺激/購買/要// THEY/STIMULATE/BUY/MUST// 像/消費/要/用/3g/依靠/這個// LIKE/CONSUME/MUST/USE/3G/RELY/THIS// 之前/說過/速度/快/品質/提升// BEFORE/SAY/SPEED/FAST/QUALITY/INCREASE// 再/靠/什麽// AGAIN/RELY/WHAT// 重要/希望/這/3G//


IMPORTANT/HOPE/THIS/3G// 消費/靠/他們/刺激// CONSUME/RELY/THEY/STIMULATE// 就像/音樂/物聯網/手機/付錢/電子閲讀// LIKE/MUSIC/INTERNET/CELL/PAY/ELECTRONIC READING// 之前/2g/做過// BEFORE/2G/DO// 但是/3g/做/更好// BUT/3G/DO/BETTER// 這/重要// THIS/IMPORTANT//

Subject E:

First version: 剛/全部/講/完了// JUST/COMPLETE/TALK/FINISHED// 最後/看/情形// EVENTUALLY/LOOK/SITUATION// 其實/股票/目的/什麽// ACTUALLY/STICK MARKET/PURPOSE/WHAT// 是/消費/行爲 BE/CONSUME/BEHAVIOR// 像/電信//什麽/靠// LIKE/TELECOMMUNICATION/WHAT/RELY// 你們/幫/我們/股票/回收// YOU/HELP/WE/VOTE/RETRIEVE// 但/我們/重視/什麽// BUT/WE/VALUE/WHAT// 關心/開制// CARE/INITIATE// 我/想/消費(花錢)/沒有/發生/事 I/THINK/CONSUME/NOT HAVE/HAPPEN/THINGS// 每個人/要// EVERYBODY/MUST// 3C/速度/快 3C//SPEED/FAST//

還有/可以/通過/什麽// FURTHERMORE/CAN/PASS/WHAT// 開始/促進/花費// START/PROMOTE/SPEND MONEY// 像/音樂/開始/可以// LIKE/MUSIC/START/CAN// 還有/手機/閲讀/等等// FURTHERMORE/CELL/READING/ETC// 我/覺得/之後/可以/做/(進步)// I/THINK/AFTER/CAN/DO/PROGRESS// 我/想/開始/發明/新/可以// I/THINK/START/INVENT/NEW/CAN//

Second version: 其實/剛剛/談/完了/有// ACTUALLY/JUST/TALK/FINISHED/HAVE// 真的/看/買/人// REALLY/LOOK/BUYER// 股票/是/固定// STOCK MARKET/BE/FIXED// 公司/做/最近/服務/事情/靠/買人// COMPANY/DO/RECENTLY/SERVICE/THINGS/RELY/BUYER// 公司/賺錢/可以// COMPANY/EARN/CAN// 關心/什麽// CARE/WHAT// 鼓勵/買// ENCOURAGE/BUY// 我/覺得/買// I/THINK/BUY// 3G/速度/快// 3G/SPEED/FAST// 除此以外/重要/什麽// OTHER/IMPORTANT/WHAT// 靠/3G/幫忙/花費// RELY/3G/HELP/SPEND// 像/情形/剛剛/說// LIKE/SITUATION/JUST/SAY//


網/電子閲讀// NET/ELECTRONIC READING// 我/覺得/2G/做/可以// I/THINK/2G/DO/CAN// 3G/一樣/可以/更好// 3G/SAME/CAN/BETTER// 這/重要// THIS/IMPORTANT//

Oral interpreted versions

Subject F

First version: Actually we just talked about this thing, eventually we still have to look at consumers, because investing is a type of fixed assets, some kind of consumption.

This is a business-like behavior, but in the sector of telecommunications, in itself it is part of the service industry. It has to rely on consumer’s use, this is the only way for us to be able to talk about investing. So, what we most care about is also how to spur consumption. In consumption itself there is a huge demand and by means of the

3G, apart from what we have already talked about like being faster and more efficient, the most important thing is to create new types of consumption, like portable music, or mobile payment, electronic reading. These services could be implemented in the 2G era and in the 3G era they can be even better. So, I think creating new services is the most important thing.

Second version: Actually we just talked about it, eventually we will have to focus on consumers, because investing is a type of fixed assets, some kind of consumption. It’s a business-like behavior, but in the sector of telecommunications, it can be considered

as part of the service industry. It has to rely on consumption, this is the only way for us to be able to talk about investing. So, what we most care about is how to spur consumption. In consumption itself there is a huge demand and thanks to the 3G, apart from what we have already talked about like being faster and more efficient, the most important thing is to create new types of consumption, like portable music, or mobile payment, electronic reading, and so on. These services could be implemented in the 2G era and in the 3G era they can be done even better. So, I think creating new services is the most important thing.

Subject G

First version: Actually we just talked about this, at the end we will have to look at consumers, because investing is a type of fixed assets, a type of consumption. It is an entrepreneurial behavior, but in the sector of telecommunications, it is also a part of the service industry. It has to rely on consumers, this is the only possible way to talk about investing. So, we think it is important to encourage consumption. In consumption itself there is a huge demand and by means of the 3G, apart from what we have already talked about like being faster and more efficient, the most important thing is to create new types of consumption, like music, or mobile payment, electronic reading. These services could be implemented in the 2G era and in the 3G era they can be carried out even better. So, I think creating new services is crucial.

Second version: Actually we just mentioned this, at the end of the day we still have to look at consumers, because investing is a type of fixed assets, some kind of consumption.

This is an entrepreneurial attitude, but in the sector of telecommunications, in itself it is part of the service industry. It has to rely on consumption, the only way to be able to talk about investing. So, what we most care about is also how to encourage consumption. In consumption itself there is a huge demand and by means of the 3G, other than what we have already talked about like rapidity and efficiency, the most important thing is to create new types of consumption, like portable music, or mobile payment. These services could be implemented in the 2G era and in the 3G era they can be even better. So, I think creating new services is the most crucial aspect.

Subject H

First version:

As we just mentioned, eventually we will still have to look at consumers, because investing is a type of fixed assets, some kind of consumption. It is a business-oriented behavior, but in telecommunications, it becomes part of the service industry. It has to rely on consumption, the only way to talk about investing.

So, what we most care about is also how to incentivize consumption. In consumption itself there is a huge demand and by means of the 3G, apart from what we have already mentioned like being faster and more efficient, the most important thing is to create new types of services, like portable music, and electronic reading.

These services could be implemented in the 2G era and in the 3G era they can be even more efficient. So, I think creating new services is the most important thing.

Second version:

As we just talked about this, at the end we still have to look at consumers, because investing is a type of fixed assets, some kind of consumption. It is a business, but in the sector of telecommunications, it is also part of the service industry. It has to rely

on consumption, this is the only way for us to be able to talk about investing. So, what we most care about is also how to boost consumption. In consumption there is a huge demand and thanks to the 3G, apart from what we have mentioned already like being faster and more efficient, the most important thing is to create new types of consumption, like portable music, mobile payment, and electronic reading.

These services could be implemented in the 2G era and in the 3G era they can be improved. So, I think creating new services is the most important aspect.

Subject I

First version:

Actually we just talked about this thing, eventually we still have to look at consumers, because investing is some kind of consumption. This is a business behavior, but in the sector of telecommunications, it is part of the service industry. It has to rely on consumers' use, this is the only way for us to be able to talk about investing. So, what we most care about is how to boost consumption. In consumption there is a huge demand and by means of the 3G, apart from what we have already talked about like the advantages of being faster and more efficient, the most important thing is to create new types of consumption, like portable music, or online payment, electronic reading. These services could be implemented in the 2G era and in the 3G era they can be even better. So, I think creating new services is the most important thing.

Second version:

As we just mentioned, eventually we still have to consider consumers, because investing is a type of fixed assets, some kind of consumption. It is a risky attitude, but in the sector of telecommunications, it is part of the service industry. It has to rely on consumers' needs, this is the only way for us to be able to talk about investing. So, what we most care about is also how to boost consumption, where there is a huge demand and by means of the 3G, apart from what we have already talked about like being faster and more efficient, the most important thing is to create new types of consumption, like portable music, or mobile payment. These services could be implemented in the 2G era and in the 3G era they can be even better. So,

I think creating new services is very important.

Subject L

First version: Actually we just talked about this thing, at the end we still have to look at consumers, because investing is a fixed asset, some kind of consumption. This is a business-like behavior, but in the sector of telecommunications, in itself it is part of the service industry. It has to rely on consumers' use, this is the only way for us to be able to talk about investing. So, we emphasize the importance of boosting consumption. Consumption itself has a huge demand and by means of the 3G, apart from what we have already talked about like being faster and more efficient, the most important thing is to create new types of consumption, like portable music, or mobile payment, electronic reading. These services were implemented in the 2G era and in the 3G era they can be improved. So, I think creating new services is very important.

Second version: Actually we just talked about this thing, at the end we still have to look at consumers,

because investing is some kind of consumption. This is a business-oriented behavior, but in the sector of telecommunications, in itself it is part of the service industry. It has to rely on consumption, this is the only way for us to be able to talk about investing. So, we emphasize the importance of boosting consumption. Consumption itself has a huge demand and by means of the 3G, apart from what we have already talked about like being faster and more efficient, the most important thing is to create new types of consumption, like music and electronic reading. These services were implemented in the 2G era and in the 3G era they can be improved even more.

So, I think creating new services is extremely important.


Appendix III: TSL handshapes

Taken from Chang and Tai (2005)

零 一 二 三 四 五 LING YI ER SAN SI WU

六 七 八 九 十 二十 LIU QI BA JIU SHI ERSHI

三十 四十 五十 六十 七十 八十 SANSHI SISHI WUSHI LIUSHI QISHI BASHI

(K) WC 千 女 手 方 (K) WC QIAN NÜ SHOU FANG

兄 (奶奶) (高) (布袋戲) 同 守 XIONG (NAINAI) (GAO) (BUDAIXI) TONG SHOU

呂 男 姐 果 很 胡 LÜ NAN JIE GUO HEN HU


借 拳 隻 紳 博 棕 JIE QUAN ZHI SHEN BO ZONG

童 筆 菜 (爺) (矮) 萬 TONG BI CAI (YE) (AI) WAN

像 語 (細) 飛機 錢 鴨 XIANG YU (XI) FEIJI QIAN YA

龍 薑 蟲 雞 (鵝) 難 LONG JIANG CHONG JI (E) NAN
