Steps Toward a Grammar Embedded in Data Nicholas Thieberger

<p><strong>Steps toward a grammar embedded in data </strong></p><p><em>Nicholas Thieberger </em></p><p><strong>1. Introduction</strong><sup style="top: -0.38em;"><strong>1 </strong></sup></p><p>Inasmuch as a documentary grammar of a language can be characterized at all – given the formative nature of the discussion of documentary linguistics (cf. Himmelmann 1998, 2008) – part of that characterization must lie in the relationship of the analysis to the recorded data, both in the process of conducting the analysis with interactive access to primary recordings and in the presentation of the grammar with references to those recordings. <br>This chapter discusses the method developed for building a corpus of recordings and time-aligned transcripts, and embedding the analysis in that data. Given that the art of grammar writing has received detailed treatment in two recent volumes (Payne and Weber 2006; Ameka, Dench, and Evans 2006), this chapter focuses on the methodology we can bring to developing a grammar embedded in data in the course of language documentation, observing what methods are currently available, and how we can envisage a grammar of the future. <br>The process described here creates archival versions of the primary data while allowing the final work to include playable versions of example sentences, in keeping with the understanding that it is our professional responsibility to provide the data on which our claims are based. In this way we are shifting the authority of the analysis, which traditionally has been located only in the linguist’s work, and acknowledging that other analyses are possible using new technologies. 
The approach discussed in this chapter has three main benefits: first, it gives a linguist the means to interact instantly with digital versions of the primary data, indexed by transcripts; second, it allows readers of grammars or other analytical works to verify the claims made, through access to contextualised primary data (and not just the limited set of data usually presented in a grammar); third, it provides an archival form of the data which can be accessed by speakers of the language. The great benefit for typologists is that they can access annotated corpora from fieldwork-based research in order to address topics not considered by the original analyst. Well-structured linguistic data will allow a number of outputs to be created, including a grammatical description written as a book, but with other potential ways of visualizing the data. From the citable archival form of the data, more ephemeral multimedia or online forms suited to ‘mobilising’ the data (Nathan 2006) can be derived. For example, archival files are typically very high resolution and are accordingly very large. To deliver them via the web or on a CD or DVD attached to a grammar requires that they be converted to a lower resolution. Similarly, it is now common to create lexical databases from which a dictionary can be derived. This separation of underlying forms of the data from delivery forms is central to the methods discussed in this chapter. <br>Writing a grammar of a previously undescribed language is a major undertaking, typically the endpoint of fieldwork-based linguistic research, whose methodology has recently undergone significant changes with the introduction of new tools for digital recording, transcription and analysis. The process of recording such a language has, until recently, relied little on an empirical dataset and more on the genius of the researcher – observing and writing notes while they live in a community of speakers. 
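The separation of underlying forms from delivery forms can be illustrated with a minimal sketch: a structured lexical database from which formatted dictionary entries are generated on demand, so the presentational form can be regenerated at any time without touching the archival form. All field names, headwords and glosses below are invented placeholders, not drawn from any actual language materials.

```python
# A toy lexical database: the underlying, structured form of the data.
# Headwords and glosses are invented placeholders.
lexical_db = [
    {"headword": "pan", "pos": "v", "gloss": "placeholder gloss B"},
    {"headword": "aka", "pos": "n", "gloss": "placeholder gloss A"},
]

def format_entry(record):
    """Derive one human-readable dictionary entry from a database record."""
    return f"{record['headword']}  {record['pos']}.  '{record['gloss']}'"

# The dictionary is a derived, presentational form: regenerate it at will,
# in any layout, without ever editing the underlying database.
dictionary = [format_entry(r) for r in sorted(lexical_db, key=lambda r: r["headword"])]
```

The same underlying records could equally be rendered as a reversed finderlist or a web page; the point is that only the database is the citable source.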
The resulting work, the grammar, is a crafted collection of these observations and, more often than not, it is only fieldnotes and written texts that are recorded, with perhaps a few tapes to confirm phonological claims (cf. Dixon 2006). The innovative approach discussed in this chapter does not supersede the linguist’s role; rather, it enhances the scientific basis of the linguist’s work by providing ready access to primary data. The decisions about what to record, how to record it, and all the normal elicitation and experimental techniques that make up fieldwork today are still implemented by the linguist, but it is in the methods of recording, naming and transcribing the field materials, and in the consequent access to data, that novel outcomes can be achieved. An opportunistic corpus cannot answer all questions asked by a grammar writer and there will always be a need to elicit forms, especially for paradigms. Such forms do, of course, need to be marked as being elicited rather than naturally occurring so that their status is clear. <br>The typical grammar of the past few decades makes no reference to the source of its data, nor to any means of accessing data on the language beyond what is included in the grammatical description. For example, was the data all elicited or was it recorded and transcribed? If it was recorded, then who was it recorded with – are the speakers old or young, male or female? If texts are the source of example sentences, then where in the text does the example come from? Where is the data itself stored? <br>A sample of some thirty grammars from that period found one (Heath 1984, discussed below) that provided sufficient data to allow verification of the author’s analysis, with textual data readily available to investigate features of the grammar not addressed by the grammar writer. 
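One way to keep an example’s status explicit, as called for above, is to attach provenance metadata to every example sentence. The record below is a minimal sketch; its field names and values are assumptions for illustration, not any published metadata standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Example:
    """Provenance for one example sentence; all values here are invented."""
    text: str                 # the vernacular sentence
    source_item: str          # persistent identifier of the archived recording
    start: Optional[float]    # offset into the recording, in seconds
    end: Optional[float]      # None for forms with no recording
    speaker: str              # enough detail to answer 'who was it spoken by?'
    elicited: bool            # True = elicited, False = naturally occurring

ex = Example(
    text="(vernacular sentence)",
    source_item="EXAMPLE-ARCHIVE-ID-001",
    start=12.4,
    end=15.1,
    speaker="speaker-01",
    elicited=True,
)
```

With such a record attached, an elicited paradigm form can never silently masquerade as naturally occurring data, and every example answers the questions posed above about its source.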
None of them provided recordings and none provided links from example sentences to the context of the sentence, neither by provision of the media, nor by the more arduous path of spelled-out timecodes to media files that may be available somewhere in the world (but not collocated with the grammatical analysis). It is appropriate to focus on the past few decades because it has been possible to provide access to textual and dynamic media recordings for most of that time, but it has not been part of normal linguistic methodology to take advantage of this possibility. Thus it is not a criticism of any one grammar to observe that it does not consider the corpus on which it is built to be a relevant part of its construction, such that the whole corpus or a suitable presentation version of it should be provided to the reader. Rather, one has a sense of wonder that field linguistics as a discipline has kept going as long as it has in willful ignorance of the availability of new methods for recording, transcribing, concordancing, annotating, and presenting the data on which it bases its generalizations. These are methods with which it should be completely engaged, relishing the opportunity to access recorded textual material instantly, and to account for any small inconsistency in the data by reference to that data rather than by sidestepping one or two seemingly aberrant forms in the transcripts because of the difficulty of locating the primary media on an analog tape. Not only do these methods improve the linguist’s work, but they also create the kinds of records that speakers of the languages can reasonably expect to result from fieldwork, and that funding bodies are increasingly coming to demand of publicly funded research. </p><p><strong>2. The&nbsp;art of grammar writing in recent literature </strong></p><p>Two recent works on grammar writing summarise the state of the art, but neither considers the possibilities offered by new technologies for access to primary data. 
Thus, in a collection of work which details many aspects of grammar writing, Ameka, Dench, and Evans (2006) briefly discuss the issue of new technological methods for accessing data, but conclude only that data should be made available through a digital archive (Evans and Dench 2006: 25). Archiving data ensures its longevity; however, it is the relationship of the grammar to the data that ideally forms the basis of the analysis engaged in by the grammar writer. Archived data provides the foundation for this relationship, and archiving is a necessary but not sufficient activity to ensure both that linguistic analysis is embedded in the data, and that there can be long-term access to the data. Mosel (2006), in the same volume, hopes that every grammar would include a text collection which: </p><p>consists of annotated digitalized recordings of different language genres (e.g. myths, anecdotes, procedural texts, casual conversation, political debates and ritual speech events), accompanied by a transcription, a translation and a commentary on the content and linguistic phenomena. (Mosel 2006: 53) </p><p>New technologies provide the means for creation of digital records in the course of linguistic fieldwork and analysis, and this requires a change in linguistic methodology, as discussed below. <br>Another recent collection of papers on grammar writing (Payne and Weber 2006) makes no reference to the potential of a new kind of grammatical description interoperating with its source data via the use of new technologies, despite two chapters touching on technology in grammar writing (Weber 2006a, 2006b). Weber’s (2006b) discussion of the linguistic example likens a grammatical description to a museum of fine art, with galleries exhibiting the features of the language, so there could be a gallery of relative clauses, a gallery of noun classes and so on. In these galleries the example sentences form the exhibits. 
He points out, however, that a museum is not a warehouse – suggesting that data collection on its own, as advocated by some proponents of language documentation, simply results in a warehouse of recorded material, in contrast to a museum in which each item needs to be provided with interpretive material. <br>Similarly, I suggest, a grammatical description must be based on a corpus (the warehouse) of catalogued items (time-aligned transcripts, recordings, and so on), but use examples to illustrate given points within the description. To go further with the analogy, a museum provides a catalogue of the huge warehouse collection that underlies the few items displayed in a particular gallery. In most grammars to date, the example is the only language data provided, and there is no catalogue of the rest of the data, nor an indication of the relationship of the example to that data. As Weber notes, we must “keep in mind that some day the examples may be appreciated more than the author’s fine words giving some clever analysis or theory” (Weber 2006b: 446). In a similar vein, Dorian observes that: </p><p>The only real certainty about data is that you never have quite as much of it as you’ll someday wish you had. If you have an inconvenient or puzzling datum that you don’t know what to do with, it’s wise to put it in an extended footnote or into an appendix, where it remains accessible and at least marginally on the record, for your own future use or others’. (Dorian n.d.: 18) </p><p>How much more useful will it be when grammars of little-known languages provide not just decontextualised and unsourced examples, but annotated corpora linked to media? In this way the grammar provides a point of entry to a set of data and, it is to be hoped, allows new analyses to be made by other researchers who are able to locate unusual examples that eluded the initial recorder. 
<br>The data on which a grammatical description of a language is based are necessarily a partial set of recordings, observations and elicited forms. They are partial because we are sampling whatever we can within a short timeframe, all the more so when the work is part of a Ph.D. and so constrained to the period of a student’s candidacy. Nevertheless, the analysis of this set of data can be replicable if it is clear which elements of the data form the basis of each analytical claim. Thus, an example sentence, the archetypal ‘proof’ used in a contemporary grammar, has to bear a considerable burden; not only will it provide the authority for the current claim, but it will inevitably then be reused in other work. It is decontextualised first by the original linguist, who has some knowledge of the language and the frame from which the example was taken, but is then decontextualised further by other researchers for whom it is an example of a phenomenon, regardless of the context from which it was taken. These example sentences then take on a life of their own as tokens of authenticity of a particular theoretical point, a problem that could be ameliorated if the example were properly provenanced to source data in a digital repository. <br>Why is there this lack of engagement with the possibilities offered by new technologies for grammatical descriptions? A major reason is the lack of very simple-to-use tools and hence the slow uptake of existing tools among the community of researchers. I suggest that several other reasons conspire to keep data out of grammatical descriptions. 
They include: inertia, the reluctance of academia to change the way it has been conducting itself; a so-called ‘theoretical linguistics’ that is satisfied with minimal data selected to prove foregone conclusions and which hence requires no large datasets (see Beaugrande 2002); a perception that the creation of a corpus on which to base claims is too time consuming and therefore cannot be part of a normal fieldwork-based linguistic investigation; a general antipathy to technology among humanities researchers (but linguists have often been more technically engaged than the typical humanities scholar); a fear of presenting evidence on which claims are made and hence exposing one’s analysis to scrutiny; and, related to the last point, a desire to be the unquestioned authority whose word must be accepted in the absence of any corroborating evidence. The recent increasing interest in the use of corpora (witness the use of the World Wide Web as a corpus from which Natural Language Processing applications opportunistically harvest material with commercial implications) and in language documentation methodology (see Himmelmann 1998, several contributions in Gippert, Himmelmann and Mosel 2006, EMELD conferences 2001–2006,<sup style="top: -0.38em;">2 </sup>Austin (ed.) 2003, 2004, 2005, 2006, Woodbury 2003), has led to an opening up of discussion of a new analytical approach for field linguistics. </p><p><strong>3. What&nbsp;could an embedded grammar be? </strong></p><p>How could a grammar embedded in data be conceived? Grammars can be characterized as a range of types, beginning with those for which the source data is not mentioned through to a complex dataset in which the grammatical description forms just a part of the whole representation of the language. An ideal embedded grammar would allow the reader to move between the apparatus of the grammar and the source data, using either as a point of entry. 
So, from watching a video of the performance of a story and seeing its orthographic representation on the screen, the reader may want to find out about the meaning of a particular word. They will be able to link to and view the dictionary entry. They may want to know about the role of a participant and link to a discussion of arguments, or of case roles, or of the pronominal system, depending on the context of the departure point in the text. <br>Constructing such links is hugely time consuming, but allowing them to fall out naturally from a well-structured set of data is a goal that is worth pursuing. However, such data structures are not yet developed, so there are no tools made easily available to the ordinary linguist. In the meantime, we operate on the assumption that the more explicitly we can structure the data we create, the more likely it is to survive and be interoperable with emerging systems. Thus the basic desiderata for a grammar embedded in data must also conform to the principles of portability set out in Bird and Simons (2003). The process of creating such data will be discussed below. </p><p>At the more recognizable end of the spectrum of grammars, those that have been produced more recently usually include a number of example sentences. Examples are the main currency of a descriptive grammar and the usual form of argumentation is to provide some analysis of a phenomenon and then the example that illustrates its existence and its usage. However, if examples are not provenanced and their status not made clear to the reader (Is the sentence elicited? Was it part of a larger text? Who was it spoken by?) then they provide poor data for others who may want to test the analysis. 
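As a sketch of how links can fall out naturally from explicitly structured data, the toy fragment below keys each transcript token to a lexicon identifier, so the dictionary entry for any word in a running text can be resolved mechanically rather than hand-linked. Every identifier, form and gloss here is hypothetical.

```python
# Toy lexicon, keyed by stable identifiers (all forms invented).
lexicon = {
    "lex001": "aka, n. 'placeholder gloss'",
}

# Each token in the transcript carries the identifier of its lexeme,
# so the text-to-dictionary link is data, not hand-made markup.
transcript = [
    {"token": "aka", "lex_id": "lex001"},
]

def entry_for(token_record):
    """Resolve a token in a running text to its dictionary entry, if any."""
    return lexicon.get(token_record["lex_id"])
```

The same identifiers could equally drive links from a token to a discussion of case roles or the pronominal system; the structure, not the tooling, is what makes the links cheap.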
<br>A first step towards an embedded grammar could be to provide links from the grammar to sources of data, establishing that examples exist in a corpus of texts, as shown by Heath in his Nunggubuyu Grammar (1984), Dictionary (1982) and Texts (1980). Cross-references in these volumes, constructed by hand (that is, without using a computer to generate links in the way that we now can), allow the reader to locate contextual information for example sentences. Heath notes that: </p><p>The standards of accuracy and documentation which I have set for myself in preparing this volume have been high, though I may not have lived up to them uniformly. In essence this is a corpus-based grammar, and my ideal has been to account for all or nearly all instances in the texts of each morpheme or other feature under consideration. (Heath 1984: 4) </p><p>Heath accordingly presents references to many examples of any morpheme discussed in his grammar, with the result that, as noted by Musgrave (2005: 113), the complexity of referencing in Heath’s typescript presentation makes it difficult to read, with a dense use of superscripts and visual references that could today be replaced by less obtrusive hyperlinks. Heath (1984: 5) discusses the need for documentation because of his own “sad experiences as a reader of other linguists’ grammars, which have almost never provided me with the information I wanted to undertake my own (re)analysis of the language in question.” Heath’s work cited, but did not provide, taped recordings; an early example of the provision of textual material with a recording is Brandenstein’s (1970) text collection, which included a 45 rpm vinyl disk containing the stories which were provided in interlinear format in the book. He clearly thought it essential for a modern comprehensive linguistic work that a record should be attached to each copy of the book (Thieberger 2008: 327). 
Last century, when we were dealing with analog data, recordings could be linked from texts, but doing so took far more effort than is required for digital recordings today. A further, but unpublished, example of textual material accompanying a description is Nicholas Reid’s (1990) dissertation on Ngan’gityemerri, which included four audiocassettes. <br>Citation of data from primary media requires that the media have persistence, both in location and naming, so that when the speakers want access to recordings they can find them, and when a researcher wants to listen to an example sentence to find out if there is a feature there that the original work ignored, they can do that. However, such persistence is difficult, if not impossible, for the individual researcher to achieve (hence the need for linguistic digital repositories such as the DOBES archive,<sup style="top: -0.38em;">3 </sup>PARADISEC,<sup style="top: -0.38em;">4 </sup>or AILLA).<sup style="top: -0.38em;">5 </sup>The researcher may also create a website or a multimedia package in which the media is available for public use. Websites created to provide access to primary data typically do not have longevity, and issues related to delivery of media over the web mean that high-resolution primary data is not suited to web-delivery. Multimedia packages as they are currently being made have a very short life, usually relying on software that is updated every year or so, leaving orphaned earlier versions unreadable. Musgrave (2005) presents three case studies of multimedia representations of language,<sup style="top: -0.38em;">6 </sup>each of which includes little grammatical information, concentrating instead on texts linked to lexicons and media files. The presentation of linked texts, media and lexica is to be welcomed; however, it seems that, of the three case studies, it is only for Nahuatl (Amith n.d.) that the underlying data is produced in an archival form. 
This is important when creating multimedia products, as otherwise there is a risk that the only available data for the language will be in a form that cannot easily be archived and thus may not be available in the long term. From the perspective of writing a grammar embedded in data and conforming to the principles of portability (mentioned earlier), these three projects are not descriptions presenting primary data but are <em>representations </em>of the data, an important distinction that must be understood in order to create well-formed primary data in an archive, with derived versions being used in multimedia representations. <br>Morey’s (2004) grammar of Tai includes a CD-ROM which contains the text of the grammar, with example sentences linked to audio files and to details about the characteristics of the speaker. In this work the links are all produced by hand and each link is to a single media file (each target utterance corresponds to a single file). Morey, in a postscript to the work (and echoing Weber’s sentiments quoted above) notes that, “in a hundred years time, though aspects of my analysis may not have stood the test of time, I am confident that the richness of the corpus of texts will ensure that this work is still useful” (2004: 402). <br>With similar sentiments, I built a corpus of South Efate texts on which my analysis (Thieberger 2006a) of the language was based.<sup style="top: -0.38em;">7 </sup>The field recordings were not segmented into utterance-length files as a matter of principle – if they were providing contextualised data for my analysis, it would defeat the purpose to cut them into potentially decontextualized units, and, from an archival point of view, there should not be too many objects (files) to keep track of and to describe. 
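The alternative to cutting recordings into utterance-length files can be sketched as a citation structure: an example points at a persistent identifier plus start and end offsets, and a delivery tool is asked to render just that span from the (derived) media file. The identifier and times below are invented for illustration.

```python
def cite(item_id, start, end):
    """Cite a span of an unsegmented recording by persistent identifier
    and offsets in seconds, leaving the archival file itself whole."""
    if end <= start:
        raise ValueError("citation must span a positive interval")
    return {"item": item_id, "start": start, "end": end}

# One example sentence cited into a long archival recording:
c = cite("EXAMPLE-ARCHIVE-ID-001", 132.7, 138.2)
# A player would be asked to play c["item"] from c["start"] to c["end"];
# the citation stays resolvable for as long as the identifier persists.
```

Because the citation names the archival object rather than a cut-down clip, the same reference works whether the reader is handed a DVD of compressed derivatives or resolves the identifier at the archive.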
The focus was to develop a means by which a collection of transcripts could maintain links back to the media they transcribed, allowing any subsequent mentions of parts of the texts (as in example sentences) to be cited to that media file. This then is a second step toward an embedded grammar: using a media corpus as the source, and providing the media files with names that will endure over time (also known as <em>persistent identification</em>), as provided, for example, by their being lodged in an archive. This means that any reference made to the media files will be resolvable by readers of the grammar into the future, as per Morey’s and Weber’s desiderata quoted earlier. The paper grammar, in the form of a book, has references to citable data, and may include a DVD of data (that is, an mp3 or similarly compressed form of the file suitable for delivery) derived from the archival form that is stored in an archive with a persistent identifier (a name that will not change). <br>What was required to create a corpus with examples cited to the level of words or sentences? To summarise, in my work with South Efate I created archival digital media files, paying attention to accepted standards in filenaming (in order that files have persistent identification and thus persistent location over time) and file formats (so that the files can be read over time), and depositing these files with a digital repository (with sufficient metadata to describe the content of the file). These files were then transcribed with time-alignment using stand-off timecoding (the process is discussed in more detail in Thieberger 2004, 2006b), allowing links to be instantiated between any textual chunk and its media representation. Any sentence in some twenty hours of data can be clicked and heard, and accessed by a concordance (a listing of each word in the data in context). 
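A concordance over time-aligned transcripts can be sketched in a few lines: each hit carries the timecodes needed to play the original audio. The segment structure below is an assumption modelled on stand-off timecoding, and the text and identifiers are invented.

```python
# Time-aligned transcript segments: stand-off timecodes point into the
# archival media item rather than cutting it up (all content invented).
segments = [
    {"item": "REC-001", "start": 0.0, "end": 2.5, "text": "the dog barked"},
    {"item": "REC-001", "start": 2.5, "end": 5.0, "text": "the cat slept"},
]

def concordance(word, segments):
    """List every segment containing the word, with its media reference,
    so a reader can move from any word straight to the recording."""
    return [
        (seg["text"], seg["item"], seg["start"], seg["end"])
        for seg in segments
        if word in seg["text"].split()
    ]
```

Every concordance line is thus simultaneously an example sentence and a playable citation, which is exactly the property an embedded grammar needs its examples to have.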
While the software for transcribing with time-alignment has been available since the late 1980s (see MacWhinney 1996), it was necessary to devise a method to access the audio and video data as a corpus in order to facilitate its analysis, as no similar approach had been developed when it was needed for this research. Happily this is no longer the case and tools are becoming easier to use and more widely adopted by lin- </p>
