Journal of the Text Encoding Initiative, Issue 1


Journal of the Text Encoding Initiative
Issue 1 | June 2011
Selected Papers from the 2008 and 2009 TEI Conferences
Kevin Hawkins, Malte Rehbein and Syd Bauman (dir.)

Electronic version
URL: http://journals.openedition.org/jtei/125
DOI: 10.4000/jtei.125
ISSN: 2162-5603

Publisher: TEI Consortium

Electronic reference
Kevin Hawkins, Malte Rehbein and Syd Bauman (dir.), Journal of the Text Encoding Initiative, Issue 1 | June 2011, “Selected Papers from the 2008 and 2009 TEI Conferences” [Online], online since 01 June 2011, connection on 22 May 2020. URL: http://journals.openedition.org/jtei/125; DOI: https://doi.org/10.4000/jtei.125

This text was automatically generated on 22 May 2020.
© TEI Consortium 2011 (Creative Commons Attribution-NoDerivs 3.0 Unported License)

TABLE OF CONTENTS

Editorial Introduction to the First Issue
Susan Schreibman

Guest Editors’ Note
Malte Rehbein and Kevin Hawkins

Computational Work with Very Large Text Collections: Interoperability, Sustainability, and the TEI
John Unsworth

Knowledge Representation and Digital Scholarly Editions in Theory and Practice
Tanya Clement

A TEI-based Approach to Standardising Spoken Language Transcription
Thomas Schmidt

“The Apex of Hipster XML GeekDOM”: TEI-encoded Dylan and Understanding the Scope of an Evolving Community of Practice
Lynne Siemens, Ray Siemens, Hefeng (Eddie) Wen, Cara Leitch, Dot Porter, Liam Sherriff, Karin Armstrong and Melanie Chernyk

Editorial Introduction to the First Issue

Susan Schreibman

1 On behalf of the Board of the Text Encoding Initiative and my co-editors, Markus Flatscher and Kevin Hawkins, I am delighted to announce the publication of the inaugural issue of the Journal of the Text Encoding Initiative.

2 This Journal has been nearly three years in the making.
It was a natural outgrowth of the expansion of the yearly members’ meeting into a conference format (beginning in 2007 with the University of Maryland meeting) that regularly attracts over 100 participants. The TEI Board felt that a dedicated journal would be the ideal vehicle to build on the success of the conference as well as to capture the diverse scholarly interests of an ever more vibrant TEI user community. The following year, at the London meeting, a committee (consisting of myself, Gabriel Bodard, Lou Burnard, Julianne Nyhan, and Laurent Romary) was formed to explore the best way to achieve this goal. At the 2009 meeting in Ann Arbor, a full proposal was presented to the Board. It was adopted unanimously. Thus the Journal of the Text Encoding Initiative was born.

3 The committee decided that the journal should be published as an open-access online journal with TEI as the underlying data format. Moreover, the committee felt that we should, if at all possible, avoid developing a new publication system. After investigating several platforms, Revues.org, with its TEI-native publishing platform, was recommended to host the journal. It was also decided that we would endeavour to publish two issues a year: the autumn issue consisting of a selection of articles arising from the previous conference and the spring issue focusing on a topic of relevance to the TEI community.

4 This inaugural issue consists of papers given at the London and Ann Arbor conferences. An introduction by two of the guest editors, Malte Rehbein and Kevin Hawkins, demonstrates just how wide-ranging and diverse the interests of the community have become; these articles truly represent the broad tent that is TEI scholarship today.

5 This inaugural issue owes much to many people.
Thanks are due first to the TEI Board, where the idea originated, for supporting it so wholeheartedly, and particularly to Dan O’Donnell, previous chair of the TEI Consortium, for his unfailing support. Thanks are also due to the committee that drew up the parameters of the Journal, as well as to Revues.org for agreeing to host the Journal and for providing much aid-in-kind in its production. The editors of the Journal of the Text Encoding Initiative have benefitted immensely from the French team’s experience and expertise as well as their enthusiasm for the TEI.

6 Most of all, thanks are due to my co-editors, Markus Flatscher (Technical Editor) and Kevin Hawkins (Managing Editor), who have so generously given their time and expertise to make this project happen. No detail, large or small, has been beyond their notice. Their professionalism, attention to detail, and good humour have made seeing this issue into “print” a real pleasure. I also thank the guest editors of this first issue, who admirably and with good humour suffered through our teething process as we put our workflows in place while at the same time going into production.

7 At the 2008 London meeting the TEI celebrated its 21st birthday (a traditional rite of passage in the UK and Ireland). Another rite of passage for the TEI community is this Journal, marking one of the many milestones in the TEI’s becoming not only “a mature organization,” as Council Chair Laurent Romary would say, but a flourishing academic and intellectual community.
8 Susan Schreibman
Editor-in-Chief
Trinity College Dublin

AUTHOR

SUSAN SCHREIBMAN
[email protected]
Trinity College Dublin

Guest Editors’ Note

Malte Rehbein and Kevin Hawkins

1 With this inaugural issue of the Journal of the Text Encoding Initiative, we are happy to present selected papers from the TEI Conference and Members’ Meetings held in 2008 and 2009.

2 In 2007, the TEI Consortium expanded its members’ meetings to a full conference format. At the 2008 and 2009 conferences there was great variety in the topics presented and discussed among approximately 100 participants from around the world at each event, reflecting the broad range of the TEI community. While a single issue of a scholarly journal can document only a selection of the papers, posters, and demonstrations from these conferences, we believe that this selection illustrates the broad community that the TEI now represents.

3 In his contribution “Computational Work with Very Large Text Collections: Interoperability, Sustainability, and the TEI,” John Unsworth, one of the keynote speakers for the 2009 conference, directly addresses that year’s theme: text encoding in the era of mass digitization. He discusses how the “I” of “TEI” stands for both “Initiative” and “Interchange,” yet argues that we need to move towards “Interoperability” as well. Analyzing large-scale digitization enterprises, Unsworth sums up with a plea for greater engagement of the TEI in the development of the Semantic Web.

4 While Unsworth is interested in a role for the TEI in large text collections, Tanya Clement’s article approaches the TEI from the opposite perspective: that of a scholarly edition of a small-scale corpus of texts.
In “Knowledge Representation and Digital Scholarly Editions in Theory and Practice,” she discusses the textual features and variations of a modern manuscript, using selected poems by the Baroness Elsa von Freytag-Loringhoven, a German-born Dadaist artist and poet, as a case study. Clement’s article argues for new frameworks of knowledge representation and scholarly editing to theorize the way TEI encoding and the Guidelines are used.

5 Thomas Schmidt’s article can be seen as a bridge between Unsworth’s and Clement’s approaches. “A TEI-based Approach to Standardizing Spoken Language Transcription” discusses both interoperability and scholarly practice in using the TEI Guidelines to formulate a standard for the transcription of corpora of spoken languages. Schmidt’s “route to standardization” combines conformance to existing principles and conventions on the one hand with interoperable encoding based on the Guidelines on the other. Schmidt also adds a third dimension to his argument and sums up with a discussion of tool development and transformation workflows.

6 One might argue that for such an endeavor to succeed, a community needs to agree on shared standards, approaches, and tools to facilitate interoperability. “‘The Apex of Hipster XML GeekDOM’: TEI-Encoded Dylan and Understanding the Scope of an Evolving Community of Practice,” co-authored by Lynne Siemens, Ray Siemens, and Hefeng Wen, describes the Text Encoding Initiative as what it is at its core: a community of practice. The viral marketing experiment described in their article not only gives insight into the diversity of TEI practitioners and practice but also illustrates the TEI’s engagement and potential engagement with new communities.

7 Enjoy!
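To give a concrete sense of the kind of TEI-conformant speech transcription discussed in Schmidt’s article, here is a minimal sketch (an illustration only, not Schmidt’s actual scheme) encoding a tiny exchange with the TEI `<u>` (utterance) and `<pause/>` elements from the Guidelines’ module for transcriptions of speech, read back with Python’s standard library. The speaker IDs and dialogue are invented.

```python
import xml.etree.ElementTree as ET

# Hypothetical TEI fragment for a spoken-language transcription, using the
# <u> (utterance) and <pause/> elements from the TEI Guidelines' module for
# transcriptions of speech. Speaker IDs and dialogue are invented.
TEI_NS = "http://www.tei-c.org/ns/1.0"
fragment = f"""
<body xmlns="{TEI_NS}">
  <u who="#spk1">so you went to the meeting</u>
  <pause dur="PT2S"/>
  <u who="#spk2">yes <pause/> well, part of it</u>
</body>
"""

root = ET.fromstring(fragment)
# Collect (speaker, text) pairs, flattening the text around inline <pause/> marks.
utterances = [
    (u.get("who"), " ".join("".join(u.itertext()).split()))
    for u in root.iter(f"{{{TEI_NS}}}u")
]
for who, text in utterances:
    print(who, text)
```

Because the markup separates speaker attribution and timing from the transcribed words, the same encoded corpus can feed both linguistic analysis tools and human-readable renderings, which is the interoperability argument at stake in this issue.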
AUTHORS

MALTE REHBEIN
[email protected]
Julius-Maximilians-Universität Würzburg, Germany

KEVIN HAWKINS
[email protected]
University of Michigan, Ann Arbor

Computational Work with Very Large Text Collections: Interoperability, Sustainability, and the TEI

John Unsworth

1 The “I” in TEI sometimes stands for interchange, but it never stands for interoperability. Interchange is the activity of reciprocating or exchanging, especially with respect to information (according to WordNet), or, if you prefer the Oxford English Dictionary, it is “the act of exchanging reciprocally; giving and receiving with reciprocity.” It’s an old word, its existence attested as early as 1548. Interoperability is a much newer word with what appears to be