Manual Archivista 2009/I


© 18th January 2009 by Archivista GmbH, CH-8118 Pfaffhausen
Web pages: www.archivista.ch

Contents

I  Introduction  8

1  Introduction  9
1.1  Welcome to Archivista  9
1.2  Notes on the manual  9
1.3  Our address  9
1.4  Previous versions  9
1.5  Licensing  12

2  First Steps  18
2.1  Introduction  18
2.2  The digital archive  18
2.3  Database, server and client  18
2.4  Tables, records and fields  19
2.5  Archivista and working method  19
2.6  Tips for archiving  20
2.7  Archive, pages and documents  20
2.8  The Archivista document  20

3  Installation  22
3.1  ArchivistaBox  22
3.2  Virtual (Box)  22
3.3  OpenSource (Box)  22
3.4  OpenSource (Windows)  22

II  Tutorials  25

4  Introduction  26
4.1  How to use the online tutorials  26
4.2  Installation ArchivistaBox  26
4.3  Update an ArchivistaBox  26
4.4  Accessing the manual  26
4.5  Login WebClient  26
4.6  Scanning and entering keywords  26
4.7  Rotating pages  26
4.8  Title search  27
4.9  Full text search  27
4.10  Login WebAdmin  27
4.11  Adding users  27
4.12  Adding/deleting fields  27
4.13  Editing the input/search mask  27
4.14  Activating SSH  27
4.15  Activating VNC  27
4.16  Enabling print server (CUPS)  28
4.17  Password, Unlock & Restart OCR  28
4.18  Activating HTTPS  28

5  Tutorial RichClient  29
5.1  Archivista in 90 Seconds  29
5.2  Adding Documents  29
5.3  Search  29
5.4  Extended Functions  30
5.5  Users & Fields  31
5.6  Databases, fields and barcodes  31

III  ArchivistaBox  33

6  Introduction  34
6.1  Advantages  34
6.2  DOLDER model  34
6.3  RIGI and SAENTIS model  34
6.4  PILATUS model  35
6.5  TITLIS model  35
6.6  EIGER model  36
6.7  MYTHEN model  36
6.8  ROTHORN model  36

7  Initial start-up  37
7.1  Introduction  37
7.2  DOLDER, RIGI and SAENTIS  37
7.3  PILATUS  37
7.4  TITLIS and EIGER  37
7.5  Switching it on  38

8  System settings  42
8.1  Archivista WebClient (F10)  42
8.2  Archivista WebAdmin (Alt+F9)  42
8.3  Archivista WebConfig  42
8.4  View manuals  42
8.5  Archiving & OCR  43
8.6  Archivista modules  46
8.7  ArchivistaERP  49
8.8  Backup  49
8.9  Encryption  51
8.10  Print server  54
8.11  FTP Server  56
8.12  Mail server  59
8.13  Database  60
8.14  System  64
8.15  Remote Access  73
8.16  Exit (Alt+F4)  73
8.17  Function keys  73

IV  WebClient  75

9  User Manual WebClient 2009/I  76
9.1  Login  76
9.2  Views  76

10  Navigation  78
10.1  Main view  78
10.2  Sorting  78
10.3  Page view  79

11  View, Search and Edit  80
11.1  View (Ctrl+F5)  80
11.2  Search  80
11.3  Edit mode  82

12  Extended functions  86
12.1  Uploading a file  86
12.2  Creating and scanning documents  86
12.3  Editing and deleting pages  87
12.4  Download  87
12.5  Printing pages  88
12.6  Barcode processing  88

V  WebAdmin  89

13  Login and Logout  90
13.1  Login  90
13.2  Logout  90

14  User  91
14.1  Administration (in general)  91
14.2  User administration (external)  94

15  Field definition  97
15.1  Field name  97
15.2  Field type  97
15.3  Length  97
15.4  Position after  97

16  Mask definition  98
16.1  Mask  98
16.2  Field name  98
16.3  Field type  98
16.4  Link with field  101
16.5  Label  101
16.6  Position  101
16.7  Width  101
16.8  User(s) allowed new entries  101
16.9  User(s) allowed changes  102

17  Archive administration  103
17.1  The first options  103
17.2  Further options  105

18  Scan definitions  108
18.1  Introduction  108
18.2  General settings  108
18.3  Settings  108
18.4  Post editing  110

19  Barcodes (Archivista Box)  113
19.1  General  113
19.2  Barcode technology  113
19.3  Barcode entry  114
19.4  Barcode recognition  120
19.5  Barcode processing  121

20  OCR definitions  123
20.1  Name of OCR page definition  123
20.2  Languages of definition  123
20.3  Text quality of pages  123
20.4  Further preparatory settings  123
20.5  Options for table recognition  123

21  SQL definitions  125

22  Form Recognition (ArchivistaBox)  126
22.1  Introduction  126
22.2  Managing forms  126
22.3  Editing objects of currently active definition  127
22.4  Logo recognition  130
22.5  Summary  132

23  Exporting documents  133
23.1  Introduction  133
23.2  Exporting files in WebAdmin  133
23.3  Exporting files in the WebClient  133
23.4  Exporting files in the RichClient  133

24  Mail archiving  134
24.1  Introduction  134
24.2  Individual definitions  134
24.3  Editing a definition  134
24.4  Possible problems during setup  135

25  Database creation  136
25.1  Option 'create'  136
25.2  Option 'drop'  136

VI  WebConfig  137

26  Administration with WebConfig  138
26.1  Introduction  138
26.2  Login  138
26.3  Current Settings  138
26.4  Setup scan button  139
26.5  Backup  139
26.6  Services  140
26.7  Unlock documents  140
26.8  Passwords ArchivistaBox  140
26.9  Viewing log files  141
26.10  Text Recognition (OCR)  141
26.11  Turn off ArchivistaBox  141

VII  WebERP  142

27  ArchivistaERP  143
27.1  Introduction  143
27.2  Activate ERP system  143
27.3  First Step  ...

29.6  Using the right mouse button  157
29.7  Special fields  157
29.8  Additional fields in table 'Archive'  157

30  Menu Database  158
30.1  Printing (Ctrl+P)  158
30.2  Working with databases  158
30.3  Importing and exporting documents  159
30.4  Hyperlinks (links between records)  160