Efficient Data Searching and Retrieval Using Block Level Deduplication


1 SUPRIYA MORE, 2 KAILAS DEVADKAR
1 Department of Computer Engineering, Sardar Patel Institute of Technology, Mumbai, India.
2 Deputy Head of Department, Sardar Patel Institute of Technology, Mumbai, India.
E-mail: [email protected], [email protected]

Abstract - We live in the era of the digital world, and the amount of data is growing tremendously. This growth rate imposes many challenges on storage systems. Deduplication is an emerging technology that not only reduces storage space but also eliminates redundant data. This paper proposes an efficient deduplication method for distributed systems that uses block-level chunking to eliminate redundant data. A traditional deduplication system keeps only one global index table for storing unique hash values; in our system there is a separate index table for each distributed data server, which reduces searching and retrieval time, and each incoming block is checked for duplicates in all of the index tables in parallel. Comparative results against a traditional deduplication system show that our system improves processing time and storage utilization.

Keywords - Data Deduplication, Index Management, Distributed System, Storage.

I. INTRODUCTION

Nowadays the increasing data growth rate puts huge pressure on storage systems. It is very important to use storage effectively in order to store large amounts of data in minimum space. Research has shown that almost fifty percent of data exists in duplicate form [4], so there is no reason to waste storage memory just to keep duplicate copies. Storage is expensive not only for enterprise organizations but also for basic home users, and with new technologies such as the Internet of Things and cloud computing, millions of data items are generated over the network every second. Most of this data is dynamic in nature, i.e. it is repeatedly changed or modified by users. Deduplication is a solution to this problem: it is a technique that effectively eliminates duplicate data and stores only the unique, original data.

There are different methods of deduplication depending on the chunking type, i.e. file-level chunking and block-level chunking [5]. When a user uploads a file for backup, the first stage is to generate a hash value for that file; the generated hash is then compared with the hash values already stored in the index table. If a match is found, the same data already exists in storage, so the duplicate is discarded and only a reference pointer to the matched data is kept. In this way duplicate data is eliminated. In file-level chunking the entire file is treated as a single chunk, so only one hash is generated per file. In block-level chunking each file is divided into fixed-size blocks and a hash value is generated for each block. A file may contain redundant data within itself; file-level deduplication fails to eliminate such duplication, but block-level deduplication handles it easily. We therefore use block-level deduplication in our proposed system.
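As a minimal illustration of this chunking and fingerprinting step, the Python sketch below splits a file into fixed-size blocks and computes an MD5 fingerprint for each block. The 4 KB block size and the function names are assumptions made for the example; the paper does not fix these details.

```python
import hashlib
from typing import Iterator, Tuple

BLOCK_SIZE = 4096  # assumed fixed block size; the paper does not specify one


def chunk_file(path: str, block_size: int = BLOCK_SIZE) -> Iterator[bytes]:
    """Yield fixed-size blocks of the file (the last block may be shorter)."""
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            yield block


def fingerprint_blocks(path: str) -> Iterator[Tuple[str, bytes]]:
    """Yield (md5_fingerprint, block) pairs for every block of the file."""
    for block in chunk_file(path):
        yield hashlib.md5(block).hexdigest(), block
```

Two files that share only part of their content then produce some identical block fingerprints, which is exactly the redundancy that a single file-level hash cannot detect.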
Data searching and retrieval is one of the important operations that affects the whole deduplication system: overall performance depends on the searching time required to find a matching index entry [1]. It is very challenging to build reliable index management for cluster deduplication, since the number of block index entries increases as the amount of data grows. In the traditional deduplication technique, one global index table is maintained for storing the unique hash values, i.e. the unique fingerprints. Frequent queries burden this single index table, and the resulting overhead slows down the whole searching process. Another problem is that the complete data block has to be transferred to the data server, even if it is a duplicate, before deduplication is performed, which increases the network bandwidth overhead.

In this paper we propose a system that performs parallel deduplication with improved processing time. It contains one metadata server that holds a separate index table for each data server in the cluster, and all the data are backed up across those data servers in a distributed manner. When a new data block arrives for backup, it is searched for in every index table in parallel. The deduplication work is split: hashing and blocking are done at the application server, while matching is done at the metadata server. Only unique blocks are transferred to the data servers for storage, which reduces network bandwidth. Blocks are placed according to the number of hash entries in each index table of the metadata server: a data block is stored at the data node whose index table has the fewest entries, which achieves load balancing.
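The following is a minimal sketch of that lookup and placement logic, assuming the per-node index tables are simple in-memory fingerprint maps held by the metadata server; the class and method names are illustrative and not the authors' implementation.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Dict, List, Optional


class MetadataServer:
    """Keeps one index table (fingerprint -> block location) per data node."""

    def __init__(self, node_ids: List[str]):
        self.index_tables: Dict[str, Dict[str, str]] = {n: {} for n in node_ids}
        self.pool = ThreadPoolExecutor(max_workers=len(node_ids))

    def find_duplicate(self, fingerprint: str) -> Optional[str]:
        """Search every node's index table in parallel; return the id of a node
        that already stores the block, or None if the block is unique."""
        futures = {
            self.pool.submit(lambda t=table: fingerprint in t): node
            for node, table in self.index_tables.items()
        }
        hits = [node for fut, node in futures.items() if fut.result()]
        return hits[0] if hits else None

    def pick_target_node(self) -> str:
        """Load balancing: choose the data node with the fewest index entries."""
        return min(self.index_tables, key=lambda n: len(self.index_tables[n]))

    def register_block(self, node_id: str, fingerprint: str, location: str) -> None:
        """Record a newly stored unique block in the owning node's index table."""
        self.index_tables[node_id][fingerprint] = location
```

In this sketch the application server would call find_duplicate for each fingerprint produced during chunking; only when it returns None is the block transferred to the node returned by pick_target_node, after which register_block records the new entry.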
The rest of the paper is organized as follows: Section 2 describes related work on deduplication systems, Section 3 gives the methodology and system architecture of the proposed system, Section 4 covers the evaluation and results using different test cases, and finally Section 5 concludes the paper.

II. RELATED WORK

Q. Liu et al. [1] derive Halodedu, a more scalable Hadoop-based deduplication system. They perform parallel deduplication using MapReduce and HDFS. Each data node has a separate local database to store hash identifiers, which keeps index-table performance manageable and effectively increases the speed of fingerprint searching. In MapReduce they use only the Map stage to decrease the processing time of the system, and HBase is used at the metadata server.

N. Kumar et al. [2] introduce a bucket-based deduplication method to achieve reliable data deduplication. The input data file is divided into fixed-size chunks, and the MD5 algorithm is used to generate a unique hash identifier for each chunk. These hash values are compared, using MapReduce, with the hash values stored in a bucket to check whether a block is a duplicate; if a match is found, the block is considered a duplicate and can be discarded.

J. Yong et al. [3] propose a cloud storage system, Dedu, built on HDFS and HBase, where HBase provides faster search efficiency. Dedu manages duplicate data with deduplication as the front-end application and a mass cloud storage system as the back end; HDFS is used for mass storage and VMware for cloud simulation.

A. Venish and K. Siva Sankar [4] and R. Vikraman [5] cover different algorithms for data deduplication and compare different chunking algorithms. Manogar [6] examines and compares different data deduplication methods and concludes that variable-size deduplication is better than the other deduplication techniques. R.-S. Chang et al. [7] propose a deduplication decision system with two thresholds that splits data into cold data and hot data according to low and high access frequency; they propose a dynamic deduplication decision to improve the storage utilization of the data nodes, using HDFS as the file system, and the approach can be seen as a proper deduplication strategy for efficient storage utilization under limited storage. Yunfeng Zhu [8] examines the load-balance problem in distributed deduplication storage systems; by balancing the load across the data nodes, effective and reliable deduplication can be achieved, although chunking significantly slows down the deduplication process and affects the performance of retrieval operations. Bhaskar et al. [9] focus on deduplication methods, analyzing chunk-level and file-level deduplication. Amanpreet Kaur and Sonia Sharma [10] analyze deduplication for cloud-based systems and include the methods that are used to achieve cost-effective storage and effective bandwidth usage through deduplication.

III. PROPOSED METHODOLOGY

Data searching and retrieval is one of the important operations that affects the whole deduplication system; the overall performance depends on the searching time required to find a matching index entry [1]. In our proposed system, to eliminate the issue of the global index table, we create an index table for each node, which in turn results in parallel deduplication and faster data retrieval operations.

Based on the survey and the issues identified, the proposed deduplication strategy is as follows (a sketch of the data-node side of step 5 is given after the algorithm below):
1) The client registers, logs in, and uploads the input file that is to be backed up.
2) Chunking phase: the input file is divided into fixed-length chunks.
3) After chunking, a hash value is calculated for each data block using the MD5 hash algorithm. The hash is then searched in the metadata server, in parallel across the per-data-node index tables, to check whether the block already exists on any data node.
4) If a match is found, only a reference pointer is sent to the respective data node's index table; otherwise the whole data block, together with its hash identifier, is sent to the data node in encrypted format.
5) On the data node side, the node decrypts the data block, stores the hash identifier in its index table, and stores the data block. It regularly sends metadata to the metadata server.

Figure 1: Flow diagram of proposed methodology

ALGORITHM
A. Deduplication (file f)
B. Split the file f into fixed size blocks

The data node receives each unique block and its respective hash identifier from the application server and stores them; metadata for each block is transferred regularly to the metadata server.
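Step 5 above, together with the data-node behaviour described after the algorithm, can be sketched as follows. The paper does not name the cipher used for the encrypted transfer, so decryption is passed in as a callable, and all class, method, and field names here are assumptions made for illustration.

```python
import os
from typing import Callable, Dict, List


class DataNode:
    """Data-node side of the proposed system: decrypt, index, and store blocks."""

    def __init__(self, node_id: str, storage_dir: str,
                 decrypt: Callable[[bytes], bytes]):
        self.node_id = node_id
        self.storage_dir = storage_dir
        self.decrypt = decrypt                     # cipher is not specified in the paper
        self.index_table: Dict[str, str] = {}      # fingerprint -> stored block path
        self.pending_metadata: List[dict] = []     # metadata awaiting the metadata server
        os.makedirs(storage_dir, exist_ok=True)

    def store_block(self, fingerprint: str, encrypted_block: bytes) -> str:
        """Decrypt the incoming block, store it, and record its hash identifier."""
        block = self.decrypt(encrypted_block)
        path = os.path.join(self.storage_dir, fingerprint)
        with open(path, "wb") as f:
            f.write(block)
        self.index_table[fingerprint] = path
        self.pending_metadata.append(
            {"node": self.node_id, "fingerprint": fingerprint, "size": len(block)}
        )
        return path

    def flush_metadata(self) -> List[dict]:
        """Return and clear the metadata that is sent regularly to the metadata server."""
        batch, self.pending_metadata = self.pending_metadata, []
        return batch
```

A periodic call to flush_metadata would mirror the regular metadata reports to the metadata server described in step 5.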