Adaptive Plane Projection for Video-Based Point Cloud Coding


Eurico Manuel Rodrigues Lopes

Thesis to obtain the Master of Science Degree in Electrical and Computer Engineering

Supervisors: Prof. Dr. Fernando Manuel Bernardo Pereira and Prof. Dr. João Miguel Duarte Ascenso

Examination Committee
Chairperson: Prof. Dr. José Eduardo Charters Ribeiro da Cunha Sanguino
Supervisor: Prof. Dr. João Miguel Duarte Ascenso
Member of the Committee: Prof. Dr. Nuno Miguel Morais Rodrigues

November 2018

Declaration

I declare that this document is an original work of my own authorship and that it fulfills all the requirements of the Code of Conduct and Good Practices of the Universidade de Lisboa.

Acknowledgments

I would like to thank all my family and friends who have supported me throughout this degree, with a special mention to my parents for all the financial and emotional support they have given me during these five years. I would also like to thank my girlfriend for her love, and for her support and motivation throughout the most difficult challenges we have surpassed together. I would also like to thank my supervisors for their guidance and the lessons that I take for life: Prof. Fernando Pereira, Prof. João Ascenso and Prof. Catarina Brites. Lastly, I would like to thank my colleagues at Instituto de Telecomunicações, namely Afonso Costa and Antoine Dricot, for the many discussions we have held and for the help they have given me during the development of this Thesis. Without these people, achieving this degree would never have been possible.

Resumo

The recent evolution of 3D acquisition systems and of the methods for processing this type of information has led to the development of new 3D representation models, namely point clouds, holographic imaging and light fields. A point cloud is a set of points with an associated 3D location and, optionally, some attributes, such as colour, reflectance and normals. This representation model enables a whole new set of applications, from 3D telepresence to large-scale 3D maps. However, this high immersion and realism come at the cost of high bitrates, and therefore efficient coding solutions are essential. As a consequence, MPEG launched a standardization project on efficient point cloud coding solutions. This project gave rise to two coding solutions: Video-based Point Cloud Coding (V-PCC) for dynamic content and Geometry-based Point Cloud Coding (G-PCC) for static content. The objective of this Thesis is the design, implementation and assessment of a coding solution that, following the MPEG V-PCC framework, is able to improve its performance when coding static point clouds (Intra mode). Thus, this work starts by addressing the state of the art in the point cloud area, with a special focus on the most relevant concepts, as well as on the most relevant coding solutions in the literature, including the two MPEG coding solutions. The proposed coding solution follows the MPEG V-PCC framework but innovates by adapting the projection planes to the specific characteristics of each point cloud. The analysis of the results shows better geometry performance relative to MPEG V-PCC, especially for medium and high qualities.

Palavras-chave: Visual representation models, 3D acquisition systems, point clouds, coding solutions, static point clouds.

Abstract

In recent years, the evolution of 3D capture systems and processing methods has enabled the emergence of new 3D representation models, namely point clouds, holographic imaging and light fields.
A point cloud is a set of points with an associated 3D location and, optionally, some attributes, such as colour, reflectance and normals. This visual representation model enables a more complete, realistic and immersive representation of the visual world, which allows a whole new set of applications, from 3D telepresence to large-scale 3D maps. However, the immersion and realism of the point cloud representation model come at the cost of a very high bitrate, and thus efficient coding solutions are essential. As a consequence, MPEG has launched a standardization project on efficient point cloud coding solutions. This project has given rise to two coding solutions, notably Video-based Point Cloud Coding (V-PCC) to encode dynamic content and Geometry-based Point Cloud Coding (G-PCC) to encode static content. This Thesis targets the design, implementation and assessment of a static point cloud coding solution following the MPEG V-PCC framework with increased compression performance. To achieve this goal, the Thesis starts by addressing the state of the art on point clouds, with a special focus on the most relevant concepts and on the most relevant coding solutions in the literature, including the two MPEG coding solutions. The proposed coding solution follows the MPEG V-PCC framework and innovates by adapting the projection planes to the specific characteristics of each point cloud. The assessment results show a solid improvement in geometry performance with respect to MPEG V-PCC, especially for medium to high rates.

Keywords: Visual representation models, 3D acquisition systems, point clouds, coding solutions, static point clouds.

Table of Contents

1. Introduction
   1.1. Context and Motivation
   1.2. Objectives and Structure of the Thesis
2. Point Cloud Coding: Basics and Main Coding Solutions
   2.1. Main Concepts
   2.2. End-to-End Point Cloud Processing Architecture
   2.3. Point Cloud Use Cases
   2.4. Point Cloud Quality Assessment
   2.5. Main Point Cloud Coding Solutions
      2.5.1. Point Cloud Library Solution
      2.5.2. JPEG-Based Point Cloud Colour Coding
      2.5.3. Graph-based Transform Coding Solution
      2.5.4. Graph-based Compression of Dynamic Point Clouds
      2.5.5. Dynamic Point Cloud Compression with Coding-Mode Decision
   2.6. MPEG Geometry-based Point Cloud Coding
3. MPEG Video-based Point Cloud Coding
   3.1. Architecture and Walkthrough
   3.2. Patch Generation Process
4. Proposing Adaptive Plane Projections for V-PCC
   4.1. Objectives, Benefits and Potential Complications
   4.2. Architecture and Walkthrough
   4.3. Main Tools Description
      4.3.1. K-Means Clustering
      4.3.2. Adaptive Plane Selection and Local Coordinate System Transformation
      4.3.3. Adaptive Resolution Control
   4.4. Additional Tools
      4.4.1. Recolouring Pre- and Post-Processing
      4.4.2. Duplicates Removal
5. AP2V-PCC Performance Assessment
   5.1. Test Material
   5.2. Test Conditions
   5.3. Objective Evaluation Metrics
   5.4. AP2V-PCC Parameters Optimization
      5.4.1. Number of Clusters (K) Optimization
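The core idea announced in the abstract and in Sections 4.3.1 and 4.3.2, clustering the point cloud with K-means and fitting a projection plane per cluster instead of using V-PCC's six fixed axis-aligned planes, can be illustrated with a small sketch. This is only an illustrative reconstruction, not the thesis's actual implementation: the function names, the clustering of raw point positions (the thesis may cluster normals), and the PCA-based plane fit are all assumptions made for demonstration.

```python
import numpy as np

def fit_projection_planes(points, k=3, iters=20, seed=0):
    """Cluster an (n, 3) point array with a basic K-means and fit a
    plane (centroid + unit normal) to each cluster via PCA.
    Illustrative sketch of the 'adaptive plane' idea only."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers (keep the old center if a cluster empties).
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    planes = []
    for j in range(k):
        cluster = points[labels == j]
        c = cluster.mean(axis=0)
        # Plane normal = eigenvector of the smallest covariance eigenvalue.
        w, v = np.linalg.eigh(np.cov((cluster - c).T))
        planes.append((c, v[:, 0]))  # (point on plane, unit normal)
    return labels, planes

def project_depth(points, origin, normal):
    """Signed distance of each point to the plane: the 'depth' value a
    V-PCC-style geometry image would store for the projected point."""
    return (points - origin) @ normal
```

Feeding two spatially separated flat patches to `fit_projection_planes` with `k=2` recovers one plane per patch, with normals aligned to each patch's orientation; projecting a patch onto its own plane then yields near-zero depths, which is what makes a well-chosen plane cheap to code.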
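The geometry performance mentioned in the abstract and in Section 5.3 is commonly measured with the MPEG point-to-point (D1) metric: for every point in one cloud, take the squared distance to its nearest neighbour in the other cloud, average, symmetrize, and convert to PSNR. The sketch below, with a brute-force nearest-neighbour search purely for clarity, shows one common formulation; the function name, the max-based symmetrization and the choice of peak value are assumptions and not necessarily the thesis's exact configuration.

```python
import numpy as np

def d1_psnr(reference, degraded, peak):
    """Symmetric point-to-point (D1) geometry PSNR sketch.
    'peak' is typically tied to the coordinate precision of the
    content (e.g. 1023 for 10-bit geometry)."""
    def one_way_mse(src, dst):
        # Squared distance from each src point to its nearest dst point.
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        return d2.min(axis=1).mean()
    # Symmetrize by keeping the worse of the two directional errors.
    mse = max(one_way_mse(reference, degraded),
              one_way_mse(degraded, reference))
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, shifting a sparse cloud by 0.5 units gives an MSE of 0.25, so with a 10-bit peak of 1023 the PSNR is 10·log10(1023²/0.25) ≈ 66.2 dB; identical clouds give infinite PSNR.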