3D Reconstruction Using Multiple Depth Cameras


Maximum Wilder-Smith
California State Polytechnic University, Pomona
[email protected]

Abstract—In this paper, we present an approach to creating 3D models from the RGB-D data captured by multiple aligned Azure Kinects surrounding an object. To improve the quality of the point clouds of the entire object, a background subtraction method is employed to isolate the high-fidelity point clouds from scan frames. We then construct an aligned 3D surface using the point clouds' geometry.

Keywords—Point cloud registration, Azure Kinect, 3D reconstruction

I. INTRODUCTION

Creating realistic 3D models of physical objects plays an important role in a variety of real-world applications, including game development, computer-generated art, and digital visualization. 3D scanning with depth cameras is one of the most popular ways to digitize an object: it allows people to rapidly obtain the geometry and texture information of real objects without having to manually model every detail and paint the textures from scratch.

Much of the current research into this application uses older scanners, such as the Kinect v2, or presents methods based around a single camera. One of the most popular applications for 3D reconstruction using Kinects is the KinFu application [1]. This program builds textured models from a single Kinect moving through a space. While it creates a complete textured model, it does not leverage multiple capture angles from multiple cameras. Other works, such as that of Alexiadis, Zarpalas, and Daras, present systems that utilize multiple cameras, but use last-generation hardware to do so [2]. This means they miss out on the newer capabilities and higher resolutions of the Azure Kinect cameras. In terms of reconstruction and alignment, the Open3D library contains fast implementations of a variety of algorithms [3]. This library features extensive I/O support as well as point cloud and mesh operations.

In this project, we present a 3D scanning pipeline that digitizes a real object using the RGB-D data captured by multiple depth cameras. In particular, we place multiple depth cameras at different angles around a real object. While scanning, the depth cameras are synchronized with overlapping views to capture RGB-D images of the object from different viewpoints. This reduces the number of scan operations required for complete coverage when estimating the shape and surface appearance of the scanned object. The result demonstrates the effectiveness of the presented pipeline in reconstructing a 3D surface from a single scan from each camera.

II. METHODOLOGY

The proposed pipeline consists of three main stages: initial alignment, cropping, and surface reconstruction.

A. Initial Alignment

The initial alignment stage involves taking a single capture from each camera in the system. We use two Azure Kinects, each connected directly to the computer. As each camera uses an infrared projection to register depth, there must be some delay between the cameras' captures to avoid interference. When testing with more cameras, it is preferable to use a daisy-chain or subordinate configuration between the cameras with a preprogrammed capture offset [4]. The captured data is loaded into the Open3D library, where it is decoded into the RGBDImage structure [3].

Figure 1. RGBDImages from both cameras. The left shows the color image captured, while the right shows the depth data captured.

From these RGBDImages we read each Kinect's intrinsic matrix to obtain the transformations needed to combine the color data and the depth data into a single colored point cloud with accurate depth information. As the intrinsic matrix is slightly different for each camera, both must be polled for their calibration data. Using the intrinsic matrix, the RGBDImages are converted into colored point clouds. These point clouds display the geometry from the depth data as a series of points in 3D space, with each point's color determined by the color data from the camera. As the point density of the Azure Kinect is very high, the point clouds often appear as solid objects in the visualizer. When first loaded into the visualizer, there is no data about how the point clouds line up, meaning their geometries are misaligned. This is shown in Figure 2.

Figure 2. Misaligned point clouds from the two cameras.

These point clouds can then be aligned using a variety of techniques. The first is Fast Global Registration [5]. This method produces a rough alignment between the point clouds using common features. The transformation from this step is then passed as the initial transform matrix for the Colored ICP registration stage [6]. This method of iterative closest point registration uses the color data as well as the geometry of the two point clouds to align them more accurately. The more overlapping and distinguishing colored objects the two point clouds share, the better this registration is. Below is the output point cloud after performing Colored ICP.

Figure 3. Point clouds are aligned after Fast Global Registration and Colored ICP registration.

The alignment shown above has an accuracy of 0.5420. It is important to note that as each camera sees a slightly different image, the two point clouds cannot match 100%. The more extreme the angle and placement between the cameras in the setup, the lower the initial alignment accuracy will be. Ensuring that the scenes captured by each camera overlap a fair amount and are equally exposed, so that colors are similar between the point clouds, helps to achieve a more accurate alignment.
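The core update step inside ICP-style registration, estimating the rigid transform that best maps one set of matched points onto another, can be sketched in plain NumPy. This is an illustrative Kabsch/SVD solver, not the Open3D implementation the pipeline actually uses:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding points. Returns a 4x4
    homogeneous matrix T such that dst ~= (T @ [src, 1].T).T.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

Full ICP alternates this solve with re-matching nearest neighbors; Colored ICP additionally weights the matches by color agreement.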
After aligning the point clouds, we merge them into one point cloud by combining the points and performing a small voxel downsampling to ensure duplicated points are removed. If the setup has more than two cameras, the alignment and merging process continues using the merged point cloud and the initial capture from each successive camera. This is repeated until a transformation matrix has been calculated for each camera and the merged point cloud contains the sum of the captures from all of the cameras. If the scanning setup requires cameras to be moved to capture different angles of the subject without the use of additional cameras, the transformation matrices are discarded and this alignment process is repeated.

B. Cropping

The next stage is to isolate the subject from the background. To this end, a bounding box needs to be generated around the subject. This bounding box can be found by performing object segmentation, or by manually closing in on the subject. For the following example, manual subject selection is used, as it is more reliable for complex objects and incomplete point clouds. To begin selecting the object, a bounding box is generated around the entire point cloud. The user can then move and scale this bounding box to close in on the subject.

Figure 4. Bounding box around the total capture

Figure 5. Bounding box moved around the target

Once the bounding box has been closed around the subject being scanned, the point cloud can be cropped with the built-in method. This removes the background and the additional points that were only needed to align the cameras. This leaves only the subject's point cloud and some floor base around it. Once the point cloud has been cleaned up, we begin surface reconstruction to generate a triangle mesh based on the colored point cloud.
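The merge-then-deduplicate step and the bounding-box crop can be illustrated with a small NumPy sketch. The helper names here are hypothetical; the pipeline itself relies on Open3D's built-in voxel downsampling and crop methods:

```python
import numpy as np

def voxel_dedupe(points, voxel_size):
    """Merge near-duplicate points by keeping one point per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # np.unique on the voxel indices keeps the first point seen in each voxel
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

def crop_aabb(points, min_bound, max_bound):
    """Keep only points inside an axis-aligned bounding box."""
    mask = np.all((points >= min_bound) & (points <= max_bound), axis=1)
    return points[mask]

# Merging two aligned captures, then isolating the subject:
cloud_a = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
cloud_b = np.array([[0.001, 0.001, 0.001], [2.0, 2.0, 2.0]])  # first point duplicates cloud_a
merged = voxel_dedupe(np.vstack([cloud_a, cloud_b]), voxel_size=0.01)
subject = crop_aabb(merged, np.array([-0.5] * 3), np.array([1.5] * 3))
```

A production downsampler would average the points (and colors) within each voxel rather than keep the first, but the deduplication effect is the same.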
Figure 6. The cropped point cloud

Cropping the point cloud can also be useful if the subject is spinning or on a turntable, in which case further alignment can be performed between captures that ignores the features and alignment of the background and only aligns the captures of the subject (see Discussion A).

C. Surface Reconstruction

The final stage of the reconstruction is creating a solid surface from the point cloud. Once the subject has been isolated through cropping, the pipeline performs post-processing on the point cloud to remove outlier points as well as any extra or internal points. It is often useful to calculate the normals of the point cloud data for the surface reconstruction algorithms.

Figure 7. Cleaned up point cloud which has been further cropped.

We investigate three different surface reconstruction algorithms based on the Open3D library, i.e., Poisson reconstruction [7], Alpha Shape reconstruction [8], and Ball Pivoting reconstruction [9]. Depending on the scanning situation, certain methods may work better than others. Below is a comparison for the previous scans.

Figure 8. Poisson reconstruction

Figure 9. Alpha Shape reconstruction

Figure 10. Ball Pivot reconstruction

One can find that for the above example, Poisson produces a smooth, watertight mesh at the cost of surface resolution. Alpha Shape was unable to compute an efficient convex mesh, resulting in an unusable mesh. Ball Pivoting was able to produce the highest-resolution mesh of the point cloud, though it had missing faces and took an incredibly long time to calculate.

III. DISCUSSION

A. Turntable Sequence Processing

The next milestone for this program will be turntable sequence processing. Following the cropping stage, the bounding box as well as the transformation matrices for the cameras will be saved for quick use in aligning a capture sequence. [...] for reconstruction scenarios where there are chaotic changes in the scene, such as moving objects.

C. Parallelization

This final extension of the program would be a series of optimizations to allow the program to more efficiently utilize multi-core processors and compute clusters. While the core library, Open3D, already performs some parallelized optimizations in visualization and reconstruction techniques, this milestone would seek to turn this program into a more scalable solution for 3D reconstruction.

IV. CONCLUSION
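The outlier-removal post-processing described in the surface reconstruction stage is commonly done statistically: points whose mean distance to their nearest neighbors is far above the global average are discarded. Below is a minimal brute-force NumPy sketch of that idea, standing in for Open3D's statistical outlier removal; the parameter values are illustrative, not the ones used in the pipeline:

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors exceeds
    the average by more than std_ratio standard deviations (O(n^2) pairwise)."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    # mean distance to the k nearest neighbors, excluding the point itself
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    mean_knn = knn.mean(axis=1)
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]
```

A real implementation would use a KD-tree for the neighbor search instead of the dense distance matrix, which matters at Azure Kinect point densities.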