Stereo Calibration of Depth Sensors with 3D Correspondences and Smooth Interpolants


STEREO CALIBRATION OF DEPTH SENSORS WITH 3D CORRESPONDENCES AND SMOOTH INTERPOLANTS

A Thesis Presented to the Faculty of the Department of Computer Science, University of Houston, in Partial Fulfillment of the Requirements for the Degree Master of Science

By Chrysanthi Chaleva Ntina
May 2013

APPROVED:
Dr. Zhigang Deng, Chairman, Dept. of Computer Science
Dr. Guoning Chen, Dept. of Computer Science
Dr. Mina Dawood, Dept. of Civil and Environmental Engineering
Dean, College of Natural Sciences and Mathematics

Abstract

The Microsoft Kinect is a novel sensor that, besides color images, also returns the actual distance of the captured scene from the camera. Its depth-sensing capabilities, along with its affordable, commercial availability, led to its quick adoption for research and applications in Computer Vision and Graphics. Recently, multi-Kinect systems have been introduced to tackle problems such as body scanning, scene reconstruction, and object detection. Multi-camera configurations, however, must first be calibrated to a common coordinate system, i.e., the relative position of each camera must be estimated with respect to a global origin. Up to now, this has been addressed by applying well-established calibration methods developed for conventional cameras. Such approaches do not take advantage of the additional depth information, and they disregard the quantization error model introduced by the depth resolution specifications of the sensor.
We propose a novel algorithm for calibrating a pair of depth sensors, based on an affine transformation recovered from very few 3D point correspondences and refined under a non-rigid registration that accounts for the non-linear sensor acquisition error. The result is a closed-form mapping, of the smooth warping type, that compensates for pairwise differences in the calibration points. The formulation is further complemented by two different ways of efficiently capturing candidate calibration points. Qualitative 3D registration results show significant improvement over the conventional rigid calibration method, and highlight the potential for advanced and more accurate multi-sensor configurations.

Contents

1 Introduction
  1.1 Motivation
  1.2 Purpose and Aim
  1.3 Challenges
  1.4 Contributions
  1.5 Outline
2 Related Work
  2.1 Multi-Kinect Calibration Approaches
    2.1.1 Conventional Pairwise Stereo Calibration
    2.1.2 Global Calibration and Refinements
    2.1.3 Other Methods
    2.1.4 Limitations of Existing Approaches
  2.2 Capturing Calibration Points
3 The Microsoft Kinect
  3.1 Sensor Description
    3.1.1 Kinect for XBOX vs Kinect for Windows
    3.1.2 Hardware Components
    3.1.3 Software Drivers
  3.2 Sensor Calibration
    3.2.1 Color Camera Intrinsics
    3.2.2 Infrared Camera Intrinsics
    3.2.3 Color-IR Stereo Calibration
  3.3 The Depth Capturing System
    3.3.1 Acquiring Depth Data
    3.3.2 The Depth Error Model
4 Non-rigid Calibration
  4.1 Initial Stereo Calibration
  4.2 Insufficiency of the Rigid Assumption
  4.3 Non-rigid Correction
    4.3.1 Thin Plate Splines
    4.3.2 Approximating the Mapping Function with RBFs
5 Capturing Calibration Points
  5.1 Using the Infrared Image
    5.1.1 Obtaining 2D Points
    5.1.2 Transforming Points to 3D
    5.1.3 Using RGB Instead of IR Camera
  5.2 Using the Depth Map Directly
    5.2.1 Background Removal
    5.2.2 RANSAC-based Model Fitting
    5.2.3 MLS Resampling
    5.2.4 Center Extraction
6 Reconstruction Experiments and Calibration Evaluation
  6.1 Experimental Setup
    6.1.1 Kinect Network
    6.1.2 Multiple Kinects Interference
  6.2 Preprocessing for Data Registration
    6.2.1 Background Removal
    6.2.2 Converting Depth to Point Clouds
    6.2.3 Coloring the Point Clouds
  6.3 Registration Results and Comparison
    6.3.1 Data, Methods, Setups, Visuals
    6.3.2 Visual Registration Comparisons
    6.3.3 Qualitative and Comparative Analysis
7 Conclusion
  7.1 Future Work
Bibliography

List of Figures

2.1 Three Kinects setup (Berger et al. [7])
2.2 Three Kinects setup for scanning the human body (Tong et al. [69])
2.3 Four Kinects setup and calibration object (Alexiadis et al. [5])
2.4 Calibration objects in RGB, depth and IR (Berger et al. [8])
3.1 The Microsoft Kinect device
3.2 Valid depth values
3.3 Microsoft Kinect components [54]
3.4 RGB and corresponding IR frames used for intrinsics calibration
3.5 Uncalibrated color frame mapped to corresponding depth
3.6 RGB-IR calibration interface
3.7 Actual depth distance measured by the Kinect sensor
3.8 The IR speckled pattern emitted by the laser projector
3.9 The triangulation process for depth from disparity
4.1 Local and global coordinate systems for two sensors
4.2 Error during depth capturing
4.3 Error difference in calibration points captured by two sensors
4.4 Thin Plate Splines interpolation (from [22])
4.5 Calibration steps
5.1 Interface to capture infrared (top) and depth (bottom) images for two Kinects
5.2 Example setup with infrared and depth images captured
5.3 Detected 2D points mapped to 3D
5.4 Geometry of the pinhole camera model
5.5 Detected checkerboard corners converted to 3D point cloud
5.6 Ball quantization step
5.7 Moving Least Squares to upsample sphere
5.8 Calibration points using the depth map
6.1 Server interface with two connected Kinect clients
6.2 Interference depending on relative sensor position
6.3 Dot pattern interference with and without enforced motion blur
6.4 Frames for building a background depth model
6.5 Background subtraction steps
6.6 Depth map converted to point cloud
6.7 Coloring of point cloud through mapping of the RGB frame
6.8 Uncalibrated point clouds
6.9 (a) Conventional and (b) our registration results
6.10 Uncalibrated point clouds
6.11 (a) Conventional and (b) our registration results
6.12 Colored registered point clouds using (a) conventional calibration and (b) our method
6.13 Variance of the proposed method
6.14 Registration using our method for different poses (a) and (b)
6.15 Registration using our method for different scenes (a) and (b)
6.16 Registration results of (a) conventional and (b) our method in clouds with very little overlap
6.17 Registration results of (a) conventional and (b) our method in clouds with almost no overlap

List of Tables

3.1 Comparison of available Kinect drivers
3.2 Intrinsic parameters for RGB camera
3.3 Distortion coefficients for RGB camera
3.4 Intrinsic parameters for IR camera
3.5 Distortion coefficients for IR camera
6.1 Color-coded point clouds

Chapter 1
Introduction

In this chapter we state the research topic and motivation of this thesis, and provide an outline of the contents.

1.1 Motivation

The release of the Microsoft Kinect sensor in 2010 has given a new boost to Computer Vision and Graphics research and applications. With its depth-sensing capabilities, a new source of scene data, and its affordable, commercial availability, it provides a fast and convenient way of enhancing camera setups.
The Kinect is a novel sensing device: apart from being a conventional RGB camera, it also incorporates a depth sensor that provides the depth value of each frame pixel, in the form of distance from the camera in millimeters. In the relatively few years since its release, a large body of work has emerged in diverse fields, related to using, configuring, or extending the new framework. At an algorithmic level, Kinect data have been used with conventional image processing methods for human detection [77], person tracking [45], [37], and object detection [65]. Graphics applications include face [82], [32], body [20], [18], [69], [74], and shape [21], [19] scanning, as well as markerless motion capture [8] and scene reconstruction [35], [75].

A range of additional fields have also leveraged the versatility of Kinect data for solving traditional or newly emerging problems, beyond the originally intended gaming and motion-based interfaces; examples include Robotics, Human-Computer Interaction (HCI), Animation, and smart(er) interfaces. Applications in HCI range from new interactive designs [58], [55], [81], [42], [76] to immersive and virtual reality [72]. In Robotics, the sensor's portability and high frame rate [61] have been exploited, e.g., by mounting Kinects on mobile robots for vision-based navigation, obstacle avoidance [52], [9], and indoor scene modeling [31], [70]. Moreover, specialized application domains have emerged by replacing traditional cameras with the Kinect; in healthcare, for example, Kinect sensors have been used for patient rehabilitation [44], [16]. These applications are only indicative: the diversity of domains reflects the wide range of problems that involve shape, scene, and motion measurements in three-dimensional space.
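Since each Kinect depth pixel encodes distance from the camera in millimeters, a depth frame can be back-projected through the pinhole camera model into a 3D point cloud, the representation used throughout the registration experiments. The sketch below is a minimal illustration, assuming NumPy; the intrinsics fx, fy, cx, cy are placeholder values for the example, not calibrated Kinect parameters.

```python
import numpy as np

def depth_to_point_cloud(depth_mm, fx, fy, cx, cy):
    """Back-project a depth map (mm per pixel) into 3D points in meters.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    Pixels with zero depth (Kinect's 'no reading' value) are skipped.
    """
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64) / 1000.0  # mm -> m
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # One (X, Y, Z) row per valid pixel, in row-major pixel order.
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

# Tiny synthetic 2x2 depth map; one pixel has no depth reading.
depth = np.array([[0, 1000],
                  [2000, 1500]], dtype=np.uint16)
pts = depth_to_point_cloud(depth, fx=585.0, fy=585.0, cx=0.5, cy=0.5)
print(pts.shape)  # one 3D point per valid (non-zero) pixel
```

Running the same conversion independently on each sensor's depth map produces the per-camera point clouds that the calibration then brings into a common coordinate system.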
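The abstract describes recovering an affine transformation from very few 3D point correspondences as the initialization that the non-rigid refinement then corrects. As a rough illustration only (not the thesis's exact implementation), a least-squares affine fit between two sets of corresponding 3D points can be written in a few lines of NumPy:

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares affine map dst ~= src @ A.T + t from 3D correspondences.

    Solves for the 3x3 linear part A and translation t in one lstsq call
    by working in homogeneous coordinates; needs at least four
    non-coplanar point pairs to be well determined.
    """
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])        # n x 4
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # 4 x 3
    return M[:3].T, M[3]                             # A (3x3), t (3,)

# Synthetic check: recover a known affine transform from 5 correspondences.
rng = np.random.default_rng(0)
src = rng.random((5, 3))
A_true = np.array([[1.0, 0.1, 0.0],
                   [0.0, 0.9, 0.2],
                   [0.1, 0.0, 1.1]])
t_true = np.array([0.5, -0.2, 0.3])
dst = src @ A_true.T + t_true
A_est, t_est = fit_affine_3d(src, dst)
print(np.allclose(A_est, A_true), np.allclose(t_est, t_true))
```

With noiseless synthetic correspondences the transform is recovered exactly (up to numerical precision); on real sensor data the residuals left by such a fit are exactly what motivates the smooth TPS/RBF warping developed in Chapter 4.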