CNTK deep dive - How to run deep-learning projects, from start through production deployment
AI07

Agenda

Profile
Kyoichi Iwasaki (岩崎 喬一)

Today's data

Data scientists: do we really need them?

The data scientist skill set
• Business skills: understand the background of a problem and organize it into business issues to solve.
• Data science skills: understand and apply knowledge from information processing, artificial intelligence, statistics, and related fields.
• Data engineering skills: make data science usable in a meaningful form; implement and operate it.
Ref. The Data Scientist Society of Japan (データサイエンティスト協会): http://www.datascientist.or.jp/news/2014/pdf/1210.pdf

How a data analysis project proceeds
Business requirements definition → data collection and inspection → data analysis → evaluation → deployment. Business skills drive the first and last steps; data science and data engineering skills drive the middle.

Machine learning and deep learning
What is deep learning?

Main image-analysis tasks tackled with deep learning (as of May 2018)

Task (simple → complex)   What is identified?    Algorithms       Implementation at Microsoft
Image classification      What?                  CNN              Custom Vision, CNTK
Object detection          What? Where?           Fast(er) R-CNN   Custom Vision, CNTK
Segmentation              What? Where? Shape?    Mask R-CNN       (in the near future?)

Algorithm timeline: 2015 Fast R-CNN · 2015-16 Faster R-CNN, YOLO, SSD · 2017 Mask R-CNN

What does the "learning" in deep learning mean? (Deep learning from a machine-learning point of view)

What is CNTK?
• GPU / multi-GPU support (1-bit SGD)
https://www.microsoft.com/en-us/cognitive-toolkit/

CNTK execution speed (smaller is faster)

DL F/W       FCN-S   AlexNet   ResNet-50   LSTM-64
CNTK         0.017   0.031     0.168       0.017
Caffe        0.017   0.027     0.254       --
TensorFlow   0.020   0.317     0.227       0.065
Torch        0.016   0.043     0.144       0.324

https://arxiv.org/pdf/1608.07249.pdf

Codes in CNTK
https://github.com/Microsoft/CNTK

Binary classification with CNTK
Separate the red points from the blue ones: given a patient's age and tumor size, predict whether the disease is present or not.

Parameters w and b
The model is a single linear layer: each output is a weighted sum of the inputs plus a bias. With inputs age and tumor size, weights w11, w21, w12, w22 and biases b1, b2 produce the scores z1 ("disease") and z2 ("no disease").

The CNTK workflow
1. Define the input and output variables
2. Define the network
3. Define the loss function and the optimization method
4. Train the model
5. Evaluate the model

The CNTK workflow – 1/5: define the input and output variables

import numpy as np
import cntk as C

## Two input features (age and tumor size)
input_dim = 2
## Two output classes (disease present / absent)
num_output_classes = 2
## Input variable
feature = C.input_variable(input_dim, np.float32)
## Output variable
label = C.input_variable(num_output_classes, np.float32)

The CNTK workflow – 2/5: define the network

def linear_layer(input_var, output_dim):
    input_dim = input_var.shape[0]
    ## Define weight W
    weight_param = C.parameter(shape=(input_dim, output_dim))
    ## Define bias b
    bias_param = C.parameter(shape=(output_dim))
    ## Wx + b. Pay attention to the order of variables!!
    return bias_param + C.times(input_var, weight_param)

z = linear_layer(feature, num_output_classes)

The CNTK workflow – 3/5: define the loss function and the optimization method

## The learner needs a learning-rate schedule; the slide leaves lr_schedule
## undefined, so the value here is an assumption
lr_schedule = C.learning_rate_schedule(0.5, C.UnitType.minibatch)
## Loss function
loss = C.cross_entropy_with_softmax(z, label)
## Classification error (is the prediction right or wrong "as a classification")
eval_error = C.classification_error(z, label)
## Optimization
learner = C.sgd(z.parameters, lr_schedule)
trainer = C.Trainer(z, (loss, eval_error), [learner])

The CNTK workflow – 4/5: train the model

for i in range(0, num_minibatches_to_train):
    ## Extract training data (generate_random_data_sample is a user-supplied
    ## helper that synthesizes a labelled batch, as in the CNTK tutorials)
    features, labels = generate_random_data_sample(minibatch_size, input_dim, num_output_classes)
    ## Train
    trainer.train_minibatch({feature : features, label : labels})

The CNTK workflow – 5/5: evaluate the model

out = C.softmax(z)
result = out.eval({feature : features})

Demo: CNTK basic process

Tips: starting Jupyter Notebook

> activate py35
(py35) > jupyter notebook

https://notebooks.azure.com/

Demo: training and evaluation
https://youtu.be/70FMOdVUNPI

Tips: the confusion matrix

                         Disease (actual)   No disease (actual)
Disease (predicted)      14                 0
No disease (predicted)   2                  9

The two cases predicted as "no disease" that actually have the disease are the model's misses (false negatives).

From object detection to distributed training

What is object detection? (recap)
As in the table above: image classification answers "what?" with a CNN; object detection answers "what?" and "where?" with Fast(er) R-CNN; segmentation adds "what shape?" with Mask R-CNN.

VoTT
https://github.com/Microsoft/VoTT
Annotates objects in both still images and video.

"Coordinates" in object detection
Both x and y are measured from the top-left corner of the image; a bounding box is given by its extreme points xmin, xmax, ymin, ymax.
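Those four numbers are also what detection quality is scored with: a prediction counts as a hit when its box overlaps a labelled box closely enough, which is what feeds the mAP figures quoted later. As a minimal sketch of that overlap measure (intersection over union), assuming boxes are given as (xmin, ymin, xmax, ymax) tuples in pixel coordinates; this helper is illustrative and not part of the original deck:

def iou(box_a, box_b):
    ## Boxes are (xmin, ymin, xmax, ymax), measured from the top-left corner
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    ## Width/height of the intersection rectangle (zero if the boxes are disjoint)
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    ## Union = sum of the two areas minus the double-counted intersection
    area_a = (ax1 - ax0) * (ay1 - ay0)
    area_b = (bx1 - bx0) * (by1 - by0)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

## Example: two partially overlapping 10x10 boxes
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143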
Quick demo on VoTT (still images)
https://youtu.be/j7OjxFY2Go8

Quick demo on VoTT (video)
https://youtu.be/RCuGLRtggrI

Karugamo (spot-billed duck) detection!
Movie inferred with Faster R-CNN: https://youtu.be/ziuEXyJ9SNs

Distributed training in CNTK – 1/6
What is distributed training in deep learning? Mainly: using multiple GPUs to shorten training time.
To run distributed training you need a multi-GPU machine (or multiple hosts) and the 1-bit SGD build of CNTK.

Distributed training in CNTK – 2/6
The workflow is the same five steps as before: define the input and output variables, define the network, define the loss function and the optimization method, train the model, evaluate the model. Distribution changes the learner and the training loop.

Distributed training in CNTK – 3/6: define the distributed learner

from cntk import distributed
...
learner = cntk.learner.momentum_sgd(...)  # create local learner
distributed_after = epoch_size  # number of samples to warm start with
distributed_learner = distributed.data_parallel_distributed_learner(
    learner = learner,
    num_quantization_bits = 32,  # non-quantized gradient accumulation
    distributed_after = 0)  # no warm start

Distributed training in CNTK – 4/6: set up the training session

minibatch_source = MinibatchSource(...)
...
trainer = Trainer(z, ce, pe, distributed_learner)
...
session = training_session(trainer=trainer, mb_source=minibatch_source, ...)
session.train()
...
distributed.Communicator.finalize()  # must be called to finalize MPI in case of successful distributed training

https://docs.microsoft.com/en-us/cognitive-toolkit/multiple-gpus-and-machines#2-configuring-parallel-training-in-cntk-in-python

Distributed training in CNTK – 5/6: running the job

# Using two GPUs, with the training script in training.py
> mpiexec -n 2 python training.py

Distributed training in CNTK – 6/6: choosing CPU or GPU

import cntk

## To run on the CPU
cntk.device.try_set_default_device(cntk.device.cpu())
## To run on a GPU (device 0)
cntk.device.try_set_default_device(cntk.device.gpu(0))

Some preparation for deep learning!
Check that the GPUs are usable: CNTK sees both GPUs.
https://github.com/kyoro1/decode2018/blob/master/2.%20distributed%20learning%20with%20Faster%20R-CNN.ipynb

Faster R-CNN with a single GPU
The model has more than 57 million parameters!

Single GPU vs. multi GPU (two GPUs used)

GPU             Training time   Accuracy (mAP)
1 GPU (NC6)     7 min 22 s      0.9479
2 GPUs (NC12)   3 min 43 s      0.9479

Toward production

Confusion matrix for karugamo

                          Karugamo (actual)   No karugamo (actual)
Karugamo (predicted)      XX                  XX
No karugamo (predicted)   XX                  XX

A karugamo is in the frame but the model fails to find it: a miss (false negative). Conversely, the model labels something that is not a karugamo as one: an over-detection (false positive).

Original movie vs. inferred movie
https://youtu.be/SYTw5OLBnzc

A proposed structure for promoting the use of data science
• People who build the strategy and designs that tie analysis to business activity (business skills)
• People who use the analysis results to drive business planning and improvement
• People who actually do the hands-on data analysis (data science and data engineering skills)
See also session #AI02.
http://www.kantei.go.jp/jp/singi/keizaisaisei/miraitoshikaigi/jinzaiikusei_dai3/siryou4.pdf

Common pitfalls in production (technical side)

Common pitfalls in production (system-operation side)

Key takeaways
• Machine learning only pays off through collaboration with the business side: agree on which KPIs matter and what the results really mean.
• Scripts can be written plainly, so scrap & build is quick; be clear about what can and what cannot be done.

Appendix

References
https://github.com/Microsoft/CNTK
https://docs.microsoft.com/en-us/cognitive-toolkit/setup-cntk-on-your-machine
https://github.com/Microsoft/VoTT
https://github.com/onnx/onnx
https://www.edx.org/course/deep-learning-explained
https://www.edx.org/course/computer-vision-and-image-analysis
https://github.com/kyoro1/decode2018
https://youtu.be/s5W4c9q_SAw
https://youtu.be/ziuEXyJ9SNs
https://youtu.be/SYTw5OLBnzc
https://youtu.be/70FMOdVUNPI
https://youtu.be/j7OjxFY2Go8
https://youtu.be/RCuGLRtggrI

© 2018 Microsoft Corporation. All rights reserved. The copyright in this content, and the trademarks, organization names, logos, products, and services appearing in it, belong to their respective rights holders.