Snuba: Automating Weak Supervision to Label Training Data


Paroma Varma (Stanford University, [email protected]) and Christopher Ré (Stanford University, [email protected])

PVLDB Reference Format: Paroma Varma, Christopher Ré. Snuba: Automating Weak Supervision to Label Training Data. PVLDB, 12(3): 223-236, 2018. DOI: https://doi.org/10.14778/3291264.3291268

ABSTRACT

As deep learning models are applied to increasingly diverse problems, a key bottleneck is gathering enough high-quality training labels tailored to each task. Users therefore turn to weak supervision, relying on imperfect sources of labels like pattern matching and user-defined heuristics. Unfortunately, users have to design these sources for each task. This process can be time-consuming and expensive: domain experts often perform repetitive steps like guessing optimal numerical thresholds and developing informative text patterns. To address these challenges, we present Snuba, a system that automatically generates heuristics using a small labeled dataset to assign training labels to a large, unlabeled dataset in the weak supervision setting. Snuba generates heuristics that each label the subset of the data they are accurate for, and iteratively repeats this process until the heuristics together label a large portion of the unlabeled data. We develop a statistical measure that guarantees the iterative process will automatically terminate before it degrades training label quality. Snuba automatically generates heuristics in under five minutes and performs up to 9.74 F1 points better than the best known user-defined heuristics developed over many days. In collaborations with users at research labs, Stanford Hospital, and on open source datasets, Snuba outperforms other automated approaches like semi-supervised learning by up to 14.35 F1 points.

[Figure 1: Snuba uses a small labeled and a large unlabeled dataset to iteratively generate heuristics. It uses existing label aggregators to assign training labels to the large dataset. The figure depicts threshold heuristics over tumor features (e.g., "if area > 210.8: return False", "if perim > 80: return Abstain") flowing from heuristic generation through a label aggregator and a termination check to training labels.]
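To make the fragments in Figure 1 concrete, here is a minimal, hypothetical sketch of such threshold heuristics in Python. Each one votes on a single geometric feature of a tumor and abstains outside the range it is confident about; the fall-through branches are illustrative assumptions, since the figure shows only fragments.

```python
ABSTAIN = None  # a heuristic may decline to label a datapoint

def area_heuristic(x):
    # Thresholds as shown in Figure 1; the final fall-through label is
    # an illustrative assumption, not taken from the paper.
    if x["area"] > 210.8:
        return False          # benign
    if x["area"] < 150:
        return ABSTAIN
    return True               # malignant

def perim_heuristic(x):
    if x["perim"] > 120:
        return True           # malignant
    if x["perim"] > 80:
        return ABSTAIN
    return False              # benign

tumor = {"area": 140.0, "perim": 95.0}
print([h(tumor) for h in (area_heuristic, perim_heuristic)])
# -> [None, None]: both heuristics abstain, so this point stays unlabeled
```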
1. INTRODUCTION

The success of machine learning for tasks like image recognition and natural language processing [12, 14] has ignited interest in using similar techniques for a variety of tasks. However, gathering enough training labels is a major bottleneck in applying machine learning to new tasks. In response, there has been a shift towards relying on weak supervision, or methods that can assign noisy training labels to unlabeled data, like crowdsourcing [9, 22, 60], distant supervision [8, 32], and user-defined heuristics [38, 39, 50]. Over the past few years, we have been part of the broader effort to enhance methods based on user-defined heuristics to extend their applicability to text, image, and video data for tasks in computer vision, medical imaging, bioinformatics, and knowledge base construction [4, 39, 50].

Through our engagements with users at large companies, we find that experts spend a significant amount of time designing these weak supervision sources. As deep learning techniques are adopted for unconventional tasks like analyzing codebases and now-commodity tasks like driving marketing campaigns, the few domain experts with the required knowledge to write heuristics cannot reasonably keep up with the demand for several specialized, labeled training datasets. Even machine learning experts, such as researchers at the computer vision lab at Stanford, are impeded by the need to crowdsource labels before even starting to build models for novel visual prediction tasks [23, 25]. This raises an important question: can we make weak supervision techniques easier to adopt by automating the process of generating heuristics that assign training labels to unlabeled data?

The key challenge in automating weak supervision lies in replacing the human reasoning that drives heuristic development. In our collaborations with users with varying levels of machine learning expertise, we noticed that the process of developing these weak supervision sources can be fairly repetitive. For example, radiologists at the Stanford Hospital and Clinics have to guess the correct threshold for each heuristic that uses a geometric property of a tumor to determine whether it is malignant (example shown in Figure 1). We instead take advantage of a small, labeled dataset to automatically generate noisy heuristics. Though the labeled dataset is too small to train an end model, it has enough information to generate heuristics that can assign noisy labels to a large, unlabeled dataset and improve end model performance by up to 12.12 F1 points. To aggregate labels from these heuristics, we improve over majority vote by relying on existing factor graph-based statistical techniques in weak supervision that can model the noise in and correlation among these heuristics [2, 4, 39, 41, 48, 50]. However, these techniques were intended to work with user-designed labeling sources and therefore have limits on how robust they are. Automatically generated heuristics can be noisier than what these models can account for, which introduces the following challenges:

Accuracy. Users tend to develop heuristics that assign accurate labels to a subset of the unlabeled data. An automated method has to properly model this trade-off between accuracy and coverage for each heuristic based only on the small, labeled dataset. Empirically, we find that generating heuristics that each label all the datapoints can degrade end model performance by up to 20.69 F1 points.

Diversity. Since each heuristic has limited coverage, users develop multiple heuristics that each label a different subset to ensure a large portion of the unlabeled data receives a label. In an automated approach, we could mimic this by maximizing the number of unlabeled datapoints the heuristics label as a set. However, this approach can select heuristics that cover a large portion of the data but have poor performance. There is a need to account for both the diversity and the performance of the heuristics as a set. Empirically, balancing both aspects improves end model performance by up to 18.20 F1 points compared to selecting the heuristic set that labels the most datapoints.

Termination Condition. Users stop generating heuristics when they have exhausted their domain knowledge. An automated method, however, can continue to generate heuristics that deteriorate the overall quality of the training labels assigned to the unlabeled data, such as heuristics that perform worse than random on the unlabeled data. Not accounting for performance on the unlabeled dataset can affect end model performance by up to 7.09 F1 points.

Our Approach. To address the challenges above, we introduce Snuba, an automated system that takes as input a small labeled and a large unlabeled dataset and outputs probabilistic training labels for the unlabeled data, as shown in Figure 1. These labels can be used to train a downstream machine learning model of choice, which can operate over the raw data and generalize beyond the heuristics Snuba generates. Snuba consists of three components:

Synthesizer for Accuracy. The synthesizer (Section 3.1) uses the small labeled dataset to generate candidate heuristics, each of which labels only the subset of the data it is accurate for. Heuristics operate over primitives, or features of the data, which can come from open source libraries [35, 49] and data models in existing weak supervision frameworks [38, 58]. Examples of primitives in our evaluation include bag-of-words for text and bounding box attributes for images.

Pruner for Diversity. To ensure that the set of heuristics is diverse and assigns high-quality labels to a large portion of the unlabeled data, the pruner (Section 3.2) ranks the heuristics the synthesizer generates by the weighted average of their performance on the labeled set and their coverage on the unlabeled set. It selects the best heuristic at each iteration and adds it to the collection of existing heuristics. This method performs up to 6.57 F1 points better than ranking heuristics by performance alone.
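The ranking criterion is described only abstractly here (a weighted average of labeled-set performance and unlabeled-set coverage), so the following sketch fixes one plausible reading: F1 as the performance metric and an illustrative equal weighting w = 0.5. The function names and weighting are assumptions for illustration, not Snuba's implementation.

```python
from sklearn.metrics import f1_score

ABSTAIN = None

def coverage(h, unlabeled):
    """Fraction of unlabeled datapoints on which h does not abstain."""
    votes = [h(x) for x in unlabeled]
    return sum(v is not ABSTAIN for v in votes) / len(votes)

def performance(h, X_labeled, y_labeled):
    """F1 of h on the labeled points it actually labels."""
    votes = [h(x) for x in X_labeled]
    pairs = [(v, y) for v, y in zip(votes, y_labeled) if v is not ABSTAIN]
    if not pairs:
        return 0.0
    preds, truth = zip(*pairs)
    return f1_score(truth, preds)

def pruner_score(h, X_labeled, y_labeled, unlabeled, w=0.5):
    # Weighted average of labeled-set performance and unlabeled-set
    # coverage; w = 0.5 is an illustrative choice, not the paper's setting.
    return (w * performance(h, X_labeled, y_labeled)
            + (1 - w) * coverage(h, unlabeled))

# At each iteration, the pruner would keep the top-ranked candidate:
# best = max(candidates, key=lambda h: pruner_score(h, X_l, y_l, X_u))
```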
The accuracy and coverage for each heuristic based only on the verifier uses existing statistical techniques to aggregate la- small, labeled dataset. Empirically, we find that generating bels from the heuristics into probabilistic labels for the unla- heuristics that each labels all the datapoints can degrade beled datapoints [4,39,50]. However, the automated heuris- end model performance by up to 20.69 F1 points. tic generation process can surpass the noise levels to which Diversity. Since each heuristic has limited coverage, users these techniques are robust to and degrade end model per- develop multiple heuristics that each labels a different sub- formance by up to 7.09 F1 points. We develop a statisti- set to ensure a large portion of the unlabeled data receives cal measure that uses the small, labeled set to determine a label. In an automated approach, we could mimic this by whether the noise in the generated heuristics is below the maximizing the number of unlabeled datapoints the heuris- threshold these techniques can handle (Section 4). tics label as a set. However, this approach can select heuris- Contribution Summary. We describe Snuba, a system tics that cover a large portion of the data but have poor to automatically generate heuristics using a small labeled performance. There is a need to account for both the diver- dataset to assign training labels to a large, unlabeled dataset sity and performance of the heuristics as a set. Empirically, in the weak supervision setting. A summary of our contri- balancing both aspects improves end model performance by butions are as follows: up to 18.20 F1 points compared to selecting the heuristic set that labels the most datapoints. • We describe the system architecture, the iterative pro- Termination Condition. Users stop generating heuristics cess of generating heuristics, and the optimizers used when they have exhausted their domain knowledge. An au- in the three components (Section 3). We also show tomated method, however, can continue to generate heuris- that our automated optimizers can affect end model tics that deteriorate the overall quality of the training labels performance by up to 20.69 F1 points (Section 5). assigned to the unlabeled data, such as heuristics that are • We present a theoretical guarantee that Snuba will ter- worse than random for the unlabeled data. Not account- minate the iterative process before the noise in heuris- ing for performance on the unlabeled dataset can affect end tics surpasses the threshold to which statistical tech- model performance by up to 7.09 F1 points. niques are robust (Section 4). This theoretical result Our Approach. To address the challenges above, we in- translates to improving end model performance by up troduce Snuba, an automated system that takes as input to 7.09 F1 points compared to generating as many a small labeled and a large unlabeled dataset and outputs heuristics as possible (Section 5). probabilistic training labels for the unlabeled data, as shown in Figure 1. These labels can be used to train a downstream • We evaluate our system in Section 5 by using Snuba la- machine learning model of choice, which can operate over bels to train downstream models, which generalize be- the raw data and generalize beyond the heuristics Snuba yond the heuristics Snuba generates.