Curriculum Vitae—Doug DeCarlo

Department of Computer Science
110 Frelinghuysen Road
Piscataway, NJ 08854-8019
[email protected]
http://www.cs.rutgers.edu/~decarlo
TEL: 732 445 2001 x1495

EDUCATION

• Ph.D., University of Pennsylvania, Department of Computer and Information Science, July 1998.
  Dissertation: Generation, Estimation and Tracking of Faces
  Advisor: Dimitris Metaxas
• B.S. in Computer Science/Mathematics, Carnegie Mellon University, 1991.
• B.S. in Computer Engineering, Carnegie Mellon University, 1991.

APPOINTMENTS

• Associate Chair, Department of Computer Science, Rutgers University, Fall 2006–present.
• Associate Professor, Department of Computer Science, with a joint appointment in the Rutgers Center for Cognitive Science (RuCCS), Rutgers University, 2005–present.
• Visiting Fellow, Department of Computer Science, Princeton University, Fall 2002.
• Assistant Professor, Department of Computer Science, with a joint appointment in the Rutgers Center for Cognitive Science (RuCCS), Rutgers University, 1999–2005.
• Postdoctoral researcher, Rutgers Center for Cognitive Science (RuCCS), Rutgers University, 1998–1999.

REFEREED JOURNAL PUBLICATIONS

• Exaggerated Shading for Depicting Shape and Detail. Szymon Rusinkiewicz, Mike Burns, Doug DeCarlo. In ACM Transactions on Graphics (Special Issue for ACM SIGGRAPH 2006), July 2006, 25(3), pages 1199–1205.
• Line Drawings from Volume Data. Mike Burns, Janek Klawe, Adam Finkelstein, Szymon Rusinkiewicz, Doug DeCarlo. In ACM Transactions on Graphics (Special Issue for ACM SIGGRAPH 2005), July 2005, 24(3), pages 512–518.
• Speaking with Hands: Creating Animated Conversational Characters from Recordings of Human Performance. Matthew Stone, Doug DeCarlo, Insuk Oh, Christian Rodriguez, Adrian Stere, Alyssa Lees and Chris Bregler. In ACM Transactions on Graphics (Special Issue for ACM SIGGRAPH 2004), July 2004, 23(3), pages 506–513.
• Specifying and Animating Facial Signals for Discourse in Embodied Conversational Agents. Doug DeCarlo, Matthew Stone, Corey Revilla and Jennifer J. Venditti. In Computer Animation and Virtual Worlds, March 2004, 15(1), pages 27–38.
• Suggestive Contours for Conveying Shape. Doug DeCarlo, Adam Finkelstein, Szymon Rusinkiewicz, Anthony Santella. In ACM Transactions on Graphics (Special Issue for ACM SIGGRAPH 2003), July 2003, 22(3), pages 848–855.
• Focusing on the Essential: Considering Attention in Display Design. Patrick Baudisch, Doug DeCarlo, Andrew T. Duchowski, and Wilson S. Geisler. In Communications of the ACM, March 2003, 46(3), pages 60–66.
• Stylization and Abstraction of Photographs. Doug DeCarlo and Anthony Santella. In ACM Transactions on Graphics (Special Issue for ACM SIGGRAPH 2002), July 2002, 21(3), pages 769–776.
• Adjusting Shape Parameters using Model-Based Optical Flow Residuals. Doug DeCarlo and Dimitris Metaxas. In IEEE Transactions on Pattern Analysis and Machine Intelligence, June 2002, 24(6), pages 814–823.
• Optical Flow Constraints on Deformable Models with Applications to Face Tracking. Doug DeCarlo and Dimitris Metaxas. In The International Journal of Computer Vision, July 2000, 38(2), pages 99–127.
• Shape Evolution with Structural and Topological Changes using Blending. Doug DeCarlo and Dimitris Metaxas. In IEEE Transactions on Pattern Analysis and Machine Intelligence, November 1998, 20(11), pages 1186–1205.
• Blended Deformable Models. Doug DeCarlo and Dimitris Metaxas. In IEEE Transactions on Pattern Analysis and Machine Intelligence, April 1996, 18(4), pages 443–448.

REFEREED CONFERENCES AND WORKSHOPS

• Separating Parts from 2D Shapes using Relatability. Xiaofeng Mi, Doug DeCarlo. In Proceedings of the 11th IEEE International Conference on Computer Vision (ICCV) 2007, October 2007, pages 1–8.
• Highlight Lines for Conveying Shape. Doug DeCarlo, Szymon Rusinkiewicz. In International Symposium on Non-Photorealistic Animation and Rendering (NPAR) 2007, August 2007, pages 63–70.
• Directing Gaze in 3D Models with Stylized Focus. Forrester Cole, Doug DeCarlo, Adam Finkelstein, Kenrick Kin, Keith Morley, Anthony Santella. In Proceedings of the 17th Eurographics Symposium on Rendering (EGSR), June 2006, pages 377–387.
• Gaze-Based Interaction for Semi-Automatic Photo Cropping. Anthony Santella, Maneesh Agrawala, Doug DeCarlo, David Salesin, and Michael Cohen. In Proceedings of the 2006 Conference on Human Factors in Computing Systems (CHI), April 2006, pages 771–780.
• Interactive Rendering of Suggestive Contours with Temporal Coherence. Doug DeCarlo, Adam Finkelstein, and Szymon Rusinkiewicz. In International Symposium on Non-Photorealistic Animation and Rendering (NPAR) 2004, June 2004, pages 15–24.
• Visual Interest and NPR: an Evaluation and Manifesto. Anthony Santella and Doug DeCarlo. In International Symposium on Non-Photorealistic Animation and Rendering (NPAR) 2004, June 2004, pages 71–78.
• Robust Clustering of Eye Movement Recordings for Quantification of Visual Interest. Anthony Santella and Doug DeCarlo. In Eye Tracking Research and Applications (ETRA) 2004, March 2004, pages 27–34.
• Crafting the Illusion of Meaning: Template-based Generation of Embodied Conversational Behavior. Matthew Stone and Doug DeCarlo. In 16th International Conference on Computer Animation and Social Agents (CASA) 2003, May 2003, pages 11–16.
• Towards Real-time Cue Integration by Using Partial Results. Doug DeCarlo. In 7th European Conference on Computer Vision (ECCV) 2002, May 2002, pages 327–342.
• Abstracted Painterly Renderings Using Eye-Tracking Data. Anthony Santella and Doug DeCarlo. In International Symposium on Non-Photorealistic Animation and Rendering (NPAR) 2002, June 2002, pages 75–82.
• Making Discourse Visible: Coding and Animating Conversational Facial Displays. Doug DeCarlo, Matthew Stone, Corey Revilla and Jennifer Venditti. In Computer Animation 2002, June 2002, pages 11–16.
• Top-down and Bottom-up Processes in 3-D Face Perception: Psychophysics and Computational Model. Thomas Papathomas and Doug DeCarlo. In Proceedings of the European Conference on Visual Perception (ECVP) 1999, Perception, vol. 28 supplement (abstracts), August 1999, pages 112–113.
• Combining Information using Hard Constraints. Doug DeCarlo and Dimitris Metaxas. In Proceedings of IEEE Computer Vision and Pattern Recognition (CVPR) 1999, July 1999, pages 132–138.
• An Anthropometric Face Model using Variational Techniques. Doug DeCarlo, Dimitris Metaxas and Matthew Stone. In Proceedings of ACM SIGGRAPH 1998, August 1998, pages 67–74.
• Deformable Model-Based Shape and Motion Analysis from Images using Motion Residual Error. Doug DeCarlo and Dimitris Metaxas. In Proceedings of the 6th IEEE International Conference on Computer Vision (ICCV) 1998, January 1998, pages 113–119.
• Deformable Model-Based Face Shape and Motion Estimation. Doug DeCarlo and Dimitris Metaxas. In International Conference on Automatic Face and Gesture Recognition, October 1996, pages 146–150.
• The Integration of Optical Flow and Deformable Models with Applications to Human Face Shape and Motion Estimation. Doug DeCarlo and Dimitris Metaxas. In Proceedings of IEEE Computer Vision and Pattern Recognition (CVPR) 1996, June 1996, pages 231–238.
• Topological Evolution of Surfaces. Doug DeCarlo and Jean Gallier. In Graphics Interface 1996, May 1996, pages 194–203.
• Adaptive Shape Evolution Using Blending. Doug DeCarlo and Dimitris Metaxas. In Proceedings of the 5th IEEE International Conference on Computer Vision (ICCV) 1995, June 1995, pages 834–839.
• Integrating Anatomy and Physiology for Behavior Modeling. Doug DeCarlo, Jonathan Kaye, Dimitris Metaxas, J.R. Clarke, Bonnie Webber and Norm Badler. In Interactive Technology and the New Paradigm for Healthcare (Medicine Meets Virtual Reality 1995), IOS Press and Ohmsha, 1995, pages 81–87.
• Blended Deformable Models. Doug DeCarlo and Dimitris Metaxas. In Proceedings of IEEE Computer Vision and Pattern Recognition (CVPR) 1994, June 1994, pages 566–572.

COURSES AND COURSE NOTES

• "Line Drawings from 3D Models", half-day course at SIGGRAPH 2005, course #7 (with slides, course notes and source code). Co-taught with Szymon Rusinkiewicz and Adam Finkelstein.

EXTERNAL SUPPORT

• SGER: Perceptually-inspired Algorithms for Shape Processing and Abstraction. Doug DeCarlo, PI. NSF CCF #0741801, $60,000, September 2007, 1 year.
• Depiction and Perception of Shape in Line Drawings. Doug DeCarlo, PI; Matthew Stone, Manish Singh (Psychology), Co-PIs. NSF CCF #0541185, $300,000, October 2006, 3 years.
• Electronic Arts, gift (the source code of NHL 2004, valued at $800,000), June 2005.
• Making Discourse Visible: Realizing Conversational Facial Displays in Interactive Agents. Matthew Stone, PI; Doug DeCarlo, co-PI. NSF Human Language and Communication #0308121, $411,323 (including REU supplements), September 2003, 3 years.
• Evaluating Non-Photorealistic Rendering. Doug DeCarlo, PI. NSF CCF SGER #0227737, $73,116, September 2002, 1 year.
• MRI: Multisensory Human Interaction Measurement and Synthesis for Computer Graphics and Interactive Virtual Environments. Dinesh Pai, PI; Dimitris Metaxas, Doug DeCarlo, Thu Nguyen, Co-PIs. NSF CNS #0215887, $259,598, July 2002, 3 years.
• A Laboratory for Interactive Applications for Computational Vision and Language. Sven Dickinson, PI; Suzanne Stevenson, Matthew Stone, and Douglas DeCarlo, co-PIs. CISE Research Instrumentation #9818322, $76,928 (+ $40,000 matching), January 1999, 3 years.

PROFESSIONAL ACTIVITIES

Associate Editor, ACM Transactions on Graphics
Editorial Board, The Visual Computer
Conference