Speaker Bios
2016 AIPR: Keynote and Invited Talks Schedule

Tuesday, October 18 - Deep Learning & Artificial Intelligence
9:00 AM - 9:45 AM - Jason Matheny, Director, IARPA
11:25 AM - Noon - Trevor Darrell, EECS, University of California-Berkeley
1:30 PM - 2:15 PM - Christopher Rigano, Office of Science and Technology, National Institute of Justice
3:30 PM - 4:05 PM - John Kaufhold, Deep Learning Analytics

Wednesday, October 19 - HPC & Biomedical
8:45 AM - 9:30 AM - Richard Linderman, SES, Office of the Assistant Secretary of Defense, Research and Engineering
11:45 AM - 12:30 PM - Vijayakumar Bhagavatula, Associate Dean, Carnegie Mellon University
2:00 PM - 2:45 PM - Patricia Brennan, Director, National Library of Medicine, National Institutes of Health
4:00 PM - 4:45 PM - Nick Petrick, Center for Devices and Radiological Health, Food and Drug Administration
7:15 PM - 9:15 PM - Terry Sejnowski, Salk Institute & UCSD, Banquet Speaker - Deep Learning II

Thursday, October 20 - Big 3D Data & Image Quality
8:45 AM - 9:30 AM - Joe Mundy, Brown University & Vision Systems
1:00 PM - 1:45 PM - Steven Brumby, Descartes Labs

Biographies and Abstracts (where available)

Banquet Talk: Deep Learning II, Terry Sejnowski, Salk Institute & UCSD: Deep learning is based on the architecture of the primate visual system, which has a hierarchy of visual maps with increasingly abstract representations. This architecture reflects what was known about the properties of neurons in the visual cortex in the 1960s. The next generation of deep learning, based on more recent understanding of the visual system, will be more energy efficient and have much higher temporal resolution.

Speaker Bio: Dr. Terrence Sejnowski received his Ph.D. in physics from Princeton University and was a postdoctoral fellow at Princeton University and Harvard Medical School. He served on the faculty of Johns Hopkins University and was a Wiersma Visiting Professor of Neurobiology and a Sherman Fairchild Distinguished Scholar at Caltech. He is now an Investigator with the Howard Hughes Medical Institute and holds the Francis Crick Chair at The Salk Institute for Biological Studies. He is also a Professor of Biology at the University of California, San Diego, where he is co-director of the Institute for Neural Computation and co-director of the NSF Temporal Dynamics of Learning Center. He is a pioneer in computational neuroscience whose goal is to understand the principles that link brain to behavior. His laboratory uses both experimental and modeling techniques to study the biophysical properties of synapses and neurons and the population dynamics of large networks of neurons. His group has developed new computational models and new analytical tools to understand how the brain represents the world and how new representations are formed through learning algorithms that change the synaptic strengths of connections between neurons. He has published over 500 scientific papers and 12 books, including The Computational Brain with Patricia Churchland. Sejnowski is President of the Neural Information Processing Systems (NIPS) Foundation, which organizes an annual conference attended by over 2,000 researchers in machine learning and neural computation, and is the founding editor-in-chief of Neural Computation, published by MIT Press. He is a member of the Institute of Medicine, the National Academy of Sciences, and the National Academy of Engineering, one of only ten current scientists elected to all three national academies.
Keynote Talk: Value of Machine Learning for National Security, Jason Matheny, Director, IARPA: This talk will describe how machine learning has affected a range of national security missions, the value of past research, and priorities for future research.

Speaker Bio: Dr. Jason Matheny is Director of the Intelligence Advanced Research Projects Activity (IARPA), a U.S. Government organization that invests in high-risk, high-payoff research in support of national intelligence. Before IARPA, he worked at Oxford University, the World Bank, the Applied Physics Laboratory, the Center for Biosecurity, and Princeton University, and is the co-founder of two biotechnology companies. His research has been published in Nature, Nature Biotechnology, Biosecurity and Bioterrorism, Clinical Pharmacology and Therapeutics, Risk Analysis, Tissue Engineering, and the World Health Organization's Disease Control Priorities, among others. Dr. Matheny holds a PhD in applied economics from Johns Hopkins, an MPH from Johns Hopkins, an MBA from Duke, and a BA from the University of Chicago. He received the Intelligence Community's Award for Individual Achievement in Science and Technology.

Invited Talk: Perceptual representation learning across diverse modalities and domains, Trevor Darrell, University of California, Berkeley: Learning of layered or "deep" representations has provided significant advances in computer vision in recent years, but has traditionally been limited to fully supervised settings with very large amounts of training data. New results show that such methods can also excel when learning in sparse or weakly labeled settings across modalities and domains. I'll review state-of-the-art models for fully convolutional pixel-dense segmentation from weakly labeled input, and will discuss new methods for adapting deep recognition models to new domains with few or no target labels for categories of interest. As time permits, I'll present recent results on long-term recurrent network models that can learn cross-modal descriptions and explanations.

Speaker Bio: Prof. Darrell is on the faculty of the CS Division of the EECS Department at UC Berkeley, and he is also appointed at the UC-affiliated International Computer Science Institute (ICSI). Darrell's group develops algorithms for large-scale perceptual learning, including object and activity recognition and detection, for a variety of applications including multimodal interaction with robots and mobile devices. His interests include computer vision, machine learning, computer graphics, and perception-based human-computer interfaces. Prof. Darrell was previously on the faculty of the MIT EECS department from 1999 to 2008, where he directed the Vision Interface Group. He was a member of the research staff at Interval Research Corporation from 1996 to 1999, and received the S.M. and Ph.D. degrees from MIT in 1992 and 1996, respectively. He obtained the B.S.E. degree from the University of Pennsylvania in 1988, having started his career in computer vision as an undergraduate researcher in Ruzena Bajcsy's GRASP lab.

Speaker Bio: Chris Rigano serves as a senior computer scientist at the National Institute of Justice, Office of Justice Programs, U.S. Department of Justice, where he is responsible for advancing the science of person-based analytics for the federal, state, local, and tribal criminal justice communities. His work includes research in image and video analytics, biometrics, and social network analysis.
Prior to this assignment, Mr. Rigano worked on exploratory analytics as a contractor for the intelligence community. Mr. Rigano holds MS degrees in Computer Science from North Carolina State University and in Software Engineering from Monmouth University. He also holds a BS degree in Computer Science from Monmouth University and a BA degree in Criminal Justice from Iona College. He served in the US Army, reaching the rank of Captain. His experience includes working as a technical direction agent at the Office of Naval Research for the MITRE Corporation. His career spans multiple research areas in computer communications, cyberspace, and social network and media analytics.

Invited Talk: Deep Learning Past, Present and Near Future, John Kaufhold, Deep Learning Analytics: In the past five years, deep learning has become one of the hottest topics at the intersection of data science, society, and business. Google, Facebook, Microsoft, Baidu, and other companies have embraced the technology, and in domain after domain deep learning is outperforming both people and competing algorithms at practical tasks. ImageNet Hit@5 object recognition error rates have fallen by more than 85% since 2011, and deep networks can now recognize 1,000 different objects in photos faster and better than you can. All major speech recognition engines (Google's, Baidu's, Apple's Siri, etc.) now use deep learning. In real time, deep learning can automatically translate a speaker's voice in one language to the same voice speaking another language. Deep learning can now beat you at Atari and Go. These breakthroughs are visible both as product offerings and as competitive results on international open benchmarks. This recent disruptive history of deep learning has led to a student and startup stampede to master key elements of the technology, and this landscape is evolving rapidly. The abundance of open data, Moore's law, Koomey's law, Dennard scaling, an open culture of innovation, a number of key algorithmic breakthroughs in deep learning, and a unique investment at the intersection of hardware and software have all converged as factors contributing to deep learning's recent disruptive successes. And continued miniaturization in the direction of internet-connected devices, in the form of the "Internet of Things," promises to flood sensor data across new problem domains to an already large, innovative, furiously active, and well-resourced community of practice. But with disruptive AI technologies comes apprehension: we now enjoy deep learning benefits like Siri every day, but privacy concerns, economic dislocation, anxieties about self-driving cars, and military drones all loom on the horizon, and our legal system