Computer Vision News - The magazine of the algorithm community
November 2017
A publication by RSIP Vision

On the cover:
Can we see around corners? - Katie Bouman
Upcoming Events: AI EXPO
BEST OF ICCV: 33 pages!!! Presentations, People and Technology: Raquel Urtasun, Georgia Gkioxari and …
Women in Computer Vision: Laura Leal-Taixé, Vicky Kalogeiton
Spotlight News
Project Management: Iffy If's by Ron Soferman
We Tried for You: Use Recurrent Neural Networks with Attention
Image Processing: Bones Segmentation from CT Scans
Review of Research Paper by Berkeley: Unpaired Image-to-Image Translation…

Read This Month (page 2)

Highlights: Tutorial - FAIR and Georgia Gkioxari; Spotlight News; We Tried for You: RNN with Attention; Research of the month by Berkeley; Project: Bones Segmentation; TorontoCity, Raquel Urtasun - UofT; Can we see around corners? Katie L. Bouman - MIT; Project Management by Ron Soferman; Upcoming Events: AI Expo NA.

Contents:
04 BEST OF ICCV Daily 2017
   Tutorial, with Georgia Gkioxari
   Presentations: TorontoCity, with Raquel Urtasun; Turning Corners Into Cameras, K. Bouman; Focal Track, with Qi Guo; SceneNet RGB-D, with John McCormac; Detect to Track, with C. Feichtenhofer; Weakly-Supervised Learning, Julia Peyre; Active Learning for Human Pose Est., B. Liu
   Women in Science: Laura Leal-Taixé, Vicky Kalogeiton
   Interview: Cristian Canton Ferrer - Facebook
37 Spotlight News: From elsewhere on the Web
38 Review of Research Paper: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks - by A. Spanier
48 Project Management: Iffy If's - by Ron Soferman
50 We Tried for You: RNN with Attention - by A. Spanier
55 Project in Computer Vision: Bones Segmentation from CT Scans
56 Computer Vision Events: AI Expo and Nov-Jan events

Welcome (page 3)

Editor: Ralph Anzarouth
Engineering Editor: Assaf Spanier
Publisher: RSIP Vision
Contact us / Give us feedback / Free subscription / Read previous magazines
Copyright: RSIP Vision. All rights reserved. Unauthorized reproduction is strictly forbidden.

Dear reader,

This November issue of Computer Vision News is obviously dedicated to the exceptional success of ICCV2017: expecting about 1,800 participants (following the 1,400 attendees of the previous edition), organizers were surprised by about 3,200 registrations, one more proof of the spectacular growth of the computer vision community.

RSIP Vision was obviously in the first row at ICCV: first, by publishing the very first ICCV Daily; second, by publishing today a very insightful BEST OF ICCV, which you can start reading on the next page. You will find in it short and long testimonies of the most impressive lectures and works, from Jitendra Malik to Michael Black, from Raquel Urtasun to Georgia Gkioxari and more. Among the many inspiring presentations, we dedicate our cover of Computer Vision News November to Katie Bouman of MIT, who taught us how we can see around corners: intriguing and exciting! Let me thank ICCV (in particular Marcello Pelillo and Nicu Sebe) for partnering with us and letting us cover the conference with our brand new ICCV Daily publication.

In addition to ICCV, you will read in this November issue of Computer Vision News many more articles, including: our own reviews of research and tools; the full preview of AI Expo in the Silicon Valley; the list of upcoming computer vision events; the Spotlight News; and more…

Enjoy the reading!

Ralph Anzarouth
Marketing Manager, RSIP Vision
Editor, Computer Vision News

Opening Talk (page 4)

General Chair Marcello Pelillo during the Opening Talk of ICCV2017. He was kind enough to dedicate one of his slides to the ICCV Daily, the new publication originated by the partnership between ICCV and Computer Vision News, the magazine of the algorithm community published by RSIP Vision. This was the first ICCV Daily ever, made possible by the conference chairs' resolute will to tool up ICCV with the same daily magazine as CVPR, ECCV, MICCAI and CARS. Once again, this concept, born from a rib of Computer Vision News, proved extremely popular and successful. Here is probably the right place to thank all those who helped this first ICCV Daily project become a reality: Nicu Sebe, Octavia Camps, Rita Cucchiara and the professional event staff of theoffice.it (in particular Federica and Laura).

Speakers (page 5)

Jitendra Malik at ICCV2017, talking at the Beyond Supervised Learning workshop. Quoting Donald Knuth, Jitendra said: "If it works once, it's a hack; if it works more than once, it's a technique!"

Michael Black at ICCV2017, answering questions at the PoseTrack Challenge: Human Pose Estimation and Tracking in the Wild workshop. Michael is now Distinguished Scholar at Amazon.

Tutorial - Georgia Gkioxari: Instance-level Visual Recognition (page 6)

Georgia Gkioxari is a postdoctoral researcher at FAIR. She received her PhD from UC Berkeley, where she was advised by Jitendra Malik. She is the organizer of the tutorial Instance-level Visual Recognition at ICCV2017.

Georgia, you organised a tutorial on Sunday. Can you tell us about it?
The tutorial was on instance-level visual recognition, which means that we tried to describe and cover the topics regarding scene understanding and object understanding.

Whose initiative was it?
I think it was a FAIR initiative from their researchers at Facebook AI Research. However, I was the one leading it, organising it, reaching out to speakers, making sure that everybody has their talks ready and they are all in sync.

Why is it important for us to get into this subject?
Object recognition and scene understanding have been very popular subjects and very popular fields of study in computer vision over a span of 30 years or more. It is very important to always keep up-to-date with the recent and best methods out there, and always try to make it clear to the audience, even if they are not specialists in this field, to understand what is going on.

What recent findings in this area were people able to learn about at the tutorial?
We actually covered a wide variety of topics. Ross Girshick presented a generalised description of R-CNN for object detection. Later on, Kaiming He covered Mask R-CNN and tried to show a different perspective of this work. I covered human-object interactions, which is a field that is growing right now and is of great interest to the community. Jifeng Dai covered video understanding. Last but not least, Justin Johnson tried to go beyond those topics and cover visual relationships as well as reasoning.

"I would like to see video understanding take off"

Computer Vision News already reviewed outstanding work by Georgia Gkioxari and FAIR. Read it here.
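For readers who want to try instance-level recognition along the lines Ross Girshick and Kaiming He presented, here is a minimal inference sketch. It is not the tutorial's own code: it assumes the Mask R-CNN implementation that torchvision ships today (well after ICCV2017), a COCO-pretrained model and a hypothetical input image, and is meant only to show what detection plus per-instance masks look like in practice.

```python
# Minimal instance-level recognition sketch (assumption: torchvision >= 0.13).
# Not the ICCV2017 tutorial code; "street_scene.jpg" is a hypothetical file.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Mask R-CNN with a ResNet-50 FPN backbone, COCO-pretrained weights.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street_scene.jpg").convert("RGB")
inputs = [to_tensor(image)]              # list of CxHxW tensors in [0, 1]

with torch.no_grad():
    outputs = model(inputs)              # one dict per input image

det = outputs[0]
keep = det["scores"] > 0.7               # simple confidence threshold
boxes = det["boxes"][keep]               # (N, 4) boxes as (x1, y1, x2, y2)
labels = det["labels"][keep]             # COCO category ids
masks = det["masks"][keep] > 0.5         # (N, 1, H, W) binary instance masks
print(f"kept {len(boxes)} instances")
```

Each kept instance comes with a box, a class label, a confidence score and a per-pixel mask, which is exactly the kind of output Mask R-CNN adds on top of the R-CNN family of detectors.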
Tutorial - Georgia Gkioxari (continued, page 7)

It seems that Facebook is getting the lead in this kind of subject. Is that right to say and if so, why is that?
I think that is a fair statement. I think that FAIR has… [we laugh at the unintended pun]

Yes, FAIR with big letters…
Exactly! I think that FAIR has some of the best researchers in the field of object recognition and scene understanding, with people such as Ross, Kaiming and Piotr Dollár, as well as others, such as Laurens van der Maaten, Rob Fergus, and so on. It is definitely a group of very good scientists who are experts in this, but this is not just what they can do. They can research and make progress in a lot of fields that are related to object recognition, but not only.

What findings would you like to see in the next couple of years?
That is a good question and it is a hard one, because I think the fields that we have not made a breakthrough in are plenty. I would identify two. I would like to see video understanding take off. I would like us to be able to understand videos better, not just through better datasets, but also through more efficient and effective methods. The other direction that I think we have not seen a lot of progress in is unsupervised learning. Currently, we are very good at learning and training systems with millions and millions of labelled data. However, it would be great if we could find more effective ways of learning through data, and use less and less labelled data to achieve the same performance.

"It would be great if we could find more effective ways of learning through data, and use less and less labelled data to achieve the same performance"

TorontoCity: Seeing the World With a Million Eyes (page 8)

Min Bai and Shenlong Wang are both PhD students at the University of Toronto, supervised by Professor Raquel Urtasun. All are part of the Uber Advanced Technologies Group (ATG) in Toronto, Canada (managed by Raquel). We spoke to Min, Shenlong and Raquel ahead of their poster today, which is co-authored with Gellért Máttyus, Hang Chu, Wenjie Luo, Bin Yang, Justin Liang, Joel Cheverie, and Sanja Fidler.

From left: Min, Shenlong and Raquel

Their work is about a super large-scale dataset, captured from different perspectives and with all kinds of different sensors: from top-down view and ground level, and with LIDAR and RGB camera. The key is to annotate the ground truth with existing high-definition maps. This allows people to train machine learning models that didn't exist before.

Shenlong says that a motivation for this work is that they would like the community to realise the importance of mapping. He says that mapping is a very important problem and there is not such a good benchmark to …

Human annotation is expensive.
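To make the map-as-ground-truth idea concrete, here is a small, entirely hypothetical sketch of how such data could be consumed for training. The directory layout, file names and the AerialTiles class are assumptions for illustration only; this is not the TorontoCity toolkit. The point is that the label masks are pre-rendered from existing HD-map polygons (for example building footprints), so no human annotation is involved.

```python
# Hypothetical loader for aerial tiles paired with map-derived label masks.
# Assumed layout: <root>/images/<tile_id>.png and <root>/map_masks/<tile_id>.png
import os
import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image

class AerialTiles(Dataset):
    """Pairs aerial RGB tiles with segmentation masks rasterized from a map."""

    def __init__(self, root):
        self.root = root
        self.ids = sorted(
            f[:-4] for f in os.listdir(os.path.join(root, "images"))
            if f.endswith(".png")
        )

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, i):
        tile_id = self.ids[i]
        img = Image.open(
            os.path.join(self.root, "images", f"{tile_id}.png")).convert("RGB")
        # Mask pre-rendered from HD-map polygons, i.e. no manual labelling.
        mask = Image.open(
            os.path.join(self.root, "map_masks", f"{tile_id}.png")).convert("L")
        img = torch.from_numpy(
            np.array(img, dtype=np.float32) / 255.0).permute(2, 0, 1)
        mask = torch.from_numpy(np.array(mask, dtype=np.int64))
        return img, mask

# Example usage (hypothetical path):
# loader = torch.utils.data.DataLoader(AerialTiles("toronto_tiles"), batch_size=4)
```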