Computer Vision News - The magazine of the algorithm community - October 2017

Exclusive Interview with Yoshua Bengio!

BEST OF MICCAI: 32 pages! Workshops, Presentations, Challenges, Tutorial
Women in Science: Sandrine De Ribaupierre, Christine Tanner
Project Management: 7 Tips by Ron Soferman
We Tried for You: FirstAid on Google Cloud in 30 minutes!
Spotlight News
Review of Research Paper by Apple: Learning from Simulated and Unsupervised Images through Adversarial Training
Upcoming Events
A publication by RSIP Vision

Read This Month: Yoshua Bengio, Dana Cobzas, Research of the Month (review by Assaf Spanier)



03 Welcome
04 Best of MICCAI Daily 2017 - Interview: Yoshua Bengio; Presentations: Benjamin Hou, Dana Cobzas; Women in Science: Sandrine De Ribaupierre, Christine Tanner; 4 Challenges; 7 Workshops; 1 Tutorial
37 Spotlight News: From elsewhere on the Web
38 Review of Research Paper: Learning from Simulated and Unsupervised Images through Adversarial Training
44 Project Management: 7 Tips by Ron Soferman
48 We Tried for You: FirstAid on Google Cloud in 30'!
55 Computer Vision Events: Calendar of August-October events

Dear reader,

This October issue of Computer Vision News opens with the exclusive interview that Professor Yoshua Bengio granted me at MICCAI 2017: in addition to Prof. Bengio's exceptional merits in the computer vision community, the reason why I recommend that you read this interview is the precious take-home thoughts he suggested during our fascinating discussion. I was myself fascinated by his remarkable kindness and availability in setting up this candid talk, which you will read on page 4. I am grateful to MICCAI for the opportunity to add this interview to the many others that we had the chance to conduct with major scientists in our community: Jitendra Malik, Michael Black, Nassir Navab and more. Today I wish to thank MICCAI (in particular Simon Duchesne and Wiro Niessen) for yet another reason: partnering with us and letting us cover the conference with our MICCAI Daily publication. We gained direct exposure to brilliant technology and personalities: have a glimpse of all this in our BEST OF MICCAI section, for the preparation of which we owe a lot to Tal Arbel, who was Satellite Events Chair. Before you ask, I will point out that this issue is not only about MICCAI. You will also read our regular reviews of outstanding research papers, as well as our regular sections: the project management column by RSIP Vision's CEO Ron Soferman; the list of upcoming computer vision events; the Spotlight News; and more… Enjoy the reading!

Computer Vision News
Editor: Ralph Anzarouth
Engineering Editor: Assaf Spanier
Publisher: RSIP Vision
Contact us | Give us feedback | Free subscription | Read previous magazines

Ralph Anzarouth
Marketing Manager, RSIP Vision
Editor, Computer Vision News

Copyright: RSIP Vision. All rights reserved. Unauthorized reproduction is strictly forbidden.

Interview: Yoshua Bengio

Prof. Yoshua Bengio is Full Professor at the Department of Computer Science and Operations Research, Université de Montréal. He granted me a fascinating interview at MICCAI, and his kindness is the most valuable lesson I retain from our meeting.

"I think the reason I've been successful in my career isn't because I'm smarter than anyone else. It's just that I'm able to focus a lot."

What would you say is the most precious lesson that you learned from your teachers?
Well, I would say the enthusiasm for learning, for discovering, for understanding.

Understanding is your main drive?
Yes. In fact, I would say that it's a physical thing. It's like a sensory pleasure, almost, when you realize something, understand, things are clicking. It's the "Eureka" moment. It's one of the foods for my happiness.

Does that mean that if you don't understand, it is like a physical wound?
Yes, especially if I believe that it's something that matters to me or that I can do something about.

It bothers you to see the solution there and not be able to grab it?
I would say it differently: when I'm thinking a lot about a problem, sometimes I have a little bit of a revelation of some idea, like coming in the side behind my mind, and I think of it like some little threads that are appearing. And if I don't pay attention, they would go away. But I can pull on those threads, and then more comes, and then maybe nothing comes for a while, and then something else, another thread, shows up that I can pull. And it's like an unpredictable process of discovery. But it works only if you think about it, if you make your mind ask yourself the questions over and over, and let that brew in your mind.

Do you ever consult with your students and see that maybe the solution comes from somebody who is not thinking on the problem all the time and sees it from the outside?
Correct! All the time. In fact, they spend more time, usually, thinking about a particular problem. I'm very dispersed, and I have many students, and so I would say there are two kinds of creativity that are really pillars of my research. One is alone: it's what I was talking about with those threads and so on.

And the other is in brainstorming with colleagues, researchers, students, and there are also "Eureka" moments from there. But it's a very different kind of process. The brainstorming, of course, is something that happens very fast and is very active, whereas the other one is something more meditative, so it's a very different type.

Can you remember a "wow" moment that you had with one of your students, somebody who surprised you with some particular spark?
Right, yeah! Recently, I worked with Benjamin Scellier, one of my Ph.D. students, on a problem that I've been thinking of a lot for the last few years, which has to do with how the brain might do something like backpropagation: this is the mathematical technique that is the workhorse of deep learning, and it's really behind a lot of the success we see, and we don't know if brains are doing something similar. I had a number of ideas that I fed him, and then he found something I hadn't thought about that's hard to explain because it's very mathematical, but I was really impressed when he came to me with that. It's been the basis of more work we've been doing since then.

"Science is not done alone. Science is a community effort. We build on each other's ideas all the time."

What are the best friend and the worst enemy of a scientist?
Best friend? Time, having time.

You'd like to have more?
Yes, yes.

And the worst enemy? Lack of time?
Lack of time, yes, yes. I would add something to put that into context: I think scientists - well, at least the way I've been working - rely on the ability to focus. So focus, concentration of thought, is crucial. And I think the reason I've been successful in my career isn't because I'm smarter than anyone else. It's just that I'm able to focus a lot. And of course, that's connected to the time, because you need time to focus. You can't be constantly bombarded by email and meetings and people, whatever. You need to have the mental space to focus. That takes time, and then also, to actually do it.

Do you have any tips for those for whom the focus is there, but they can't get it?
I don't know. Open your horizons. Look at other things. I get inspiration from all kinds of horizons. I read a lot. So, just being plunged in the community - remember, I told you there's these moments alone, and then there's moments of brainstorming. So logically, I get a lot of inspiration and ideas from reading other scientists' work. A lot of what makes a good scientist is that he or she spends a lot of time reading, talking to other scientists who understand what they're doing. That's a huge thing.

Science is not done alone. Science is a community effort. We build on each other's ideas all the time.

Will science ever be able to master time?
I don't know. I'm not a physicist.

I see. But you'd like that!
Sure.

"Listen carefully to what the person is saying, and really concentrate on that person's words!"

Any advice for the young students who join this conference or another conference for the first time? What should they do to get the maximum to move forward in their career?
Well, at any conference, or when you read papers, you should focus on trying to understand what is going on. It's not always easy because you get into something new, and it's an effort. You have to accept that it's an effort, that part of yourself will maybe want to go and have lunch or check your email or do something else than actually do the effort of trying to understand. And again, listen carefully to what the person is saying, and really concentrate on that person's words. The same thing when you read a paper. That makes a huge difference.

We are at MICCAI 2017 in Canada. Can you add a word about Medical Imaging?
I think that what this community is doing is incredibly important, and personally, I care a lot about seeing AI being used for good. And of course, medical implications - medical is central to this, and medical images are probably the golden place for current deep learning, and AI in general, to make an impact on society. Right now some people may feel like it's hard, mostly because there is not enough data, but it's changing. It's a social issue. There is data. There's going to be even more. It's not clear yet how it's going to be shared and so on, but the size of the data sets is greatly increasing. The potential impact of these things is going to be even more than what people think right now.

Prof. Bengio during his keynote lecture at MICCAI 2017. Sitting on stage: Maxime Descoteaux and Tal Arbel

Challenge: BraTS
by Spyridon Bakas

The Multimodal Brain Tumor Segmentation (BraTS) challenge is an international competition whose participants attempt to address the problem of delineation and partitioning of the most common adult brain tumors (glioma/glioblastoma). Segmenting the various partitions/sub-regions of these brain tumors can clinically help towards personalizing radiation treatment in patients via customized target volume definitions, allowing for refined personalized dose escalation planning on radiation regimens. BraTS 2017 went beyond the sole task of segmentation and called for feature extraction and machine learning approaches towards the prediction of patient overall survival from pre-surgical multi-parametric MRI (mpMRI) scans.

With a continuously growing publicly available dataset of almost 500 pre-surgical mpMRI scans from 16 institutions, BraTS focuses on creating a standardized dataset to benchmark segmentation algorithms towards identifying the current state-of-the-art. The amount of available data in BraTS 2017, with the accompanying manually-revised ground truth, enables both generative and discriminative methods to be applied towards providing solutions to the problem of segmentation. This year alone, BraTS had 497 registered teams that downloaded the training data (including 285 mpMRI scans with accompanying ground truth) and 53 teams that finally participated in the challenge.

BraTS 2018 will continue addressing the same problems, leading to a comprehensive post-challenge analysis manuscript evaluating whether fine performance differences across algorithms affect further analysis based on extracted features, or even further prediction tasks such as that of overall survival.

BraTS 2017 top-ranking teams on the Segmentation (Seg) and Survival Prediction (SPred) tasks. From left to right: Tom Vercauteren (University College London - #2 Seg), Tsai-Ling Yang (National Taiwan University of Science and Technology - #3 (tie) Seg), Fabian Isensee (German Cancer Research Center - #3 (tie) Seg), Konstantinos Kamnitsas (Imperial College London - #1 Seg), Zeina Shboul (Old Dominion University - #1 SPred), Spyridon Bakas (University of Pennsylvania - BraTS 2017 Lead Organizer), Alain Jungo (University of Bern - #2 SPred), Craig H. Meyer (University of Virginia - #3 SPred).

Workshop: CVII-STENT
by Su-Lin Lee

"I encourage people to [work hard] in these competitions and the results can be positive!"

The author of this report, Su-Lin Lee, is a Lecturer at The Hamlyn Centre, Imperial College London. Here she is with the night's poutine.

It's the 6th year of the CVII-STENT (Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting - yes, it's a mouthful) workshop! This MICCAI workshop has been bringing together academic, clinical, and industrial researchers in the field of endovascular interventions to discuss the state-of-the-art in the area. This year, there was a little more to celebrate with the recent release of the CVII-STENT book in the Elsevier-MICCAI Society Book series, presenting imaging, treatment, and computer-assisted technologies for diagnostic and intraoperative vascular imaging and stenting. The contributors of the chapters are all workshop regulars, and we hope that the book will be a useful reference to those in or just entering the field.

Invited speakers Prof. Guy Cloutier (CHUM Research Centre, Canada) and Katharina Breininger (Siemens Healthcare, Germany) kicked off a lot of discussion on IVUS (intravascular ultrasound) segmentation and on applications in coiling and stenting, respectively. Katharina highlighted the main ways in which to aid endovascular applications: improve hardware, enhance image features, extract additional information from the images, and integrate information from other image modalities. Guy discussed the new industrial collaboration he forged after taking part in a past CVII-STENT challenge on IVUS segmentation - "I encourage people to [work hard] in these competitions and the results can be positive!" We're hoping to bring back a challenge in a future workshop - watch this space!

The CVII-STENT workshop at lunchtime! There were fantastic talks by Katharina Breininger (front row, second from left) and Guy Cloutier (front row, fourth from right).

The oral sessions were no less interesting, with topics ranging from deep learning for vessel segmentation to stent localisation in endovascular imaging; there were plenty of discussion points. I, Su-Lin Lee, and Luc Duong (École de technologie supérieure, Canada) presented on robotics and motion compensation, respectively, in endovascular procedures. Unexpectedly, there was a little overlap in our presentations, where we both agreed that in this day of automation and self-driving cars, we're moving towards robotic control and the necessary improved imaging and navigation for these advanced systems. Luc highlighted one particular issue that can be overlooked in academic research in this field: "How can you improve the patient well-being?" We need to bridge the gap that exists between us, the technologists, and the clinicians who are performing these procedures on a daily basis.

The workshop ended with a discussion on the future of CVII and stenting. There was a general consensus that the field requires larger datasets for validation and for the implementation of deep learning techniques. There was some general discussion on the development of a data collection protocol that would work worldwide and that could lead to a large collaborative research database. It was also noted that there are a number of researchers working in the field of vascular segmentation who were not aware of the workshop. Are you working in this field? Are you interested in meeting other researchers in the area? We hope to see you at next year's workshop!

Finally, what is a celebration without a few drinks? We ended the night at Le Bureau de Poste, where much poutine and beer were had - it did seem a little perverse though to have such a heart-clogging repast after a workshop on the treatment of heart disease!

The CVII-STENT group at Le Bureau de Poste. Co-organisers Simone Balocco (front, centre) and Luc Duong (back, left) seem very happy with the success of the workshop!
Women in Science: Sandrine De Ribaupierre

Sandrine De Ribaupierre is Associate Professor in Neurosurgery at the University of Western Ontario.

Sandrine, tell us about your work.
Sure. I'm partly a clinician and a researcher, so it means that 95% of my work is clinical work where I do pediatric neurosurgery, so I operate on brains of little kids and teenagers. Then the other 5% I'm trying to do research. Typically, the type of research I'm interested in is surgical simulators, to try to make it better for my trainees. It's visualization of medical images, to make it better for all of us, and then it's also functional imaging, looking at fMRI and DTI along lifespans - so little babies, we're actually even going to try fetal MRI, and then going to pediatric, to young adults, to older adults - to see how we think and try to understand a bit better about the brain, because a lot of things are still unknown.

Why did you choose to specialize in the brain?
Because it's fascinating. First, this is what's driving you every day. Right? If you didn't have a brain, you wouldn't wake up in the morning to go to work. But I think it's where we have the most unknown things, so the unknown and the idea of trying to find out more about it was what interested me.

It's also the most difficult part, probably. You could have chosen easier tasks.
Yeah, but difficulty is challenging, so for me at least I need to do something that's new and difficult.

What is the most challenging side of this work?
To be able to do both research and clinical work, just because of the time constraint. I think the most difficult part of research, per se, in our world right now is to try to find enough grants to be able to actually get students to help us. I think the important

“Find the right balance between professional and family life…”

part about having graduate students working on a project is the fact that they can learn as well. It's to be able to interest young people in new areas, because obviously we could also try to do research on our own, where you'd need less funding, but then the problem is once you're gone there is no newer generation; it's a big gap, and therefore it's not useful.

Keynote at the DTI challenge, MICCAI 2015 in Munich. Just like during our interview, Sandrine is holding little Maia in her arms.

What aspect of working with the brain is the most difficult?
That's a good question. [Laughs]

Thank you.
[Laughter] The most difficult thing, as a surgeon, working with the brain, is the fact that if you cut in the wrong area or if you actually don't know… Everybody is slightly different, so your language area is probably not in exactly the same place as somebody else's language area. If you operate and you go to the wrong area, then the problem is that person is going to have deficits, so they won't be able to speak anymore if you take language, or they won't be able to write, or they won't be able to do math, and a lot of things that we don't know. So, we know about the big functions; we know where the motor area is, we know where your brain is making you move, but: do we know why you like music, for example? Which area of the brain does that? Therefore, as a surgeon, if you go and disturb some of the networks, then the problem is that person may not enjoy music anymore and they change.

Their taste in music changes?
Either taste, or altogether and they don't enjoy it anymore.

So, if I let you operate on my brain, you can make me enjoy… ?
Maybe, or maybe it makes you hate all types of music, which would be horrible.

How can you stop your hands from trembling when you're operating on an organ so vital to your patient?
Don't drink too much coffee. [Laughs] If you don't drink too much coffee and you have steady hands overall… When you do surgery you typically focus on what you're doing; you don't think about: "Oh, what is that person doing?" and so on. You really have to focus on the task you're doing, and you're not thinking about the big picture. Before surgery and after surgery,

“I need to do something that’s new and difficult”

Sandrine in front of Mount Everest 12 Women in Science Computer Vision News

obviously, you think about the person and you know what they want, what they like and things, but once you do the surgery you want to focus exactly on what you're doing, and therefore you still have some pressure on your shoulders, but it's not about the whole picture - you just focus, and therefore your hands don't tremble that way.

Can you depersonalize what you're doing?
I think so, yeah.

Can you trick yourself into thinking you're dealing with meat?
No, I don't do it quite like that. It's more that you're dealing with a specific area in the brain, so you think: "I'm there…" For example, you open the skull, then you open the membrane around the brain, then you go to the brain, and then you actually dig into the brain, but you know which area it is. At that point, you don't say: "I wonder what that person ate for lunch." You're going to say: "I need to focus on what I'm doing right now. I know that just behind it's the motor area, so I don't have to go too much behind. Oh, watch out, there are vessels here", so you're focusing on a little task, so you don't have the passion about things; you're just focused and concentrated on the technical skills you're using at the time.

What is your worst nightmare?
My worst nightmare? Probably to harm a patient. My worst nightmare overall is probably not being able to do research anymore, because then it means every day is the same: you just treat patients, but you're not able to think and to learn new things.

What is your main drive?
I think my main drive is to discover new things: to better understand how we're made and how we think, and to think that maybe little kids can grow up knowing more about the brain, and therefore from a health point of view we're able to treat disease better, but also from a knowledge point of view, maybe we're able to teach better as well. If I hadn't chosen the brain, I would probably have chosen another surgical specialty, because I need to do something with my hands.

Generate a better situation with your hands.
Yes, exactly!

Have you ever experienced any difficulties being a woman, a neurosurgeon, or a teacher?
When I trained as a young neurosurgeon, one of the professors in Switzerland said: "Neurosurgery is not really for women, so are you sure you want to do that?", therefore I decided I wasn't going to train in that hospital; I was going to train somewhere else. That was my first challenge. After that, because the two hospitals were associated, I kept meeting with him and showed him that I was perfectly capable of being a neurosurgeon.

Did he change his mind?
I'm not sure he ever changed his mind, but there are a few women in Switzerland that are neurosurgeons now, therefore he probably, at least in front of people, will admit that women are able to do it. In his mind, I'm not sure. There are some older people that will never change their mind… [Laughs]

Was it even a drive for you to show he was wrong or did you not really care?
No, I didn't really care.

What would you suggest to a young woman starting her career, hearing the same remark?
You can either decide that you're going to prove them wrong, or you can decide to go somewhere else where people won't put obstacles in your work all day. Overall, I think you need to be a bit better than a man to be able to go up the echelons. Otherwise, you're not getting promoted. I think, overall, we still have to show that we're better in order to get the same position, but that's okay, it's easy. I also advise to find the right balance between professional and family life.

What do you know about the brain that the public isn't aware of?
People tend to imagine that every cell in the brain is useful, and that's not quite true. If you see when we're operating, people tend to say: "If you cut a millimeter away, then it's a catastrophe!" That's true in some areas, but it's not true everywhere. One of the myths about the brain is the fact that everything is useful and that brain surgery is really precise everywhere. That's true in some areas, but most of the time we have a bit of slack, a bit of margin of error around that.

What do you think science will discover about the brain before the end of your career?
One thing I'm trying to work on right now is looking at fMRI in the baby brain, in the fetus brain. Right now we're able to do it in little babies that are just born, or in a child, but what we want to try to do is in a fetus, and therefore try to see if we're able to discover things before they are even born. One of the things we would like to see is: we know that some of the mental illnesses, such as autism and schizophrenia, might have some ground even before people are born, during pregnancy; and maybe some of the environmental factors, things that a pregnant woman might do or not during her pregnancy, could influence how the brain of the baby is.

So you'd be able to treat them before birth?
Maybe. The main thing is: can we actually monitor it? We are not there yet. The idea would be to try to monitor it, and then if we can monitor it, then it might be possible to act on it and treat it before. That would be one of the things I want to achieve before the end of my career.

Presentation: Dana Cobzas
An unbiased penalty for sparse classification with application to neuroimaging data

Dana Cobzas is an assistant professor at MacEwan University in Edmonton, Alberta.

"We also want to know how different anatomical structures differ in shapes, rather than in volume"

Dana Cobzas gave a poster presentation on her current work, which focuses on the statistical side of population analysis. Dana's goal is to design a method that - given two populations, like the healthy and the diseased - will detect the significant anatomical differences between them, significant enough that they are hopefully related to the diseased population.

In the study, a new regularization is introduced, so it's very much on the mathematical side. Along with the new regularization, a new penalty was also introduced, and the problem is formulated as sparse classification. The usual penalty is L1, to impose sparseness; here the method instead uses what is called a SCAD penalty, which is a different type of regularization that is better than the original L1.
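To make the comparison with L1 concrete, here is a minimal sketch of the SCAD (Smoothly Clipped Absolute Deviation) penalty in Python. This is the standard Fan-Li formulation with the commonly used value a = 3.7; the function name and defaults are illustrative, not taken from Dana's implementation.

```python
def scad_penalty(beta, lam=1.0, a=3.7):
    """SCAD penalty on a single coefficient beta.

    Near zero it grows like the L1 penalty lam * |beta|, promoting
    sparsity; it then tapers quadratically and becomes constant for
    |beta| > a * lam, so large coefficients are not shrunk further --
    the 'unbiased' property referred to in the paper's title, in
    contrast with L1, which penalizes all magnitudes at the same rate.
    """
    b = abs(beta)
    if b <= lam:
        # L1-like region: linear growth
        return lam * b
    if b <= a * lam:
        # transition region: quadratic taper
        return (2 * a * lam * b - b * b - lam * lam) / (2 * (a - 1))
    # flat region: constant, no extra shrinkage of large coefficients
    return lam * lam * (a + 1) / 2
```

For example, with lam = 1, a coefficient of 0.5 is penalized exactly as under L1 (0.5), while a coefficient of 10 incurs only the flat value (a + 1)/2 = 2.35 instead of the L1 cost of 10, which is why SCAD does not bias large weights toward zero.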

Dana at her poster

“We also want to know how different anatomical structures differ in shapes, rather than in volume”

Dana works on this study with people from the department of statistics in the mathematical sciences, and thus, coming from a computer science environment, she admits, "I think this was maybe the most difficult part, to be able to understand the peculiarities of SCAD and work with them."

From an algorithmic point of view, Dana explains that the computer's task is the optimization of these problems. It results in an energy minimization which, unlike the classic L1 minimization, is non-convex, and also has a discontinuity, just like L1. "So, we use what is called ADMM to solve it, which is an optimization method called the Alternating Direction Method of Multipliers. That is for optimizing these kinds of problems, so that was the most difficult part, doing the numerical method." The method is designed such that it brings a solution: it's a numerical method for the optimization, and it is proved to converge to the solution.

If given the chance to add one more feature to the model, Dana says she would like to extend it to shape data, to define the same type of model for shapes, which is also very relevant because, she says, "we also want to know how different anatomical structures differ in shapes, rather than in volume."

OMIA / RETOUCH
by Hrvoje Bogunovic

"…a fascinating insight into the enormous potential and also some pitfalls of adaptive optics for retinal imaging."

Prof. Joseph Carroll from the Medical College of Wisconsin giving the keynote on adaptive optics

The ever-growing ophthalmic image analysis community got together for the 4th time at MICCAI 2017. This year the Workshop on Ophthalmic Image Analysis (OMIA) was organized jointly with the Retinal OCT Fluid Challenge (RETOUCH). Thus, in addition to presenting their work, the participants had an opportunity to go head to head on the tasks of retinal fluid detection and segmentation in optical coherence tomography (OCT) scans. Machine and deep learning methods were prominent players in the workshop for achieving successful classification, segmentation and embedding of 2D fundus or 3D OCT images. The keynote speaker, Prof. Joseph Carroll, gave us a fascinating insight into the enormous potential and also some pitfalls of adaptive optics for retinal imaging.

Donghuan Lu from Simon Fraser University presented with the 1st place award at the RETOUCH challenge by Hrvoje Bogunovic from the Medical University of Vienna

OMIA Website
RETOUCH Website

Eight teams participated in the RETOUCH challenge, which was convincingly won by a team from Simon Fraser University, Canada - congratulations! - who performed the best in both the fluid detection and the fluid segmentation tasks. Their approach consisted of a combination of a graph-cut for retinal layer segmentation, a fully convolutional neural network for subsequent fluid segmentation, and a random forest classification for post-processing.

"…a combination of a graph-cut for retinal layer segmentation, a fully convolutional neural network for subsequent fluid segmentation and a random forest classification for post-processing"

Interestingly, there were also several papers on retinal image analysis in the main conference, which shows that the interest in this topic is clearly growing. OMIA/RETOUCH aim to bring growing sections of the interdisciplinary research community together within MICCAI, arguably the best technical conference in the world for medical image analysis. Hence OMIA/RETOUCH offer a lot of added value!

See you all in Granada, Spain for OMIA/RETOUCH 2018!

Read more about MICCAI 2018 in this interview and editorial prepared with Alex Frangi and Julia Schnabel.

Left to right, the organizers of the event: Yanwu Xu, Emanuele Trucco, Xinjian Chen, Mona K. Garvin, Hrvoje Bogunovic, Freerk Venhuizen, Clara I. Sanchez

Workshop: DLMIA
by Gustavo Carneiro

"Should we invest more research resources in data acquisition, data labelling, and powerful deep learning model training?"

Workshop chairs: Jacinto Nascimento, Joao Manuel Tavares and Gustavo Carneiro

The third edition of the Deep Learning in Medical Image Analysis (DLMIA) workshop has witnessed its largest number of paper submissions (73) and acceptances (38), with excellent works on topics ranging from adversarial training to segmentation, image normalisation and registration.

Since DLMIA's first edition in 2015, the number of paper submissions and acceptances has almost doubled every year, showing a growing interest by the MICCAI community in this topic. In addition to the papers, DLMIA had three outstanding invited speakers.

Dr. Kevin Zhou presented his recent works on recognition, segmentation and parsing based on deep learning, where one of the main questions he posed to the audience was the importance of exploring contextual information in deep learning methods. This was an important question present in several discussions, not only at DLMIA, but at MICCAI 2017 at large. Should we invest more research resources in data acquisition, data labelling, and powerful deep learning model training? Or should we devote more research resources to the exploration of contextual information in order to allow for the development of lighter deep learning models?

Dr. Ronald Summers presented his unique perspective on how deep learning is influencing radiology. In fact, an important question that young radiologists are currently asking is whether deep learning will replace them in the future. Although it is unlikely that this will happen in the near future, it is quite clear that the systems currently being developed in medical image analysis will have an important impact on radiology.

“Exploring contextual information in deep learning methods”

An important message in Dr. Chris Pal’s talk was the need to develop deep learning methods that can be used in the automatic normalisation of medical images – this was also a topic explored by a couple of papers in the workshop. The industry talks promoted by DLMIA were also very well received by the audience, with NVidia promoting their new powerful GPUs and the Deep Learning Institute, and the Butterfly Network showcasing their impressive ultrasound machines.

To summarise, we believe that this DLMIA was one of the most successful satellite events of MICCAI 2017, with a large audience, high-quality papers and excellent invited talks. Finally, we are planning to organise DLMIA 2018 in Granada, and we hope to receive many new paper submissions!

Tutorial: Designing Benchmarks and Challenges…
by Adriënne Mendrik

“There is to date no long-term solution that fully facilitates the sustainability of grand challenges in the biomedical image analysis field” Adriënne Mendrik (Netherlands eScience Center)

Tutorial on Designing Benchmarks and Challenges for Measuring Algorithm Performance in Biomedical Image Analysis.

Challenges and benchmarks are a great way to advance the biomedical image analysis field. Michal Kozubek showed why this is important and what design choices have been used for challenges and benchmarks. However, with over 150 challenges that have now been organized in the field, the time is ripe to review these challenges. Lena Maier-Hein, Matthias Eisenmann and Annika Reinke presented some of the current weaknesses of grand challenges that were found when reviewing the challenges, and stressed the importance of well-described challenges. Adriënne Mendrik and Stephen Aylward presented a theoretical framework to help guide challenge design and redefine the objective of grand challenges, to either gain insight (insight challenge) or solve a problem (deployment challenge), urging challenge organizers to either follow a qualitative or a quantitative experimental design, and to correct for leaderboard climbing using the re-usable holdout (Dwork et al.) or the Ladder method (Blum et al.).

Organizer Michal Kozubek (Masaryk University, Czech Republic)
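For intuition, the Ladder method mentioned above can be sketched in a few lines. This is a simplified toy version of my own, not the actual mechanism from Blum and Hardt's paper (where the details and guarantees live); the function name and the improvement threshold `eta` are illustrative:

```python
def ladder_leaderboard(scores, eta=0.01):
    """Simplified Ladder: a new submission's score is published only if it
    beats the best published score by at least eta; otherwise the previous
    best is reported again. This blunts leaderboard climbing, where teams
    overfit the test set by chasing tiny, noise-level score gains."""
    published = []
    best = float("-inf")
    for s in scores:
        if s > best + eta:
            best = s  # a genuine improvement: publish the new score
        published.append(best)
    return published

# A noisy sequence of submissions: only real improvements move the board.
print(ladder_leaderboard([0.70, 0.703, 0.72, 0.719, 0.75], eta=0.01))
# -> [0.7, 0.7, 0.72, 0.72, 0.75]
```

The 0.703 and 0.719 submissions fall inside the noise band, so the board does not reward them.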

Lena Maier-Hein (DKFZ, Germany)

The session was closed with a discussion. One of the topics was the role of companies in organizing challenges. Companies have interesting data and clinically important questions for the community to work on, but restrictions apply that limit data sharing. Rather than sharing all test data, algorithm submission could be enforced, such that algorithms can be applied to the test data by the organizers, without having to release the data. Another discussion topic was the sustainability of grand challenges. Most research challenges are currently not well maintained due to lack of funding and personnel. Long-term funding for an infrastructure to host challenges, with data storage and cloud computing facilities, is currently lacking. Although there are multiple platforms, like Kaggle, COMIC (grand-challenges.org), COVALIC (Kitware), VISCERAL, CodaLab, Virtual Skeleton and DREAM challenges, there is to date no long-term solution that fully facilitates the sustainability of grand challenges in the biomedical image analysis field.

Stephen Aylward (Kitware, USA)

Women in Science: Christine Tanner

Christine Tanner is a senior researcher at ETH. Two days after this interview, she received the MICCAI IJCARS Best Paper Runner-up Award.

Christine, are you at MICCAI 2017 as a supervisor?
Yes, exactly. I’m here as a supervisor of Neerav Karani, a student of mine who is presenting a poster.

What is your work about?
I’m doing research in medical image processing. In particular, image registration. I’m supervising PhD students. I’m helping out professors to look after them.

Why did you choose medical imaging?
That was a chance choice. First, I went into industry, then I studied, and then I saw a post about medical image processing and I found it really interesting. I went there and wanted to do a research position. It was only possible to do a PhD, so I did my PhD.

Where was it?
That was at Kings College in London.

Where are you originally from?
I’m from Munich, Germany.

How is it to be a future scientist as a child in Munich?
I never imagined I would be one. Growing up, my parents did not have any higher education. I was good in math, but I was a girl and I was a bit shy. So I didn’t go to the class where there were all boys and just me doing mathematics. Instead I did the business thing, and I couldn’t quite imagine going to school too long. I like mathematics and I did an apprenticeship as a computer assistant for Siemens.

How did you like it at Siemens?
It was very good. I grew into my job. I was doing software engineering at the end, as well as doing tests and integration. But I felt like something was missing, so I went back to study.

Let’s go one step back. How is it that, in the relatively open and modern German society, a girl who is strong in math is less supported?
It was quite a while ago. I mean, on the one hand, it was kind of the mindset of my parents to say, “Oh, you don’t really need to study!” On the other hand, it was me, saying, “Okay, I don’t really want to be in school too long.” I wasn’t put under pressure and I was, as I said, a bit shy with all these boys and me the only girl in that class at 13 years old, or whatever I was then.

Does it ever still happen to be the only woman in a meeting with all men?
Yes, but I’m used to it now. This is just normal, and I have no problems with that at all. But when you’re young, it can be intimidating.

“You don’t buy confidence by running after others. You have to find it in yourself!”

At Sierre-Zinal, one of the best Trail races

How did you overcome that?
I guess I grew up. Well, my Siemens apprenticeship in software assistance was all women. So you grow up in this environment where there are all women, and you’re okay working with men. It wasn’t so threatening then. These were old men. They didn’t really mean anything, you know? I had lots of female environments for two years in this apprenticeship.

What is the most important thing that you’ve learned from your teachers?
That’s a tricky question.

Thank you. I’m here for that. [laughter]
My teachers gave me valuable knowledge, but I cannot think of anything in particular now.

What is the best thing that you’ve learned from your students? Is there anything that impressed you?
I mean, I see how the PhD students grow into becoming more confident. It’s not knowledge only, it’s also about the things you need to sustain in order to go through a PhD.

Resilience? Maturity?
Yes. I really like to be on their side. You get different characters. Every student comes with different properties and different abilities, and to pull out what they are good at and to be able to turn their projects around, so that they really are blooming in this project.

You like to help bring out the best in people.
Yes. I like that.

Do you see this evolution in your students every time?
Students are quite different. For some, from the beginning, you don’t have to do too much. They’re really great. Others are unfortunately sometimes lazy. I don’t mind students who are not so good, but they work hard and try. But if someone is lazy, I’m not very happy about that, because I think it’s really terrible that people don’t make the most out of what they are able to do.

When students arrive with a lot of potential but not much self-confidence, is it mostly females or males?
It might be more the females.

Can you advise a young female student who does not feel very confident? Where can she buy confidence?
No, you don’t buy confidence by running after others. You have to find it in yourself. This comes by going through hard times, not giving up, really sticking to the job. Of course, with some guidance. I mean, I’m here. I should be more experienced and be able to see the missing dots and get along, so that next time, the student knows better how to progress and what to do if the same problem comes.

Can you give one example of a female who started at a low level of confidence, and she found in herself the force to progress?
I have to go through my students.

Please do.
I can only say that there were some times where there was crisis striking, and I think I could stand by her and help her go through it.

Did you also see some of them quitting?
None of the female students quit. You need to encourage them more. They are not giving up, generally. I mean, by the time they’re PhD students, they have gone very far in our field. The only ones which gave up were male.

When you were a student, you saw other students giving up?
When I was a bachelor student, I was a mature student. Of course, I knew what I wanted. I had made my choice to go there and do this. Now, young students, they sometimes come out of school and study for no good reason. I can see that they might give up, because it wasn’t due to a purpose, due to a real inner calling to go there.

I’ve heard you mention several times the support that you give to your students. I guess this is a very, very important drive for you.
Yes, it is.

May I suspect that you’re trying to give more support to your students than you received when you were in their place?
I received support when I was a PhD student, but it was maybe not quite as close a relationship. But I got good support. I can’t complain, really.

Aren’t you trying to give more?
I do it differently.

Because you know how important it can be for the success of the student.
Maybe it’s just how I am. I’m more a positive person who is trying to get people with a positive aspect to something, instead of threatening things. Of course, if someone really underperforms and is lazy, I don’t like that at all, and I can be tough on them as well.

You want to help people, but you want them also to help themselves.
Absolutely.

What did you overcome?
My main thing to overcome was in myself, to be able to not do what everyone was expecting from me.

What is the key to finding it?
To listen to yourself, to understand what you’re about, to not get thrown around by the environment which tells you what you should be doing. Stop watching television, that’s all fake. Of course, it’s not easy. We grow up with certain role models.

That requires a very independent spirit.
Yes.

But not everybody has this independence easily inside them.
It’s not easy. It was not easy.

Studying for PhD in London

You are an independent spirit. How did you become that?
It took me longer than I wanted to.

But how did you become an independent spirit?
It came out. I mean, I had to break up. I was about to get married, have a house. I had to break up with my friend, and I had to leave home at a relatively old age and go my way.

What did you do?
I went and studied.

You didn’t marry him?
No.

He did not deserve you.
No, he’s a good guy…. [laughs] He’s a good guy, but I would now be at home, having kids, being bored. No!

“I would now be at home, having kids, being bored. No!”

You have no kids?
I have no kids, by choice. I’m not made for kids. I doubt it would be easy to do this job with kids. Yes, I have a husband. That’s enough.

Is he the kid in the family?
No, he isn’t. [Laughing] I’m joking.

What if a younger female listens to this interview and says, “Okay, is Christine telling me not to marry and not to have kids? Is this incompatible with the career that I want?”
That’s a tough question.

Sorry, that’s what I’m here for. [I smile]
Yes, I know. It depends on the man these days, if they have grown to be able to support women.

If a man is able to share the burden, this career is feasible, even with kids?
Yes. My career: I’m not a professor. It’s not possible with a husband who does not necessarily support you. But as a professor, I think you need more support.

Sharing the burden.
Yes.

Did you plan all this?
My career wasn’t planned at all. I just took what came on me. Sometimes I didn’t maybe do bigger steps, but I feel happy with what I do. I like to be confident with what I do. I like to do it. Maybe I could do more.

Are you a happy woman, Christine?
Yes.

I hope you’re happy forever.
Thank you.

Workshop: SWITCH
by Theo van Walsum

“A forum for interaction between clinicians and engineers is required for making progress in this field”

Keynote lecture by Kambiz Nael. The author of this report is standing on the left.

Stroke is a devastating disease, and a leading cause of death and serious long-term disability. Whereas traditionally the treatment for ischemic stroke is intravenous administration of tPA to dissolve the occlusion, several recent studies have shown that mechanical thrombectomy is an effective treatment for stroke patients. Patient selection and outcome prediction remain challenges; large studies are running to investigate which patients may benefit most from treatment.

In these studies, imaging plays a crucial role. It is in this context that the first SWITCH workshop was organized at the MICCAI conference, with the main goal of bringing together clinicians and engineers, to discuss challenges and opportunities for the medical imaging community in the management of stroke patients. To this end, three clinical experts introduced the participants to imaging for stroke patients. Dr. Roland Wiest from the Inselspital in Bern introduced MR imaging protocols for stroke patients, and their challenges: MR is a versatile imaging technique that permits the assessment of various relevant imaging parameters. Dr. Kambiz Nael similarly introduced the standard CT imaging protocol for stroke patients, consisting of an NCCT, followed by a contrast enhanced CT (either CTA, multiphase CTA or CT perfusion). Thirdly, Dr. Vitor Mendes Pereira discussed mechanical thrombectomy: previous studies failed to demonstrate the effectiveness of removing the occlusion with endovascular devices, but recent studies have changed this dramatically, and consistently demonstrate benefits of this treatment for stroke patients.

“The combination of the morning and afternoon session proved very positive…”

Keynote lecture by Roland Wiest

Next, four contributions of researchers from the MICCAI community were presented: one on the segmentation of ventricles in stroke patients, one on the effect of slice thickness in thrombus quantifications in CT, and two on quantifications of collaterals. The latter is relevant, as collaterals are assumed to play an important role in keeping the infarct core small and prolonging the time that a thrombectomy may be effective.

The workshop was finalized with an open discussion, with as main conclusions that: 1) a forum for interaction between clinicians and engineers is required for making progress in this field; 2) this workshop should thus be a recurrent event; and 3) the workshop organizers will discuss how to shape a future event, which may include a stroke-related challenge.

The SWITCH workshop was followed by the ISLES workshop; the combination of the morning and afternoon session proved very positive towards the design of future technical challenges.

“This workshop should be a recurrent event!”

Keynote lecture by Vitor Mendes Pereira

Workshop: BIVPCS
by João Manuel R. S. Tavares

The main goal of the MICCAI workshop on Bio-Imaging and Visualization for Patient-Customized Simulations (BIVPCS), initiated in MICCAI 2013, is to provide a platform for communications among specialists from complementary fields such as signal and image processing, mechanics, computational vision, mathematics, physics, informatics, computer graphics, bio-medical-practice, psychology and industry.

In this 2017 edition of BIVPCS, 12 highly motivating works were orally presented, which promoted interesting discussions concerning different advanced techniques, methods and applications, and the exploring of the translational potential of the related technological fields; particularly of Signal Processing, Imaging, Visualization, Biomechanics and Simulation. Hence, the workshop was an excellent opportunity for the participants to refine ideas for future work and to establish constructive cooperation for new and improved solutions of imaging and visualization techniques and modeling methods towards more realistic and efficient computer simulations.

“New and improved solutions of imaging and visualization techniques and modeling methods towards more realistic and efficient computer simulations”

BIVPCS was an excellent discussion forum concerning Medical Image Analysis, Biomechanics and Computer Simulation.

The CVII-STENT workshop at lunchtime! There were fantastic talks by Katharina Breininger (front row, second from left) and Guy Cloutier (front row, fourth from right).

Based on the review results and on the presentations given, the workshop organizers awarded the Best Paper prize to the work “Rapid Prediction of Personalised Muscle Mechanics: Integration with Diffusion Tensor Imaging”, by J. Fernandez, K. Mithraratne, M. Alipour, G. Handsfield, T. Besier and J. Zhang.

The workshop organizers would like to take this opportunity to acknowledge the MICCAI society, the MICCAI 2017 conference organizers and the members of the workshop program committee for all the support given, and the authors and the participants for the workshop success.

As to the future of BIVPCS, the workshop organizers are preparing a special issue of the Taylor & Francis journal “Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization” devoted to the workshop. They also intend to promote its edition again at MICCAI 2018 in Granada.

“An excellent opportunity for the participants to refine ideas for future work and to establish constructive cooperation…”

Workshop: BrainLes
by Alessandro Crimi

The BrainLes MICCAI workshop was held at MICCAI 2017 in Quebec City, offering an overview of medical image analysis advances in glioma, multiple sclerosis (MS), stroke and traumatic brain injuries (TBI). It was the third edition, and as usual we had researchers from the medical image analysis domain, radiologists and neurologists, discussing the most common brain diseases and traumas.

The event was held in conjunction with the challenges on Brain Tumor Segmentation (BraTS) and White Matter Hyperintensities (WMH) segmentation, which complement the program with their focus on segmentation of those lesions in medical imaging.

The keynote speakers of this year were:
- Tal Arbel, Professor at McGill University, who gave a talk about segmentation of various neurodegenerative diseases, including MS and brain cancer.
- Michel Bilello, Professor at the University of Pennsylvania, who gave a clinical perspective of computational neuro-oncology.
- Rivka Colen, Professor of Neuroradiology at the UT MD Anderson Cancer Center, who gave a talk focusing on how to merge genetics and medical imaging to better understand brain tumors.
- Jerry L. Prince, Professor at Johns Hopkins University, discussing the new frontiers of brain lesion segmentation from medical imaging.

Spyros Bakas, Lead Organizer of the Brain Tumor Segmentation Challenge (BraTS)

“We had researchers from the medical image analysis domain, radiologists and neurologists, discussing the most common brain diseases and traumas”

Michel Bilello giving a clinical perspective of computational neuro-oncology

Among the methods presented during the workshop and challenges, many techniques were using deep learning. This is a further confirmation of the potential of deep learning in segmentation of lesions. All participants joined the event with the aim of comparing their techniques applied to one disease and discussing whether they can be applied to other domains and diseases. Extended papers of the presented works - including keynote speeches - will be published in a volume edited by Springer.

We hope to see you at the next edition, and keep following us at the BrainLes website.

If you want to learn more on the author of this report Alessandro Crimi, read what he told us last year at MICCAI 2016 about his work on brain connectivity.

Challenge: WMH
by Hugo Kuijf

The challenge was organized by the author of this report Hugo Kuijf of the UMC Utrecht, the Netherlands

Team sysu_media receives the 1st prize in the WMH Segmentation Challenge for their U-Net approach. The team had the overall best score and the highest score on 3 of the 5 metrics.

The WMH Segmentation Challenge successfully compared twenty participating methods for the automatic segmentation of white matter hyperintensities (WMH) of presumed vascular origin. WMH are well visible on brain MR images and one of the main consequences of cerebral small vessel disease, playing a crucial role in stroke, dementia, and ageing. Quantification of WMH volume, location, and shape is of key importance in clinical research studies, but visual rating has important limitations.

Sixty 3T brain MR scans (T1 and FLAIR) with manual segmentations of WMH were provided for training of automatic methods, originating from three sites (UMC Utrecht, NUH Singapore, and VU Amsterdam) and three vendors (Philips, Siemens, GE). The secret test data consisted of 110 scans, originating from five different scanners (the three from the training; additionally a 3T PET-MR and a 1.5T scanner). All non-WMH pathology was segmented as well and ignored during evaluation.

“20 participating methods for the automatic segmentation of white matter hyperintensities (WMH) of presumed vascular origin”

“Key features of this challenge included the containerized approach where participants had to send in their method for evaluation; the absolute secrecy on the test results until after the session at MICCAI; and the large dataset with high quality images and manual annotations provided.”

Team cian receives the second prize in the WMH Segmentation Challenge for their MD-GRU approach.

Unlike other challenges, participants had to containerize their method with Docker and submit that for an objective evaluation by the organizers. This guaranteed that the test set remained secret and the evaluation results were not shared until the challenge session at MICCAI.

During the well-attended session at MICCAI, each participating team briefly presented their method. After that, the evaluation results were revealed to the participants and the audience. First place went to team sysu_media, second place to cian, third place to nlp_logix. The final ranking was done with relative metrics, highlighting the small performance differences between the top-ranking teams.

The session closed with an interactive poster session. Since the final results were unknown to the participants until the very last moment, the organizers printed and pinned the results on the individual team posters. Results are posted here. The challenge remains open for new and updated submissions.

“1st place went to team sysu_media, 2nd place to cian, 3rd place to nlp_logix”

Team nlp_logix receives the third prize in the WMH Segmentation Challenge. This team had the highest score for two out of the five metrics.

Workshop: SASHIMI

“Most common methods presented are based on GAN - Generative Adversarial Network”

SASHIMI: Simulation and Synthesis in Medical Imaging, organized by Sotirios A Tsaftaris, Ali Gooya, Alejandro F Frangi and Jerry L Prince. Most of the methods presented are based on Generative Adversarial Networks.

The workshop started off with an overview of Adversarial Domain Adaptation given by Hugo Larochelle, a research scientist at Google Brain. One of the interesting works mentioned in his talk was Domain Separation Networks. In this method, they explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. The model is trained to not only perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains.

Two works used Cycle-GAN to map between different modalities: the first titled Adversarial Image Synthesis for Unpaired Multi-Modal Cardiac Data by Agisilaos Chartsias et al. and the second titled Deep MR to CT Synthesis using Unpaired Data by Jelmer M. Wolterink et al.

Another work that used a GAN-related approach was titled Virtual PET Images from CT Data Using Deep Convolutional Networks: Initial Results by Avi Ben Cohen et al.; in this case, a Conditional GAN approach combined with a Fully Convolutional Network was used to achieve the virtual PET images from CT images.

Additional interesting works that were presented at the SASHIMI workshop can be found in the workshop's web page.
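For readers new to the cycle-consistency idea behind these unpaired translation works, here is a rough sketch of my own (not the authors' code): toy invertible linear maps stand in for the MR-to-CT and CT-to-MR generator networks, and the loss penalizes a round trip that fails to reproduce the input.

```python
import numpy as np

# Toy "generators": in Cycle-GAN these are CNNs trained on unpaired data;
# here simple linear maps stand in for the two translation directions.
def g_ab(x):
    """Hypothetical map from domain A (e.g. MR) to domain B (e.g. CT)."""
    return 2.0 * x + 1.0

def g_ba(y):
    """Hypothetical map from domain B back to domain A."""
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x_a, y_b):
    """L1 penalty that A->B->A (and B->A->B) reproduces the input.
    Without paired examples, this is what keeps the mapping anatomically
    faithful instead of producing an arbitrary realistic-looking image."""
    forward = np.mean(np.abs(g_ba(g_ab(x_a)) - x_a))
    backward = np.mean(np.abs(g_ab(g_ba(y_b)) - y_b))
    return forward + backward

x_a = np.random.rand(4, 8, 8)  # toy "MR" batch
y_b = np.random.rand(4, 8, 8)  # toy "CT" batch
print(cycle_consistency_loss(x_a, y_b))  # near zero: the toy maps are exact inverses
```

In the real methods the two generators are trained jointly with adversarial discriminators, and this cycle term is added to the adversarial losses.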

Alejandro Frangi (standing at the right) during Avi Ben Cohen’s presentation about Virtual PET images from CT Data using Deep Convolutional Networks

Challenge: ISLES
by Mauricio Reyes

The ISLES 2017 Challenge: forecasting stroke lesion outcome from multisequence MRI and clinical information

This year the third edition of the Ischemic Stroke Lesion Segmentation (ISLES) challenge gathered much interest from the medical image computing community, with more than 150 pre-conference data access requests and a final set of 16 highly competitive teams participating in the on-site challenge. Originally conceived as a segmentation challenge for the analysis of acute and sub-acute stroke lesions from multi-sequence MRI, the ISLES challenge evolved in the last years to host a much more challenging task: forecasting stroke lesion outcome. This holds much promise for clinicians, as it can leverage and assist the interventionalist in the difficult decision-making process needed to decide whether a mechanical thrombectomy is pertinent for a patient. Participating teams received a combination of multisequence MRI and clinical information taken at the acute stroke state and were asked to predict the stroke lesion outcome at three-month follow-up.

This year’s edition of ISLES featured a richer training and testing dataset than for previous editions, which was curated by an expert radiology team at Inselspital, University Hospital in Bern. The competition this year resulted in a highly competitive setup. All sixteen participating teams presented variations of deep learning networks. In terms of performance, the scores improved from last year, indicating achieved progress, but are still suboptimal for clinical exploitation. Indeed, the ISLES challenge has been mentioned by several competing teams (also participating in other challenges) as one of the toughest challenges they have ever participated in. The complexity of the task relates in turn to the high complexity of stroke recovery and brain blood circulation. In this regard, the discussions that took place during ISLES and the SWITCH workshop were very fruitful, leading the organizing team to pinpoint future technical challenges focusing on leveraging the modeling of collateral flow for a better characterization of stroke lesion recovery.

The ISLES 2017 organizing team. From left to right: Mauricio Reyes, Roland Wiest, Arsany Hakim, Stefan Winzeck

“…one of the toughest challenges…”

Presentation: Benjamin Hou

“Getting the entire picture of the full 3D anatomy goes beyond human abilities”

Benjamin Hou is a PhD Student at Imperial College London.

Trained experts, like medical imaging scientists and radiologists, can mentally estimate the approximate orientation of a randomly oriented 2D slice through the body. But getting the entire picture of the full 3D anatomy goes beyond human abilities.

With the recent advent of Deep Learning, we have developed a fully automatic CNN-based method to learn this expert intuition about slice transformations. Our approach can estimate the full 3D transformation of randomly oriented 2D slices purely from the learned features in the image. We can do this without initialization from, e.g., scanner coordinates. Transformation predictions are generated relative to a canonical atlas coordinate space, which facilitates, for example, direct application of 3D atlas-based segmentation. The nature of GPU-accelerated Deep Learning allows estimates to be made within a few milliseconds per slice.

“make estimates within a few milliseconds per slice”

In practice, there are many problems which can benefit from such initialization-free 2D to 3D registration. Two applications that are featured in this work are motion correction for fetal brain imaging and 2D to 3D registration with projective C-Arm images.
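The quantity such a network predicts is a rigid 3D transform per slice. As an illustration only (my own sketch, not the authors' code, and the Euler-angle parameterization is an assumption), six predicted parameters can be assembled into a 4x4 matrix mapping slice coordinates into atlas space:

```python
import numpy as np

def rigid_transform(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 rigid-body matrix from three Euler angles (radians) and a
    3D translation. A regression network would output these six parameters
    for each 2D slice; applying the matrix places slice pixels in atlas space."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # composed rotation
    T[:3, 3] = [tx, ty, tz]    # translation
    return T

# Map a pixel of a slice (z = 0 in slice coordinates) into atlas coordinates.
M = rigid_transform(0.1, -0.05, 0.2, 4.0, -2.0, 10.0)
atlas_point = M @ np.array([12.0, 7.0, 0.0, 1.0])
```

Once every slice carries such a predicted matrix, the slices can be resampled into a common volume, which is the starting point for the reconstruction steps shown in the figure below.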

[Figure: the fetal brain motion correction pipeline, left to right]
1. Fetal MR brain image (in-plane view). Many overlapping stacks of slices are acquired while the fetus is moving.
2. Fetal MR brain image (out-of-plane view). Heavy motion corruption between individual slices and stack acquisitions.
3. State-of-the-art 3D motion compensation and reconstruction from several overlapping, heavily motion corrupted, stacks of slices very often fails.
4. 3D reconstruction after our CNN-based approach re-aligned each slice individually in canonical atlas space.
5. Further registration refinement using iterative slice-to-volume optimization.

“…estimate the full 3D transformation of randomly oriented 2D slices purely from the learned features in the image”

Spotlight News

Computer Vision News lists some of the great stories that we have just found somewhere else. We share them with you, adding a short comment. Enjoy!

Toronto's early lead in artificial intelligence: UofT experts
Don’t suspect this to be a self-promotional article by the University of Toronto! UofT is really being instrumental in transforming Canada into an early leader of the Artificial Intelligence revolution. Find out who did it and how, from Professor Emeritus Geoffrey Hinton to two dear friends of Computer Vision News: we are not a little proud to see associate professor Raquel Urtasun and assistant professor Sanja Fidler be recognized for their work. Read Now...

This AI program can make 3D face models from a selfie
AI experts from Kingston University and the University of Nottingham have trained a Convolutional Neural Network to convert two-dimensional images of faces into 3D. They fed the CNN tons of data on people’s faces and from there it figured out by itself how to guess what a new face looks like from a previously unseen pic, including parts that it can’t see in the photograph. Read More

Quick and reliable 3D imaging of curvilinear nanostructures
Nano-sized objects can be observed by transmission electron microscopy (TEM), generally limited to 2D images. Researchers from Ecole Polytechnique Federale de Lausanne (EPFL) are now able to overcome the need for hundreds of tilted views and sophisticated image processing to reconstruct their 3D shape. Their electron microscopy method obtains 3D images of complex curvilinear structures without tilting the sample. Read More

A Smart Recycling Bin Could Sort Your Waste for You
Computer vision can remove any confusion when disposing of different types of plastic. The algorithm learns to recognize images to identify the material held in front of its cameras and then tells the consumer exactly where waste should be placed. Read More

Microsoft’s AI chief wants to launch a chatbot in every big country
Remember Harry Shum?
I interviewed him at CVPR. Now that Microsoft’s CEO wants to “democratize” AI, Shum wants to create machines that aren’t just smart but are also able to connect emotionally with us. Read More

More Microsoft news, as they launch new machine learning tools: Azure Machine Learning Experimentation service, Azure Machine Learning Workbench and Azure Machine Learning Model Management service. Read More

Research: Learning from Simulated and Unsupervised Images through Adversarial Training
Review by Assaf Spanier

Every month, Computer Vision News reviews a research paper from our field. This month we have chosen to review Learning from Simulated and Unsupervised Images through Adversarial Training. We are indebted to the authors from Apple (Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, Russ Webb) for allowing us to use their images to illustrate this review. Their work is here.

“Learning to improve the realism of a simulator’s synthetic images using unlabeled real data”

Aim:
The need for large labeled training datasets is constantly growing, as ever-higher capacity deep neural networks continue to appear. Since labeling large datasets is time-consuming, expensive and uncertain, researchers consider using synthetic rather than real images for training, since they come already annotated. However, training a network on synthetic images is problematic: the difference in distributions between real and synthetic images causes the network to learn qualities unique to synthetic images, leading

to poor performance when dealing with real images.

Motivation: SimGAN is a network using Simulated+Unsupervised (S+U) learning, whose goal is to improve the realism of a simulator’s synthetic images using unlabeled real data. Improved realism will make it possible to train better machine learning models on large datasets without new data collection or the need for human annotation.

Novelty: To overcome this distribution gap, the authors propose SimGAN, which learns a model that improves (refines) the realism of a simulator’s output using unlabeled real data, while preserving the annotation relevance of the images. This idea is demonstrated in the image below:

Background: Traditional GAN networks train a Generator and a Discriminator network with competing losses: the Generator’s goal is to map a random vector to a realistic image, while the Discriminator tunes its loss function to distinguish the generated images from the real ones. Since the first GAN framework was introduced by Goodfellow et al. in 2014, many improvements and applications have been presented to the research community: Wang and Gupta use a Style GAN to generate natural indoor scenes; iGAN enables users to interactively produce photo-realistic images; CoGAN uses coupled GANs to learn a joint distribution of images from multiple modalities; and many more. SimGAN is a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. Past research, such as Gaidon et al., shows that pre-training a deep neural network on synthetic data leads to improved performance. SimGAN complements these efforts, as images refined for improved realism will make pre-training on synthetic data that much more effective.
The SimGAN approach is to make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a ‘self-regularization’ term; (ii) a local adversarial loss; and (iii) updating the discriminator using a history of refined images. Method:

The SimGAN has two components: the Refiner and the Discriminator. The Refiner minimizes the combination of an adversarial loss, whose aim is to ‘fool’ the Discriminator network, and a ‘self-regularization’ term. The Discriminator’s goal is to classify an image as real or refined. Next, we will look more closely at each of the two components.

Next, let’s look at SimGAN’s pseudocode, followed by a more detailed description of the steps:

Input:
● x_i ∈ X: set of synthetic images
● y_i ∈ Y: set of real images
● T: maximum number of update steps
● K_g: number of generative (refiner) network update steps
● K_d: number of discriminator network update steps

Output:
● A Refiner ConvNet model, denoted by R_θ

Initialize the Refiner parameters θ and the Discriminator parameters φ.

For t = 1 … T do
  // Refiner
  For k = 1 … K_g do
    1. Sample a mini-batch of synthetic images x_i
    2. Keep φ fixed and update θ by taking an SGD step on the mini-batch loss
       ζ_R(θ) = −Σ_i log(1 − D_φ(R_θ(x_i))) + λ Σ_i ‖ψ(R_θ(x_i)) − ψ(x_i)‖₁
  End
  // Discriminator
  For k = 1 … K_d do
    1. Sample mini-batches of synthetic images x_i and real images y_i
    2. Compute x̃_i = R_θ(x_i) with the current θ
    3. Keep θ fixed and update φ by taking an SGD step on the mini-batch loss
       ζ_D(φ) = −Σ_i log(D_φ(x̃_i)) − Σ_i log(1 − D_φ(y_i))
  End
End
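The two mini-batch losses, ζ_R and ζ_D, can be sketched in a few lines of NumPy. Here ψ is taken to be the identity (the simplest feature-extraction choice), D is assumed to output per-image probabilities of being synthetic, and all function names are ours, not the authors’ code:

```python
import numpy as np

def refiner_loss(d_refined, refined, synthetic, lam=0.1):
    """zeta_R: adversarial term plus self-regularization (psi = identity).

    d_refined: D_phi's probability that each refined image is synthetic.
    refined, synthetic: image batches of identical shape.
    """
    adversarial = -np.sum(np.log(1.0 - d_refined))
    regularization = lam * np.sum(np.abs(refined - synthetic))
    return adversarial + regularization

def discriminator_loss(d_refined, d_real):
    """zeta_D: cross-entropy with label 1 for refined, 0 for real images."""
    return -np.sum(np.log(d_refined)) - np.sum(np.log(1.0 - d_real))
```

A perfect discriminator (d_refined near 1, d_real near 0) drives ζ_D toward 0, while the Refiner pushes d_refined toward 0 to shrink its adversarial term.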

The Refiner (R_θ) is a ConvNet whose aim is to make the synthetic images more realistic by bridging the difference between the distributions of synthetic and real images. Ideally, it should make it impossible to classify a given image as real or refined with high confidence. Let’s dive into the Refiner’s optimization function to see how each of its components is minimized as we near our goal:

ζ_R(θ) = −Σ_i log(1 − D_φ(R_θ(x_i))) + λ Σ_i ‖ψ(R_θ(x_i)) − ψ(x_i)‖₁

The first term is the adversarial loss: D_φ is the probability of the image being synthetic, with 1 meaning the Discriminator is certain the image is synthetic. This term is minimized as D_φ(R_θ(x_i)) approaches 0, that is, as the Discriminator comes to believe the refined images are real. The second term is the self-regularization loss: the method uses ψ, a feature-extraction function, to preserve the relevance of the annotations by minimizing the difference in image features between the refined and the original synthetic image.

The Discriminator (D_φ) is a ConvNet whose goal is to correctly classify an image as real or refined synthetic, overcoming the Refiner’s attempts to fool it: to score every real image with probability 0 and every refined synthetic image with 1. Let’s dive into the Discriminator’s optimization function to see how each of its components is minimized as we near our goal:

ζ_D(φ) = −Σ_i log(D_φ(x̃_i)) − Σ_i log(1 − D_φ(y_i))

Here D_φ(x̃_i) is the probability of the input being a (refined) synthetic image, and 1 − D_φ(y_i) is the probability of the input being a real image.

● D_φ is a ConvNet whose last layer outputs the probability of the sample being a refined image.
● The target labels for the cross-entropy loss layer are 0 for every y_i and 1 for every x̃_i.
● When the network successfully discriminates between the real and the refined synthetic images, both terms of ζ_D(φ) are minimized.
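The third stabilizing modification mentioned earlier, updating the discriminator with a history of refined images, can be sketched as a small replay buffer. The class name, capacity and mixing scheme below are illustrative choices of ours, not the paper’s code:

```python
import random

class RefinedImageHistory:
    """Keep a buffer of past refined images; each discriminator update
    mixes current refined images with images drawn from the buffer,
    so the discriminator does not forget earlier refiner outputs."""

    def __init__(self, capacity=256, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.rng = random.Random(seed)

    def sample_and_store(self, refined_batch):
        half = len(refined_batch) // 2
        # Draw up to `half` images from the history (fewer if buffer is small).
        old = self.rng.sample(self.buffer, min(half, len(self.buffer)))
        # Store the current batch, evicting random old entries when full.
        for img in refined_batch:
            if len(self.buffer) < self.capacity:
                self.buffer.append(img)
            else:
                self.buffer[self.rng.randrange(self.capacity)] = img
        # Return a batch of the original size: fresh images plus history.
        return refined_batch[: len(refined_batch) - len(old)] + old
```

Training the discriminator on such mixed batches keeps older refiner artifacts in play, which is what stabilizes the adversarial game.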

Evaluation and Results:

Let’s look at results on two datasets:

(a) The NYU hand pose dataset: the dataset is composed of 72,757 training frames and 8,251 testing frames captured by 3 cameras (one frontal and 2 side views); each depth frame is labeled with hand pose information. The qualitative results are illustrated in the figure above. Edge depth discontinuity is the main source of noise in real depth images; the refined simulated images show a good facsimile of this discontinuity.

Quantitative results for hand pose estimation: the plot shows cumulative curves as a function of distance from ground-truth keypoint locations. SimGAN outperforms the model trained on real or synthetic images by 8.8%.

Implementation details for the NYU hand pose dataset: the input image size is 224 × 224, the filter size is 7 × 7, and 10 ResNet blocks are used. The discriminative net D_φ is:
(1) Conv7x7, stride=4, feature maps=96
(2) Conv5x5, stride=2, feature maps=64
(3) MaxPool3x3, stride=2
(4) Conv3x3, stride=2, feature maps=32
(5) Conv1x1, stride=1, feature maps=32
(6) Conv1x1, stride=1, feature maps=2
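Assuming ‘same’-style padding of k//2 for the convolutions and the pool (a padding choice of ours; the article does not spell it out), the listed discriminator layers shrink the 224×224 input as follows, ending in a small probability map consistent with the local adversarial loss:

```python
def out_size(n, k, stride, pad):
    """Standard formula for conv/pool output size along one dimension."""
    return (n + 2 * pad - k) // stride + 1

# (kernel, stride, pad) for the six listed discriminator layers
layers = [(7, 4, 3), (5, 2, 2), (3, 2, 1), (3, 2, 1), (1, 1, 0), (1, 1, 0)]

sizes = []
n = 224
for k, s, p in layers:
    n = out_size(n, k, s, p)
    sizes.append(n)

print(sizes)  # → [56, 28, 14, 7, 7, 7]
```

Under these padding assumptions the final layer emits a 7×7 map with 2 feature maps, i.e. a per-patch real/refined probability rather than a single scalar.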

(b) Gaze estimation on the UnityEyes dataset:

Gaze estimation is key to many human-computer interaction applications. The figure above shows examples of real, synthetic and refined images from the eye gaze dataset. SimGAN achieves a significant qualitative improvement of the synthetic images, recreating real skin texture, iris qualities and sensor noise.

Quantitative results for appearance-based gaze estimation on the MPIIGaze dataset with real eye images: the plot shows cumulative curves as a function of degree error. A 22.3% absolute improvement was gained from training with the SimGAN output.

Conclusion: The key idea of SimGAN is to refine synthetic images so that they look like real images while preserving their annotation relevance. SimGAN, an S+U learning method using an adversarial network, demonstrated state-of-the-art results without any labeled real data. In the future, the authors propose to model the noise distribution so as to create more than one refined image from each synthetic image, and to refine videos as well as images.

Source code: All source code of the SimGAN method can be found here. The required packages are:
Python 2.7
TensorFlow 0.12.1
SciPy
pillow
tqdm

After installing and downloading, refining a synthetic image with a pre-trained model is done by typing:
$ python main.py --is_train=False --synthetic_image_dir="./data/gaze/UnityEyes/"

As for training, all images must be located in the samples directory. Then all you need is to type:
$ python main.py
$ tensorboard --logdir=logs --host=0.0.0.0

For training with a different optimizer and regularization term, you can use the following command:
$ python main.py --reg_scale=1.0 --optimizer=sgd

7 Tips for Project Management in Computer Vision

RSIP Vision’s CEO Ron Soferman has launched a series of lectures to provide a robust yet simple overview of how to ensure that computer vision projects respect goals, budget and deadlines. This month we learn 7 Tips for Project Management in Computer Vision.

When we run a project in software, it is always possible to control the development process and make it work in the right direction. Agile and other skills help us keep on track. Today I will give you 7 additional tips, the specific target of which are computer vision projects.

1. Review 100 images: it’s all about the visual results; thus, as a project manager, you should not stop viewing the presentation after a few satisfactory results. Ask the developer for 100 images, so that you can get acquainted with the real problem, the current results and the challenges still to be solved. Diligent review is essential for any computer vision project.

2. Team for new ideas: since computer vision is an eclectic science, borrowing from mathematics, physics, statistics, signal processing, deep learning and more, I recommend that you involve a large team (with different sources of ideas) during the brainstorming and at each crucial step in the algorithmic decisions.

3. Matlab to C++: it is quite typical to start with a Proof of Concept in Matlab. On the other hand, migration to C++ might prove to be a difficult task, because the infrastructure of the functions in the libraries is different, making the fine-tuning of the algorithm problematic. I recommend starting to port some of the functions and some of the infrastructure to the C++ environment as soon as possible.

4. Deep learning in different ways: there are many ways to implement deep learning. When you want to achieve definite progress in your project, you might decide to split the project into a few stages, taking advantage of those stages in which deep learning gives you good results (even if it doesn’t solve the whole problem in one net); and then take advantage of the stable and robust results to step into the next stages of the problem solving.

5. Limit your effort to the dataset size: in many cases, datasets for the first stages of the project are limited, until we get more data. I advise not to spend excessive R&D resources as long as the dataset is small, lest we incur overfitting and waste time solving irrelevant problems.

6. Algorithmic Design Document (ADD): this detailed document, describing all the algorithms, with their assumptions, parameters, results and limitations, is essential to keep the development group focused on the task and to ensure efficient communication. You should keep it concise, relevant and updated! This will enable you to solve the problems and get the right help at any stage of the project.

7. Do not correct algorithms with more and more algorithmic levels: sometimes, especially when you are working with young teams, you need to make sure that each algorithmic addition has been validated and debugged correctly before you proceed to the next stage. This is necessary because young programmers tend to add stages without testing and correcting the work already done. This is very risky, since it might make the algorithmic part excessively complex and devoid of any mathematical background.

On September 11, Ron lectured at the Boston Imaging and Vision (BIV) meetup.

Deep Learning for Medical Image Registration
by RSIP Vision

Every month, Computer Vision News reviews a successful project. Our main purpose is to show how diverse image processing applications can be and how the different techniques contribute to solving technical challenges and physical difficulties. This month we review RSIP Vision’s method in Deep Learning for Medical Image Registration, based on advanced image processing algorithms designed to support surgeons in their task. RSIP Vision’s engineers can assist you in countless application

fields. Contact our consultants now!

The problem of medical image registration is common in many medical applications where a patient moves during a procedure, scans are taken at different times, different modalities should be combined, etc. Specifically, at RSIP Vision we concentrate on the use of deep learning for registration in medical operations and surgeries. Usually, deep learning can be used to directly predict the transformation parameters for registration using regression networks. Alternatively, it can be used to estimate an appropriate similarity measure between two images.

When considering runtime, running a deep network for registration purposes can be time-consuming, and classic approaches might be faster. However, a less complex network can be used to help decide what parts of the image are important. As deep learning has shown great success in detection, segmentation and classification, we make use of deep convolutional networks to localize the regions of interest in an image.

For example, cataract is the leading cause of blindness in the world, and cataract surgery is the most commonly performed operation worldwide. Surgery can improve the eyesight of patients suffering from cataract by removing the “clouding” of the eye lens and inserting an implant instead. In cataract surgery there is a need to follow the eyeball’s movements (which include rotations) as well as the physician’s tools. In this case, there are two objects: 1) the retina, which is used for the registration; and 2) the physician’s tool, which needs to be detected but is not used for registration. Thus, deep convolutional networks are trained to separate the region of interest for registration from other regions, as well as to identify the relevant tool during the surgery. Another similar example is retinal surgery, which also includes the retina as the region of interest and the physician’s tools, as can be seen in the following figure:

“Deep Learning can be used to directly predict the transformation parameters for registration using regression networks”

The network can be quite simple if it is trained to classify patches, and by down-sampling or using high stride values the runtime is improved significantly. This approach helped us improve the registration results in different retina-related projects. It can be used as a pre-processing step for classic registration/homography methods, which may use correlation, mutual information or another similarity measure according to the application. It can also be combined with a deep learning based registration, making it easier for the registration network to concentrate on the regions of interest. By creating an easy-to-use, generic UI we extract annotated images fast and train the relevant convolutional neural networks to get the regions of interest and separate the different objects. Then, for each frame, we find the relevant regions and mask out non-relevant regions for the registration process. You can read on our website about other Deep Learning projects conducted by RSIP Vision in several fields of application.

FirstAid on Google Cloud in 30 minutes!
by Assaf Spanier

We Tried For You once again! This time, we will demonstrate how, in under half an hour, you can set up a virtual machine on Google Cloud that will run and train the latest deep learning models for segmentation and classification, specialized for handling medical images. The guide is divided into two parts: first, we’ll see how to install and configure a VM instance on Google Cloud. Then, we’ll install the FirstAid software package, which will make running and training the latest deep learning methods for medical imaging segmentation and classification nice and easy. Of course, you could install and run FirstAid on your own PC, but unless it has very strong computing capabilities and a GPU, you’re better off setting it up on Google Cloud, which you can easily do, as we’ll explain right away.

A. Setting up a virtual machine on Google Cloud

Google Cloud is a virtual machine service which allows anyone to get a virtual machine for remote execution. When you set up your account, you’re given $300 for initial experimentation. We can use a little of this gift to check out FirstAid’s capabilities.

We will guide you step by step through setting up your own virtual machine (called a VM instance on Google Cloud):
1. Go to cloud.google.com and open an account (or an upgraded account if you want to use a GPU).
2. Once you’ve set up your account, in the menu select Compute Engine and, in the menu that pops up, select VM instances.
3. In the VM instances screen, at the top, select the ‘create instance’ button. In the form that opens:
   a. Give a name to your VM instance.
   b. In the machine type, select 4 vCPUs.

   c. Optionally, for better performance, if you set up an upgraded account: press the ‘customize’ button on the right and, in the menu that opens, under GPUs select 1 GPU.

   d. In the Boot disk, click on ‘change’ and in the menu that opens select Ubuntu 16.04.
   e. Under Firewall, select ‘allow http’ and ‘allow https’.
   f. Finally, press create.
4. Coming back to the VM instances screen, pick the line of your just-created VM instance. Press ‘SSH’ and, in the menu that pops up, select ‘Open in browser window’.
5. In the console, install the following packages to run Python and TensorFlow:
   i. $ sudo apt-get update
   ii. $ sudo apt-get install python-pip
   iii. $ sudo pip install tensorflow
   iv. $ sudo apt-get install ipython
   v. $ sudo pip install --upgrade pip
6. Now we want to set up a VNC server, which will enable us to use a GUI with the VM instance.
   a. First, we’ll install:
      i. sudo apt-get install gnome-core
      ii. sudo apt-get install vnc4server
   b. Now we want to configure the VNC server; running the vncserver command creates the configuration files we want to make changes to.
   c. Run vncserver (please note that you will need to select a password here), and kill it with the command killall Xvnc.

   d. Now configure the script by typing vi .vnc/xstartup, and change the file to the following:

#!/bin/sh
# Uncomment the following two lines for normal desktop:
# unset SESSION_MANAGER
# exec /etc/X11/xinit/xinitrc
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
x-window-manager &
gnome-panel &
gnome-settings-daemon &
metacity &
nautilus &

   e. Now run the VNC server again by typing vncserver.
   f. One last thing before you can use your VM instance: in the Firewall settings you need to open the port for the VNC client to connect to your VM instance:
      i. In the VM instance window, select the machine you created.
      ii. In Network interfaces, click on ‘default’ under Network.
      iii. In the menu that opens, click on Add firewall rule.
      iv. You need to fill out 2 fields: under Source filters ‘IP range’, enter 0.0.0.0/0; under Protocols and Ports, enter tcp:5901.
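Once the tcp:5901 rule is in place, you can check from your local machine that the port is reachable before launching the VNC client. A small Python helper (the IP address in the example is a placeholder for your instance’s external IP):

```python
import socket

def vnc_port_open(host, port=5901, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except OSError:
        return False

# Example: vnc_port_open("203.0.113.7")  # replace with your VM's external IP
```

If this returns False, re-check the firewall rule and make sure vncserver is actually running on the instance.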

7. On your local machine, download and install the VNC client from this URL.
   a. In the window that opens, enter the IP address of your VM instance. A window asking for a password will open; use the password you selected when you first ran the VNC server.

B. Now you’re all set to start with the FirstAid package

The FirstAid package was created for working with deep neural networks, especially tuned for medical imaging. First, we will guide you through the setup process. Then, we’ll go through FirstAid’s demo. For the FirstAid installation, please type the following in the command window:
a. cd ~
b. sudo pip install h5py
c. git clone https://github.com/yidarvin/FirstAid.git
d. sudo pip install matplotlib
e. sudo pip install scipy
f. sudo pip install sklearn
g. sudo apt-get install python-tk

The package comes with the following 6 common deep learning segmentation and classification networks:
Le_Net: 5 layers (4 for segmentation)
Alex_Net: 8 layers (6 for segmentation)
VGG_Net: 11, 13, 16, and 19 layer versions (9, 11, 14, and 17 for segmentation, respectively)
GoogLe_Net: 22 layers (22 for segmentation)
InceptionV3_Net: 48 layers (48 for segmentation)
Res_Net: 152 layers (152 for segmentation)

FirstAid supplies 2 main scripts, the first for training classification networks, the second for segmentation networks. The Python scripts to call are train_CNNclassification.py and train_CNNsegmentation.py, respectively.

Medical images have a special structure, which is the centrality of the patient: various images each supply partial information, which we want to aggregate to arrive at a single diagnosis for the patient. The patient-centric structure has the following format: the main image is stored under the key "data" and the segmentation under "seg"; please note that the input image must be square. FirstAid supplies a script (img3h5.py) which converts images into the patient-centric structure (h5 format).

folder_data_0 (e.g. training folder)
● folder_patient_0
  ○ h5_img_0
    ■ data: (3d.array of type np.float32) 2d image (of any number of channels) of size n × n
    ■ label: (int of type np.int64) target label
    ■ seg: (2d.array of type np.int64) target n-ary image for segmentation
    ■ name: (string) some identifier
    ■ height: (int) original height of the image
    ■ width: (int) original width of the image
    ■ depth: (int) original depth (number of channels) of the image
  ○ h5_img_1
    ■ etc...
  ○ etc...
● folder_patient_1
  ○ h5_img_0
● etc…
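The patient-centric record described above can be sketched in plain Python/NumPy; here a dict stands in for the HDF5 group that img3h5.py would write, and make_record is our illustrative helper, not part of FirstAid:

```python
import numpy as np

def make_record(img, label, seg, name):
    """Build one patient-centric record with the keys FirstAid expects.

    A sketch: in practice img3h5.py writes these keys into an .h5 file;
    here a plain dict stands in for the HDF5 group.
    """
    img = np.asarray(img, dtype=np.float32)
    h, w = img.shape[:2]
    if h != w:                       # FirstAid requires square inputs
        raise ValueError("input image must be square")
    depth = 1 if img.ndim == 2 else img.shape[2]
    return {
        "data":   img,                              # n x n image
        "label":  np.int64(label),                  # target class
        "seg":    np.asarray(seg, dtype=np.int64),  # n-ary segmentation
        "name":   name,                             # identifier
        "height": h, "width": w, "depth": depth,    # original dimensions
    }

rec = make_record(np.zeros((64, 64)), 1, np.zeros((64, 64)), "patient0_img0")
```

Validating the square-input constraint and the dtypes up front saves debugging time once training starts.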

FirstAid’s demo is a toy example that classifies faces into “with glasses” and “no glasses” categories. To download the demo, type the following command in the console:
git clone https://github.com/yidarvin/glasses.git

To run the demo, please update the glasses.sh script according to your local paths, as you can see below:

declare -r path_FirstAid=~/FirstAid/train_CNNclassification.py
declare -r path_train=$PWD/h5_data/training
declare -r path_val=$PWD/h5_data/testing
declare -r name=glasses
declare -r path_model=$PWD/model_state/$name.ckpt
declare -r path_log=$PWD/logs/$name.txt
declare -r path_vis=$PWD/graphs

python $path_FirstAid --pTrain $path_train --net Alex --pVal $path_val --name $name --pModel $path_model --pLog $path_log --pVis $path_vis --nGPU 1 --bs 8 --ep 50 --nClass 2 --lr 0.001 --do 0.5

The last line in the script calls the CNN classification training. The main filepaths of importance are:
● --pTrain: training set, used for training the model
● --pVal: validation set, used to help dictate when to save the model during training
● --pTest: held-out test set with associated ground truth data for grading
● --pInf: inference images without ground truth to use the model on
● --pModel: model savepath for saving and loading
● --pLog: log filepath
● --pVis: figure saving filepath

Network-specific definitions:
● --name: name of experiment (will be used as root name for all the saving); default: 'noname'
● --net: name of network; default: 'GoogLe'; valid: Le, Alex, VGG11, VGG13, VGG16, VGG19, GoogLe, Inception, Res
● --nClass: number of classes to predict; default: 2
● --nGPU: number of GPUs to spread training over (testing will only use 1 GPU); default: 1

Hyperparameters:
● --lr: learning rate; default: 0.001
● --do: keep probability for dropout; default: 0.5
● --l2: L2 regularization; default: 0.0000001
● --l1: L1 regularization; default: 0.0
● --bs: batch size (1/nGPU of this will be sent to each GPU); default: 12
● --ep: maximum number of epochs to run; default: 10
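The flags above can be mirrored in a minimal argparse skeleton, useful for seeing the expected types and defaults at a glance (our sketch; FirstAid’s actual parser may differ):

```python
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="FirstAid-style training flags (sketch)")
    p.add_argument("--name", default="noname", help="experiment name")
    p.add_argument("--net", default="GoogLe",
                   choices=["Le", "Alex", "VGG11", "VGG13", "VGG16",
                            "VGG19", "GoogLe", "Inception", "Res"])
    p.add_argument("--nClass", type=int, default=2)    # classes to predict
    p.add_argument("--nGPU", type=int, default=1)      # GPUs for training
    p.add_argument("--lr", type=float, default=0.001)  # learning rate
    p.add_argument("--do", type=float, default=0.5)    # dropout keep prob
    p.add_argument("--bs", type=int, default=12)       # batch size
    p.add_argument("--ep", type=int, default=10)       # max epochs
    return p

# Parse the same flag values used in glasses.sh
args = build_parser().parse_args(["--net", "Alex", "--bs", "8", "--ep", "50"])
```

Typed flags with explicit defaults make it easy to spot a mistyped hyperparameter before a long training run starts.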

The output of this run should look like the following figure:

This is how, in under half an hour, you can set up a virtual machine on Google Cloud that will run and train the latest deep learning models for segmentation and classification, specialized for handling medical images.

Upcoming Events

SIPAIM - Int. Symp. on Medical Information Processing and Analysis, San Andres, Colombia, Oct 5-7. Website and Registration
RE•WORK Deep Learning Summit, Montreal, Canada, Oct 10-11. Website and Registration
Vipimage - Computational Vision and Medical Image Processing, Porto, Portugal, Oct 18-20. Website and Registration
ICCV 2017, Venezia, Italy, Oct 22-29. Website and Registration
EMMCVPR 2017 - Energy Minimization in Computer Vision, Venezia, Italy, Oct 30-Nov 1. Website and Registration
AAO 2017 - American Academy of Ophthalmology, New Orleans, LA, Nov 11-14. Website and Registration
AI Expo North America - Delivering AI for a Smarter Future, Santa Clara, CA, Nov 29-30. Website and Registration
PReMI 2017 - Pattern Recognition and Machine Intelligence, Kolkata, India, Dec 5-8. Website and Registration
International Congress on OCT Angiography and Advances in OCT, Roma, Italy, Dec 15-16. Website and Registration

Did we miss an event? Tell us: [email protected]

Next month you will read a preview of AI Expo North America in Santa Clara, Nov 29-30.

FREE SUBSCRIPTION
Dear reader,
Do you enjoy reading Computer Vision News? Would you like to receive it for free in your mailbox every month?
Subscription Form (click here, it’s free)
You will fill the Subscription Form in less than 1 minute. Join many other computer vision professionals and receive all issues of Computer Vision News as soon as we publish them. You can also read Computer Vision News on our website, where our archive holds new and old issues as well.

FEEDBACK
Dear reader,
How do you like Computer Vision News? Did you enjoy reading it? Give us feedback here:

Give us feedback, please (click here)

It will take you only 2 minutes to fill, and it will help us give the computer vision community the great magazine it deserves!
We hate SPAM and promise to keep your email address safe, always.
Improve your vision with

The only magazine covering all the fields of the computer vision and image processing industry

Subscribe

(click here, it’s free)

A publication by RSIP Vision