MICAD2021 The 2nd International Conference on

Medical Imaging and Computer-Aided Diagnosis

March 25-26, 2021 | Webinar

Welcome Messages

Dear colleagues,

It is our great pleasure and privilege to welcome you to the virtual edition of MICAD2021, the 2nd International Conference on Medical Imaging and Computer-Aided Diagnosis. The conference will be held from March 25th to 26th, 2021 and is now accessible to registered participants worldwide.

The annual MICAD conference attracts world-leading biomedical scientists, engineers, and clinicians from a wide range of disciplines associated with medical imaging and computer-aided diagnosis.

Submitted papers were peer-reviewed by the conference committees; accepted papers presented at the conference will be included in the MICAD2021 conference proceedings and published by Springer in the Lecture Notes in Electrical Engineering (LNEE) series. The program features four focused oral sessions, with speakers providing perspectives on related fields, both academic and commercial.

We would like to thank and welcome everyone, and hope you will enjoy MICAD2021.

Supporting Academic Organizations
Media Partners

Content

Committee
Time Schedule (London Time, GMT+0)
Keynote Speakers (Ordered by Last Name)
Abstracts (in chronological order)
    Keynote Session 1
    Keynote Session 2
    Oral Session 1: Computer-Aided Detection/Diagnosis
    Oral Session 2: Automated Medical Image Analysis
    Oral Session 3: Medical Image Segmentation, Registration and Reconstruction
    Oral Session 4: Machine Learning and Deep Learning
    Keynote Session 3
    Keynote Session 4

Committee

Conference Chair

- Dr. Ruidan Su, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China

General Co-chair

- Prof. Yu-dong Zhang, University of Leicester, UK

Program Chairs

- Prof. Alejandro F Frangi, University of Leeds, UK
- Prof. Joseph M. Reinhardt, The University of Iowa, IA, USA
- Dr. Han Liu, University, China
- Prof. Ryuji Hamamoto, Representative Director, Japanese Association for Medical Artificial Intelligence (JMAI), Japan

Technical Program Committee

- Prof. Zakaria Belhachmi, Université Haute-Alsace, France
- Prof. Qiang (Shawn) Cheng, University of Kentucky, USA
- Prof. Sourav Dhar, Sikkim Manipal University, India
- Dr. Jan Ehrhardt, Institute for Medical Informatics, University of Lübeck, Germany
- Prof. Smain Femmam, IEEE Senior Member, University of Haute-Alsace, France
- Dr. Linlin Gao, Ningbo University, China
- Dr. Maroun Geryes, Lebanese University, Lebanon
- Prof. Yuzhu Guo, Beihang University, China
- Prof. Zhiwei Huang, National University of Singapore, Singapore
- Dr. Yuankai Huo, Vanderbilt University, USA
- Dr. Sujatha Krishnamoorthy, Wenzhou Kean University, China
- Dr. Yuan Liang, University of California, Los Angeles, USA
- Dr. Cheng Lu, Case Western Reserve University, USA
- Dr. Na Ma, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China
- Dr. Mahsa Mohaghegh, Auckland University of Technology, New Zealand
- Prof. Xiang Pan, Jiangnan University, China
- Dr. Luca Parisi, Coventry University, UK
- Dr. Sivarama Krishnan Rajaraman, Lister Hill National Center for Biomedical Communications (LHNCBC), National Library of Medicine (NLM), National Institutes of Health (NIH), USA
- Prof. Su Ruan, LITIS laboratory, University of Rouen, France
- Dr. Francesco Rundo, STMicroelectronics s.r.l., Catania, Italy
- Dr. Rachel Sparks, King’s College London, UK
- Dr. Vinesh Sukumar, University of Idaho, USA
- Dr. Gunasekar Thangarasu, Linton University College, Malaysia
- Dr. Gennaro Vessio, University of Bari, Italy
- Dr. Jichuan Xiong, Nanjing University of Science and Technology, China
- Dr. Lequan Yu, Stanford University, USA
- Dr. Yitian Zhao, iMED China group, Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, China
- Dr. Yuyao Zhang, ShanghaiTech University, China
- Dr. Jun Zhuang, Indiana University-Purdue University Indianapolis (IUPUI), USA

Time Schedule (London Time, GMT+0)

March 25th

07:55-08:00 Opening Speech

08:00-11:30 Keynote Session

Chair: Prof. Alejandro F Frangi
08:00-08:30 From medical image computing to In silico trials of medical devices
    Prof. Alejandro F Frangi | University of Leeds, UK
08:30-09:00 Accelerating Deep Learning Medical Image Analysis in Radiology
    Prof. Leo Joskowicz | The Hebrew University of Jerusalem, Israel
09:00-09:30 Artificial Intelligence in Bioimage Analysis
    Prof. Erik Meijering | University of New South Wales, Australia

Chair: Prof. Yudong Zhang
09:30-10:00 Large cohort analysis in medical image processing
    Prof. Robin Strand | Uppsala University, Sweden
10:00-10:30 Safe instrument detection during surgery
    Prof. Raphael Sznitman | University of Bern, Switzerland
10:30-11:00 Deep Learning Solutions for Real World Healthcare Applications
    Dr. Ayelet Akselrod-Ballin | Zebra Medical Vision Ltd, Israel

11:00-12:00 Oral Session 1: Computer-Aided Detection/Diagnosis
Chair:
- Paper ID 7: Promoting Cardiovascular Health Using a Recommendation System
    Orlando Belo | University of Minho, Portugal
- Paper ID 11: Information Technologies in Complex Reconstructive Maxillofacial Surgery
    Mikhail Mikhailovich Novikov | Institute on Laser and Information Technologies of RAS, Russia
- Paper ID 16: Machine Learning-based Imaging in Connected Vehicles Environment
    Sayon Karmakar | University of Arkansas at Little Rock, USA
- Paper ID 43: Predicting Neurostimulation Responsiveness with Dynamic Brain Network Measures
    Jinwei Lang | Hefei Institutes of Physical Science, Chinese Academy of Sciences, China
- Paper ID 46: Data augmentation for breast cancer mass segmentation
    Clément Jailin | GE Healthcare, France

12:00-12:30 Break

12:30-13:30 Oral Session 2: Automated Medical Image Analysis
Chair: Asmaa Haja
- Paper ID 14: Unsharp Masking with Local Adaptive Contrast Enhancement of Medical Images
    Ivo Draganov | Technical University of Sofia, Bulgaria
- Paper ID 27: A fully automated end-to-end process for fluorescence microscopy images of yeast cells: From segmentation to detection and classification
    Asmaa Haja | University of Groningen, Netherlands
- Paper ID 51: Quantification of Epicardial Adipose Tissue in Low-Dose Computed Tomography Images
    Mikhail Goncharov | Skolkovo Institute of Science and Technology, Russia
- Paper ID 33: A new content based image retrieval system for SARS-CoV-2 computer-aided diagnosis
    Marcelo Mendoza | Universidad Técnica Federico Santa María, Chile
- Paper ID 40: Geometrically Matched Multi-source Microscopic Image Synthesis Using Bidirectional Adversarial Networks
    Dali Wang | University of Tennessee, USA
- Paper ID 20: Covid-19 Chest CT Scan Image Classification Using LCKSVD and Frozen Sparse Coding
    Kaveen Liyanage | Montana State University, USA

March 26th

07:00-08:00 Oral Session 3: Medical Image Segmentation, Registration and Reconstruction
Chair: Nachwa Aboubakr
- Paper ID 36: The Art-of-Hyper-Parameter Optimization with Desirable Feature Selection
    Priynka Sharma | University of the South Pacific, Fiji
- Paper ID 4: A Dual supervision guided attentional network for multimodal MR brain tumor segmentation
    Tongxue Zhou | Université de Rouen Normastic, France
- Paper ID 10: Three-dimensional image reconstruction of murine heart using image processing
    Haowei Zhong | South China Agricultural University, China
- Paper ID 34: Glioblastoma Multiforme Patient Survival Prediction
    Snehal Rajput, RamAchal Singh | Pandit Deendayal Petroleum University, India
- Paper ID 50: Color-based Fusion of MRI Modalities for Brain Tumor Segmentation
    Nachwa Aboubakr | University Grenoble Alpes, France

08:00-10:00 Oral Session 4: Machine Learning and Deep Learning
Chair: Dimitris Glotsos
- Paper ID 12: 2Be3-Net: Combining 2D and 3D convolutional neural networks for 3D PET scans predictions
    Ronan Thomas | EURA NOVA FRANCE, France
- Paper ID 21: A Hybrid Deep Model for Brain Tumor Classification
    Hamail Ayaz | Institute of Technology Sligo, Ireland
- Paper ID 28: A Systematic Literature Review of Machine Learning Applications for Community-Acquired Pneumonia
    Daniel Lozano-Rojas | University of Leicester, UK
- Paper ID 31: Photograph to X-ray image translation for anatomical mouse mapping in preclinical nuclear molecular imaging
    Dimitris Glotsos | University of West Attica, Greece
- Paper ID 32: Active strain-statistical models for reconstructing multidimensional images of lung tissue lesions
    Ekaterina Guryanova | Russian Technological University (MIREA), Russia
- Paper ID 44: Dysplasia grading of colorectal polyps through convolutional neural network analysis of whole slide images
    Daniele Perlo | University of Torino, Italy
- Paper ID MD201: Evaluating mobile tele-radiology performance for the task of analyzing lung lesions on CT images
    Omer Kaya | Cukurova University, Turkey
- Paper ID MD202: Learning Transferable Features for Diagnosis of Breast Cancer from Histopathological Images
    Maisun Mohamed Al Zorgani | University of Bradford, UK
- Paper ID MD205: Deep YOLO-Based Detection of Breast Cancer Mitotic-Cells in Histopathological Images
    Maisun Mohamed Al Zorgani | University of Bradford, UK

10:00-11:00 Break

11:00-14:00 Keynote Session

Chair:
11:00-11:30 Future of AI-assisted endoscopic procedure
    Prof. Kensaku Mori | Nagoya University, Japan
11:30-12:00 Saving lives at Viz.ai
    Dr. David Golan | Viz.ai, Israel
12:00-12:30 Neuroimage Analysis in Autism: from Model-Based Estimation to Data-driven Learning
    Prof. James Duncan | Yale University, USA

Chair: Prof. Joseph M. Reinhardt
12:30-13:00 Multi-phase CT Imaging+AI Enabled Deep Precision Medicine Solutions for Pancreatic Cancer: Multi-institutional Screening, Precision Diagnosis and Prognosis
    Dr. Le Lu | PAII Inc., Bethesda Research Lab, USA
13:00-13:30 Lung Imaging and Machine Learning for Chronic Obstructive Pulmonary Disease
    Prof. Joseph M. Reinhardt | The University of Iowa, USA
13:30-14:00 AI for Chest Radiographs: Passing the Turing Test
    Dr. Tanveer Syeda-Mahmood | IBM, USA

Keynote Speakers (Ordered by Last Name)

Dr. Ayelet Akselrod-Ballin

VP Research, Zebra Medical Vision Ltd

Ayelet Akselrod-Ballin is the Vice President of Research at Zebra Medical Vision, where she leads a group of AI researchers. She has over 20 years of experience in both academia and industry, focusing on novel technologies for computer vision, machine learning, deep learning, and natural language processing applied to healthcare. Ayelet did her postdoctoral research as a fellow in the Computational Radiology Laboratory at Harvard Medical School, Children’s Hospital, and she holds a Ph.D. in Applied Mathematics from the Weizmann Institute of Science. Prior to joining Zebra Medical Vision, Ayelet led medical imaging research technology at IBM Research and led the Computer Vision & Algorithms team at the MOD.

Prof. James Duncan

Yale University, USA
IEEE Fellow; AIMBE Fellow; MICCAI Fellow

James S. Duncan is the Ebenezer K. Hunt Professor of Biomedical Engineering and a Professor of Radiology & Biomedical Engineering, Electrical Engineering, and Statistics & Data Science at Yale University. Dr. Duncan received his B.S.E.E. with honors from Lafayette College (1973), and his M.S. (1975) and Ph.D. (1982), both in Electrical Engineering, from the University of California, Los Angeles. Dr. Duncan has been a Professor of Diagnostic Radiology and Electrical Engineering at Yale University since 1983. He has been a Professor of Biomedical Engineering at Yale University since 2003, and the Ebenezer K. Hunt Professor of Biomedical Engineering since 2007. He has served as the Acting Chair and is currently Director of Undergraduate Studies for Biomedical Engineering. Dr. Duncan’s research efforts have been in the areas of computer vision, image processing, and medical imaging, with an emphasis on biomedical image analysis and image-based machine learning. He has published over 280 peer-reviewed articles in these areas and has been the principal investigator on a number of peer-reviewed grants from both the National Institutes of Health and the National Science Foundation over the past 30 years. He is a Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE), and a Fellow of the American Institute for Medical and Biological Engineering (AIMBE) and of the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society. In 2014 he was elected to the Connecticut Academy of Science & Engineering. He has served as co-Editor-in-Chief of Medical Image Analysis, as an Associate Editor of IEEE Transactions on Medical Imaging, and on the editorial boards of Pattern Analysis and Applications, the Journal of Mathematical Imaging and Vision, “Modeling in Physiology” of The American Physiological Society, and the Proceedings of the IEEE. He is a past President of the MICCAI Society.
In 2012, he was elected to the Council of Distinguished Investigators, Academy of Radiology Research, and in 2017 he received the “Enduring Impact Award” from the MICCAI Society.

Prof. Dr. Alejandro F Frangi

University of Leeds, UK
IEEE Fellow, SPIE Fellow
Diamond Jubilee Chair in Computational Medicine
Royal Academy of Engineering Chair in Emerging Technologies

Professor Frangi is Diamond Jubilee Chair in Computational Medicine at the University of Leeds, Leeds, UK, with joint appointments at the School of Computing and the School of Medicine. He leads the CISTIB Center for Computational Imaging and Simulation Technologies in Biomedicine. He has been awarded a Royal Academy of Engineering Chair in Emerging Technologies (2019-2029). Professor Frangi has edited several books, published 7 editorial articles and over 215 journal papers in key international journals of his research field, and more than 200 book chapters and international conference papers, with an h-index of 55 and over 20,700 citations according to Google Scholar. He has been three times Guest Editor of special issues of IEEE Transactions on Medical Imaging, once of IEEE Transactions on Biomedical Engineering, and once of the Medical Image Analysis journal. He was chair of the 3rd International Conference on Functional Imaging and Modelling of the Heart (FIMH05) held in Barcelona in June 2005, Publications Chair of the IEEE International Symposium on Biomedical Imaging (ISBI 2006), Programme Committee member of various editions of the International Conference on Medical Image Computing and Computer Assisted Interventions (MICCAI) (Brisbane, AU, 2007; CN, 2010; CA, 2011; Nice, FR, 2012; Nagoya, JP, 2013), International Liaison of ISBI 2009, Tutorials Co-Chair of MICCAI 2010, and Program Co-chair of MICCAI 2015. He was also General Chair of ISBI 2012, held in Barcelona, and General Chair of MICCAI 2018, held in Granada, Spain.

Dr. David Golan

Co-founder and CTO of Viz.ai

David Golan is a co-founder and the CTO of Viz.ai, a digital healthcare company harnessing deep learning to analyze medical data and improve clinical workflow. Viz.ai developed the first-ever FDA-approved AI-powered triage system for stroke. Prior to founding Viz.ai, David was a Fulbright postdoctoral scholar at Stanford University, working on leveraging deep learning for the analysis of medical imaging and genetic data. David holds a PhD in Statistics and Machine Learning from Tel-Aviv University and has coauthored more than 20 scientific papers, including three publications in the journal Science. Prior to his academic career, David founded the ML team of b-hive Networks, an Israeli startup that was acquired by VMware in 2008.

Prof. Leo Joskowicz

The Hebrew University of Jerusalem, Israel
IEEE Fellow, ASME Fellow, MICCAI Fellow
President of the MICCAI Society (Medical Image Computing and Computer Assisted Intervention)

Leo Joskowicz has been a Professor at the School of Computer Science and Engineering at the Hebrew University of Jerusalem, Israel, since 1995. He is the founder and director of the Computer-Aided Surgery and Medical Image Processing Laboratory (CASMIP Lab). Prof. Joskowicz is a Fellow of the IEEE, ASME, and MICCAI societies. He is the President of the MICCAI Society and was the Secretary General of the International Society of Computer Aided Orthopaedic Surgery (CAOS) and of the International Society for Computer Assisted Surgery (ISCAS). He is the recipient of the 2010 Maurice E. Muller Award for Excellence in Computer Assisted Surgery from the International Society of Computer Aided Orthopaedic Surgery and the 2007 Kaye Innovation Award. He has published over 250 technical works, including conference and journal papers, book chapters, and editorials, and has 12 issued patents. He is on the editorial boards of six journals, including Medical Image Analysis, Int. J. of Computer Aided Surgery, Computer Aided Surgery, and Nature Scientific Reports, and has served on numerous related program committees.

Dr. Le Lu (吕乐)

Executive Director at PAII Inc., Bethesda Research Lab, Maryland, USA
IEEE Fellow

Le Lu received an MSE in 2004 and a PhD in 2007 in Computer Science from Johns Hopkins University. Prior to and during his PhD, he completed two year-long internships at Microsoft Research. In 2006, he joined Siemens Corporate Research in Princeton, New Jersey, as a research scientist. He eventually served both as a member of the Medical Solutions computer-aided diagnosis & therapy group and as a senior staff scientist, until 2013. During his seven years at Siemens, he made significant contributions to the company’s CT colonography and lung CAD product lines. From 2013 to 2017, Dr. Lu served as a staff scientist in the Radiology and Imaging Sciences department of the National Institutes of Health Clinical Center. He then went on to found Nvidia’s medical image analysis group, in which he held the position of senior research manager until June 2018. Since then, he has been the Executive Director at PAII Inc., Bethesda Research Lab, Maryland, USA. Dr. Lu's research interests lie in medical image computing/analysis, statistical/deep learning, clinical informatics, and novel imaging biomarkers in the areas of oncology, radiology, and the discovery of cancer treatment solutions. He has published over 176 peer-reviewed journal/conference articles, 35 peer-reviewed clinical abstracts, and 54 US/international patents, including 32 MICCAI papers, two of which received the MICCAI Society Young Scientist Award runner-up (Harrison 2017) and the Publication Impact Award (Roth 2018). Additionally, he was the main technical leader for two of the most impactful public radiology image dataset releases (NIH ChestXray14, NIH Clinical Center Director’s Award; NIH DeepLesion 2018). He coauthored the highest-cited IEEE Transactions on Medical Imaging article and the highest-cited medical imaging CVPR paper of the last five years, with his collaborating postdoc fellows (Hoo-Chang Shin 2016, Xiaosong Wang 2017).
Two of his publications received the RSNA Informatics Research Trainee Awards (Xiaosong Wang 2016, Ke Yan 2018). In addition to his extensive research and publication activities, Dr. Lu plays an active role in the leading societies of the computer vision and medical imaging fields. He is a long-standing member of the MICCAI Society, was elevated to IEEE Fellow in the class of 2021 for his contributions to machine learning for cancer detection and diagnosis, and is a member of the IEEE Signal Processing Society and the IEEE Computer Society. He serves as an Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence and of the IEEE Signal Processing Journal. In 2017 and 2019, he co-edited two books on deep learning and convolutional neural networks for medical image computing, published by Springer Nature. Dr. Lu was Area Chair for MICCAI in 2015, 2016, and 2018 (having participated in most MICCAI conferences since 2011); IEEE CVPR in 2017, 2019, 2020, and 2021; and AAAI in 2019 and 2020, and won best reviewer awards at CVPR 2018, BMVC 2018, and NeurIPS 2020.

Dr. Tanveer Syeda-Mahmood

IBM Fellow, Chief Scientist, Medical Sieve Radiology Grand Challenge
Almaden Research Center, IBM
IEEE Fellow, AIMBE Fellow

Dr. Tanveer Syeda-Mahmood is an IBM Fellow and the Chief Scientist/overall lead for the Medical Sieve Radiology Grand Challenge project at IBM Research, Almaden. Medical Sieve is an exploratory research project with global participation from many IBM Research labs around the world, including the Almaden Labs in San Jose, CA, the Haifa Research Labs in Israel, and the Melbourne Research Lab in Australia. The goal of this project is to develop the automated radiology and cardiology assistants of the future that help clinicians in their decision making. Currently, she is working on applications of content-based retrieval in healthcare and medical imaging. Over the past 30 years, her research interests have spanned a variety of areas relating to artificial intelligence, including computer vision, image and video databases, medical image analysis, bioinformatics, signal processing, document analysis, and distributed computing frameworks. She has over 200 refereed publications and over 80 patents filed. Dr. Syeda-Mahmood will be the General Chair of MICCAI 2023, the premier conference in medical imaging. She was the General Chair of the First IEEE International Conference on Healthcare Informatics, Imaging, and Systems Biology, San Jose, CA, 2011. She was also the program co-chair of CVPR 2008. Dr. Syeda-Mahmood is a Fellow of the IEEE and the first IBMer to become an AIMBE Fellow. She is also a member of the IBM Academy of Technology. Dr. Syeda-Mahmood was declared Master Inventor in 2011 and in 2019. She is the recipient of key awards including the IBM Corporate Award 2015, the Best of IBM Award in 2015 and 2016, and several outstanding innovation awards.

Prof. Dr. Erik Meijering

University of New South Wales, Australia
IEEE Fellow

Erik Meijering is a Professor of Biomedical Image Computing at the University of New South Wales (UNSW) in Sydney, Australia. His research interests are in computer vision and artificial intelligence for quantitative biomedical image analysis, on which he has published more than 100 papers. He received his PhD degree in Medical Image Analysis from Utrecht University in 2000 and his MSc degree in Electrical Engineering from Delft University of Technology in 1996, both in the Netherlands. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and serves on the IEEE SPS Technical Committee on Bio Imaging and Signal Processing (BISP), the IEEE EMBS Technical Committee on Biomedical Imaging and Image Processing (BIIP), and the cross-Society IEEE Life Sciences Technical Community (LSTC). Over the years he has served as an Associate Editor for the IEEE Transactions on Medical Imaging (since 2004), the International Journal on Biomedical Imaging (2006-2009), and the IEEE Transactions on Image Processing (2008-2011), has co-edited various journal special issues, and has co-organized conferences in the field, notably the IEEE International Symposium on Biomedical Imaging (ISBI) and the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). He has also served on a great variety of other international conference, advisory, and review boards.

Prof. Kensaku Mori

Nagoya University, Japan
MICCAI Fellow

Kensaku Mori received the B.Eng. degree in electronics engineering, and the M.Eng. and Ph.D. degrees in information engineering from Nagoya University, Nagoya, Japan, in 1992, 1994, and 1996, respectively. He is a Professor with the Graduate School of Informatics, Nagoya University, the Director of the Information Technology Center of Nagoya University, and an MICCAI Fellow. Dr. Mori is currently involved in many international conference organizations, including SPIE Medical Imaging, CARS, IPCAI, and MICCAI, as a General Chair or program committee member. He is a Member of IEEE, SPIE, ISCAS, IEICE, JSCAS, JSMBE, and JAMIT. He has received many awards, including the Young Scientist Award from the Minister of Education, Culture, Sports, Science and Technology, and the RSNA Magna Cum Laude. (Based on a document published on 9 July 2018.)

Prof. Dr. Joseph M. Reinhardt

The University of Iowa, IA, USA
IEEE Fellow, AIMBE Fellow
Image Analysis Group Leader, Iowa Institute for Biomedical Imaging
Co-Founder, VIDA Diagnostics, Inc., Iowa City, IA

Joseph M. Reinhardt is the Roy J. Carver Chair of Biomedical Engineering at the University of Iowa. He received the BS degree from Carnegie Mellon University, the MS degree from Northeastern University, and the PhD degree from Penn State University, all in Electrical Engineering. Dr. Reinhardt worked for several years in industry as a radar systems engineer. He is currently Professor and Department Executive Officer (chair) of the Roy J. Carver Department of Biomedical Engineering. Dr. Reinhardt teaches courses in the areas of computer programming, biomedical instrumentation, and medical imaging. Dr. Reinhardt is a fellow of the Institute of Electrical and Electronics Engineers (IEEE) and a fellow of the American Institute for Medical and Biological Engineering (AIMBE). His research interests are in the area of medical image processing, with a special emphasis on pulmonary imaging. Dr. Reinhardt has received research support from the National Institutes of Health, the National Science Foundation, the Roy J. Carver Charitable Trust, and the Whitaker Foundation. Dr. Reinhardt, together with colleagues from The University of Iowa, founded VIDA Diagnostics, an Iowa-based medical imaging software company that focuses on computer-aided diagnosis and image-guided interventions for lung disease.

Prof. Dr. Robin Strand

Head of Division, Division of Visual Information and Interaction
Uppsala University, Sweden

Dr. Strand is Professor of Computerized Image Analysis and head of the Division of Visual Information and Interaction, Department of Information Technology, Uppsala University. He obtained his PhD and Master's degrees from Uppsala University. His research interests are in image analysis, medical image processing, and digital geometry.

Prof. Dr. Raphael Sznitman

Director ARTORG and Group Head
University of Bern, ARTORG Center for Biomedical Engineering Research, Switzerland

Raphael Sznitman received his B.Sc. in cognitive science from the University of British Columbia (Canada) in 2007. He then studied computer science at Johns Hopkins University (USA), where he received his M.Sc. and PhD. From 2011 to 2014, he was a postdoctoral fellow in the Computer Vision Laboratory at the École Polytechnique Fédérale de Lausanne (Switzerland). Now an Assistant Professor at the ARTORG Center for Biomedical Engineering Research of the University of Bern (Switzerland), his research interests lie in the fields of computer vision and machine learning with applications to biomedical imaging, surgery, and histology.

Abstracts (in chronological order)

Keynote Session 1

Title: From medical image computing to In silico trials of medical devices
Keynote speaker: Alejandro F Frangi | University of Leeds, UK
Abstract: The traditional medical product development life-cycle begins with pre-clinical development. In laboratories, bench/in-vitro experiments establish plausibility for treatment efficacy. Then in-vivo animal models with different species guide medical device efficacy/safety for humans. With success in both in-vitro and in-vivo studies, a scientist can propose clinical trials to test whether the product can be made available for humans. Clinical trials often involve testing across many people, which is costly, lengthy, and sometimes implausible (e.g. paediatric patients, rare diseases, small ethnic groups). When medical devices fail at later stages, financial losses can be catastrophic (high-risk pre-market approval (PMA) device costs average £74m, of which £54m is spent in FDA-linked regulatory stages over an average of 4.5 years). Many reports have pointed to this broken/slow innovation system and its impact on societal costs and suboptimal healthcare, but radical changes to this innovation process have yet to be developed. This talk introduces how computational imaging and computational modelling can deliver a paradigm shift in medical device innovation, where quantitative sciences are exploited to carefully engineer device designs, explicitly optimise the clinical outcome, and thoroughly test side-effects before devices are marketed. In-silico clinical trials are essentially computer-based medical device trials performed on populations of virtual patients. They use computer models/simulations to conceive, develop and assess devices with the intended clinical outcome explicitly optimised from the outset (a priori) instead of tested on humans (a posteriori). This includes testing for potential risks to patients (side effects) by exhaustively exploring, in silico, medical device failure modes and operational uncertainties before live clinical trials. We will explore this topic, give examples, and signpost areas of further research where the medical image computing community can make a considerable contribution in combination with other convergent technologies.

Title: Accelerating Deep Learning Medical Image Analysis in Radiology
Keynote speaker: Leo Joskowicz | The Hebrew University of Jerusalem, Israel
Abstract: Radiology, one of the cornerstones of modern healthcare, is undergoing rapid and profound changes due to the ever-increasing number of imaging examinations, the shortage of certified radiologists, the dynamics of healthcare economics, and the technological developments of artificial-intelligence-based image processing. This constellation has created unique opportunities for Computational Radiology, whose goal is to automatically extract meaningful radiomics features from medical images in support of clinical decision making. State-of-the-art methods for feature extraction are based on deep learning classification algorithms that are starting to reach near-human performance. However, developing deep learning methods requires large manually annotated datasets, which are seldom available and are expensive and time-consuming to create. This talk will present an overview of our new methods for the fast development of deep-learning-based image processing solutions in Radiology with very few annotated datasets. The key idea is to bootstrap the creation of expert-validated annotations with new techniques for annotation uncertainty estimation and for learning how experts correct annotations generated by deep learning networks initially trained with very few annotated datasets. Our methods aim to optimize radiologist time, reduce the annotated dataset size, and increase the accuracy and robustness of deep neural network results. We expect that our methods will significantly lower the entry cost, shorten the time, and reduce the effort currently required to develop and deploy deep-learning-based solutions for radiology.

Title: Artificial Intelligence in Bioimage Analysis
Keynote speaker: Erik Meijering | University of New South Wales, Australia
Abstract: To enable personalized medical healthcare, it is of key importance to understand the cellular and molecular mechanisms of life in health and disease. Advanced biomedical imaging technologies are having an enormous impact on research in this area, as they allow visualizing the structure and function of whole organisms, organs, tissues, cells, and even single molecules with very high sensitivity and specificity. They also facilitate the discovery of new biomarkers for early diagnosis and preclinical validation of novel treatments in tissue or animal models as a first step towards clinical implementation. However, biomedical imaging devices typically generate vast amounts of multiparametric spatiotemporal imaging data, containing much more relevant and subtle information than can be processed by humans, even if they are experts. Hence there is a growing need for computational methods to analyze these data automatically, not only to cope with the sheer volume of biomedical image data sets, but also to reach a higher level of accuracy, objectivity, and reproducibility. To this end, we develop advanced computer vision methods for a wide range of problems, including restoration, enhancement, super-resolution, and registration of images, as well as detection, segmentation, quantification, classification, and tracking of objects in these images. To cope with the complexity of these problems, we rely increasingly on machine learning approaches, in particular deep learning using artificial neural networks. In addition to developing new methods, we are strong proponents of evaluating and benchmarking methods thoroughly and making them publicly available in the form of user-friendly software tools. This talk will highlight methods we have been developing specifically for cell and particle tracking and motion analysis.

Keynote Session 2

Title: Large cohort analysis in medical image processing
Keynote speaker: Robin Strand | Uppsala University, Sweden
Abstract: The massive amount of medical image data being made available in both research and clinical work today is often too big to be parsed by human experts. Computer-aided tools have great potential for detecting patterns in medical image data and for finding relationships between image data and other medical data. Computer-assisted methods often perform better than human experts, resulting in improved disease understanding. This talk will focus on two specific methods for large-scale medical image data analysis developed and used in our group: (i) Imiomics, which enables statistical analyses of relations between whole-body image data in large cohorts and other non-imaging data at an unprecedented level of detail/spatial resolution, and (ii) aggregated saliency analysis, which describes which image regions on average had the highest impact on network predictions in regression analysis in large cohorts.

Title: Safe instrument detection during surgery
Keynote speaker: Raphael Sznitman | University of Bern, Switzerland
Abstract: Surgical scenes provide a highly challenging context for computer vision. While constrained in space, surgery is a highly dynamic environment with complex geometry, textureless surfaces, intermittent smoke and blood, as well as extreme changes in focus and blur. In this context, automatic detection of surgical instruments plays an important role in the rollout of surgical robotic systems, but also in large-scale analytics and educational platforms for surgery. Yet, detecting surgical instruments reliably remains an overwhelming challenge. In this talk, we will discuss recent works from our group in this domain. In the first, we introduce a deep learning approach that probabilistically models the scene of instruments to not only classify the instruments present, but also estimate their positions. We then show how instance detection can be achieved by leveraging segmentation and clustering. Last, we discuss the need for safeguard methods that prevent deep learning models deployed in the field from evaluating images they ought not to. To combat this problem, we discuss a recent out-of-distribution method we have designed, which projects images from an unlabeled training set into a low-dimensional space by optimising a network to maximise the likelihood of the training data. We show that our approach is highly effective at finding images that should not be evaluated by a subsequently trained method.

Oral Session 1: Computer-Aided Detection/Diagnosis

Paper ID: 7
Title: Promoting Cardiovascular Health Using a Recommendation System
Presenter: Orlando Belo | University of Minho, Portugal
Abstract: Lifestyle habits have a direct influence on people's health. Regular physical activity, combined with good nutrition, helps to prevent the early onset of diseases such as cardiovascular disease. In fact, a significant number of patients diagnosed with cardiovascular disease are associated with a poor diet and a sedentary routine. However, in today's busy life, it is not always easy to find the motivation to adopt healthy lifestyle habits. Therefore, a system for recommending healthy meals and workouts can provide the necessary incentive for a healthier life. In this paper, we describe the implementation of a recommendation system following a case-based reasoning approach supported by specific relational databases and ontologies in the field of nutrition and physical activity. The system creates a plan of daily recommendations adapted to the preferences and restrictions of its users, and evaluates the outcome of the recommendations using indexes that quantify cardiovascular health. The success of the recommendations thus depends on a positive evolution of the index after the end of the proposed plan. This system presents a new perspective, using case-based reasoning and ontologies to propose a diet and exercise plan and evaluating the success of the recommendations objectively through cardiac indexes.

Paper ID: 11
Title: Information Technologies In Complex Reconstructive Maxillofacial Surgery
Presenter: Mikhail Mikhailovich Novikov | Institute on Laser and Information Technologies of RAS, Russia
Abstract: Data presented by the Ministry of Public Health of Russia over the past 10 years show that the incidence of malignant tumours in the population has been increasing by 1.5% annually. Unfortunately, more than 62% of oral cavity tumours are only revealed at stages III and IV of the disease. In these cases, surgical treatment is of critical importance. The operations performed at these stages result in significant defects of the maxillofacial region, whose correction is an extremely complicated task. The use of microvascular grafts enables the surgeon to close these defects to a great extent. The growing requirements for the patient's quality of life in the postoperative period make the surgeon search for new instruments to enhance the precision of planning and performing operations. The development of informational diagnostic devices and methods of high-technology patient care expands the potential of new approaches to processing patient data for treatment planning with modern information systems: computer simulation and additive technologies. The article describes the use of information technologies for the preparation and planning of complex maxillofacial reconstructions. The use of 3D medical images (computed tomography and magnetic resonance imaging), computer-aided design and additive technologies makes it possible to create detailed anatomical computer models and their physical prototypes. The surgeon can use these models to plan treatment, custom-design and manufacture implants, and evaluate outcomes.

Paper ID: 16
Title: Machine Learning-based Imaging in Connected Vehicles Environment
Presenter: Sayon Karmakar | University of Arkansas at Little Rock, USA
Abstract: Intelligent algorithms greatly influence imaging. Machine learning techniques find applicability in correcting and highlighting medical images generated by X-rays, Computed Tomography (CT) scans, Positron Emission Tomography (PET) scans, and Magnetic Resonance Imaging (MRI). Such techniques increase the reliability and quality of diagnosis to aid doctors in devising effective treatment. These systems have found wide applicability in clinical settings. Imaging is also an essential task in autonomous vehicle development and Connected Vehicles. Research on Connected Vehicles is evolving at a staggering rate, with the objective of reducing road accidents significantly and replacing drivers with fully autonomous self-driving vehicles. Driver monitoring systems (DMS) are a new area of research in which drivers are monitored using cameras and other medical sensor networks to detect their medical, mental, and cognitive states. Objective biomarkers allow such systems to predict these states. Imaging plays an essential role in this diagnosis, aided by state-of-the-art machine learning algorithms. This paper addresses the challenges posed by imaging under driving environments for the diagnosis of drivers' medical and cognitive states.

Paper ID: 43
Title: Predicting Neurostimulation Responsiveness with Dynamic Brain Network Measures
Presenter: Jinwei Lang | Hefei Institutes of Physical Science, Chinese Academy of Sciences, China
Abstract: Transcranial direct current stimulation (tDCS) shows great promise in enhancing neurocognitive abilities. However, neurostimulation responsiveness varies greatly. Our previous work demonstrates that people receiving tDCS stimulation over the temporoparietal junction (TPJ) fall into two heterogeneous groups: positive responders, who benefit from tDCS, and negative responders, who are impaired by it. The present study investigated whether dynamic brain network properties of resting-state fMRI could predict this pattern. We calculated the dynamic attributes of each subsystem of the default mode network using a multilayer community detection algorithm. Results indicated that the recruitment indexes differed significantly in bilateral aMPFC, PCC, Rsp, and PHC regions between positive and negative responders. Our results also confirm the advantages of dynamic network measures over static network measures. The study provides a feasible protocol for establishing a pre-stimulation screening procedure using resting-state fMRI.

Paper ID: 46
Title: Data augmentation for breast cancer mass segmentation
Presenter: Jailin Clément | GE Healthcare, France
Abstract: In medical imaging, a major limitation of supervised deep neural networks is the need for large annotated datasets. Current data augmentation methods, though quite efficient at enhancing the performance of deep learning networks, do not include complex transformations. This paper presents a realistic image transformation model mimicking multiple acquisitions, obtained from the analysis of a mammography database composed of screening acquisitions with priors. Our transformation model results from the combination of a registration algorithm, an invariant meshing strategy, and a reduced model describing motion and local intensity variation in paired images. The extracted data variability was then transferred through data augmentation to a small database for the training of a deep learning-based segmentation algorithm. Significant improvements are observed compared to usual data augmentation techniques.

Oral Session 2: Automated Medical Image Analysis

Paper ID: 14
Title: Unsharp Masking with Local Adaptive Contrast Enhancement of Medical Images
Presenter: Ivo Draganov | Technical University of Sofia, Bulgaria
Abstract: In this paper, we present a generalized algorithm for unsharp masking of medical images which takes as one of its inputs a high-contrast image that has undergone local adaptive contrast enhancement. Selection of optimal values for the number of histogram bins, the processing window size, and the lower and upper intensity limits in an iterative manner is part of applying Contrast Limited Adaptive Histogram Equalization (CLAHE). Experimental results reveal higher quality of the output images in terms of both root mean square contrast and sharpness. The achieved quality, both visually and quantitatively, is compared to that of the Adaptive Histogram Equalization (AHE) algorithm, limited histogram stretching, and ordinary histogram equalization, which proves its applicability. The algorithm is considered appropriate for processing a number of image types, such as CT, X-ray, etc.
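The core operation this abstract describes is standard unsharp masking driven by a contrast-enhanced copy of the input. A minimal NumPy sketch, using plain global histogram equalization as a stand-in for CLAHE and a simple box blur; the authors' iterative parameter selection is not reproduced, and all parameter values here are illustrative:

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k-by-k box blur via edge-padded neighborhood averaging."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def equalize(img, bins=256):
    """Global histogram equalization of an image in [0, 1] (stand-in for CLAHE)."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    idx = np.clip((img * bins).astype(int), 0, bins - 1)
    return cdf[idx]

def unsharp_mask(img, amount=1.0, k=3):
    """Sharpen by adding the high-frequency residual of a contrast-enhanced copy."""
    enhanced = equalize(img)
    mask = enhanced - box_blur(enhanced, k)  # high-frequency detail
    return np.clip(img + amount * mask, 0.0, 1.0)
```

On a horizontal intensity ramp, the sharpened output darkens the dark edge and brightens the bright edge, which is the expected unsharp-masking behavior.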

Paper ID: 27
Title: A fully automated end-to-end process for fluorescence microscopy images of yeast cells: From segmentation to detection and classification
Presenter: Asmaa Haja | University of Groningen, Netherlands
Abstract: In recent years, an enormous amount of fluorescence microscopy images has been collected in high-throughput lab settings. Analyzing and extracting relevant information from all these images in a short time is almost impossible. Detecting tiny individual cell compartments is one of many challenges faced by biologists. This paper aims to solve this problem by building an end-to-end process that employs methods from the deep learning field to automatically segment, detect and classify cell compartments in fluorescence microscopy images of yeast cells. With this intention, we used Mask R-CNN to automatically segment and label a large amount of yeast cell data, and YOLOv4 to automatically detect and classify individual yeast cell compartments from these images. This fully automated end-to-end process is intended to be integrated into an interactive e-Science server in the PerICo1 project, which can be used by biologists with minimized human effort in training and operation to complete their various classification tasks. In addition, we evaluated the detection and classification performance of the state-of-the-art YOLOv4 on data from the NOP1pr-GFP-SWAT yeast-cell data library. Experimental results show that by dividing original images into 4 quadrants, YOLOv4 yields good detection and classification results with an F1-score of 98% in terms of accuracy and speed, which is optimally suited for the native resolution of the microscope and current GPU memory sizes. Although the application domain is optical microscopy of yeast cells, the method is also applicable to multiple-cell images in medical applications.

Paper ID: 51
Title: Quantification of Epicardial Adipose Tissue in Low-Dose Computed Tomography Images
Presenter: Goncharov Mikhail | Skolkovo Institute of Science and Technology, Russia
Abstract: The total volume of Epicardial Adipose Tissue (EAT) is a well-known independent early marker of coronary heart disease. Though several deep learning methods have recently been proposed for CT-based EAT volume estimation with promising results, automatic EAT quantification on screening Low-Dose CT (LDCT) has not been studied. We systematically investigate a deep-learning-based approach for EAT quantification on challenging noisy LDCT images using a large dataset consisting of 493 LDCT and 154 CT studies from 569 subjects. Our results demonstrate that (1) a 3D U-Net precisely segments the pericardium interior region (Dice score 0.95 ± 0.00); (2) postprocessing based on a narrow 1-mm Gaussian filter does not require adjustment of the EAT Hounsfield interval and leads to accurate estimation of EAT volume (Pearson's R 0.96 ± 0.01) compared to CT-based manual EAT assessment for the same subjects.
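The two reported quantities have direct definitions: a Dice overlap between binary masks, and an EAT volume obtained by thresholding Hounsfield units inside the pericardium mask. A minimal NumPy sketch; the fat interval of -190 to -30 HU is a commonly used default rather than necessarily the paper's choice, and `voxel_ml` is a hypothetical per-voxel volume:

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def eat_volume_ml(hu, pericardium_mask, voxel_ml, lo=-190, hi=-30):
    """Volume of voxels inside the pericardium whose HU falls in the fat interval."""
    fat = (hu >= lo) & (hu <= hi) & pericardium_mask.astype(bool)
    return fat.sum() * voxel_ml
```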

Paper ID: 33
Title: A new content based image retrieval system for SARS-CoV-2 computer-aided diagnosis
Presenter: Marcelo Mendoza | Universidad Técnica Federico Santa María, Chile
Abstract: Medical images are an essential input for the timely diagnosis of pathologies. Despite their wide use in the area, searching for images that can reveal valuable information to support decision-making is difficult and expensive. However, the possibilities that open up when large repositories of images are made available for search by content are unsuspected. We designed a content-based image retrieval system for medical imaging, which reduces the gap between access to information and the availability of useful repositories to meet these needs. The system operates on the principle of query-by-example, in which users provide medical images and the system displays a set of related images. Unlike metadata-driven searches, our system performs content-based search. This allows the system to conduct searches on repositories of medical images that do not necessarily have complete and curated metadata. We explore our system's feasibility on computed tomography (CT) slices for SARS-CoV-2 infection (COVID-19), showing that our proposal obtains promising results, comparing favourably with other search methods.

Paper ID: 40
Title: Geometrically Matched Multi-source Microscopic Image Synthesis Using Bidirectional Adversarial Networks
Presenter: Dali Wang | University of Tennessee, USA
Abstract: Microscopic images from multiple modalities can produce plentiful experimental information. In practice, biological or physical constraints under a given observation period may prevent researchers from acquiring enough microscopic scans. Recent studies demonstrate that image synthesis is one of the popular approaches to relax such constraints. Nonetheless, most existing synthesis approaches only translate images from the source domain to the target domain without solid geometric associations. To address this challenge, we propose an innovative model architecture, BANIS, to synthesize diversified microscopic images from multi-source domains with distinct geometric features. The experimental outcomes indicate that BANIS successfully synthesizes favorable image pairs on C. elegans embryonic microscopy images. To the best of our knowledge, BANIS is the first application to synthesize microscopic images that associate distinct spatial geometric features from multi-source domains.

Paper ID: 20
Title: Covid-19 Chest CT Scan Image Classification Using LCKSVD and Frozen Sparse Coding
Presenter: Kaveen Liyanage | Montana State University, USA
Abstract: The coronavirus disease 2019 (COVID-19) is a fast-transmitting virus that has spread throughout the world, causing a pandemic. Early detection of the disease is crucial in preventing the rapid propagation of the virus. Although Computed Tomography (CT) is not considered a reliable first-line diagnostic tool, it does have the potential to detect the disease. While several high-performing deep learning networks have been proposed for the automated detection of the virus using CT images, deep networks lack the explainability, clarity, and simplicity of other machine learning methods. Sparse representation is an effective tool in image processing tasks, with efficient algorithms for implementation. In addition, the output sparse domain can easily be mapped back to the original input signal domain, so the features provide information about the signal in the original domain. This work utilizes two sparse coding algorithms, frozen dictionary learning and label-consistent k-means singular value decomposition (LC-KSVD), to help classify COVID-19 CT lung images. A framework for image sparse coding, dictionary learning, and classifier learning is proposed, and an accuracy of 89% is achieved on the cleaned CC-CCII CT lung image dataset.
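Neither LC-KSVD nor the frozen-dictionary training is reproduced here, but the inference step both share, mapping a signal to a sparse code over a fixed dictionary, can be sketched with Orthogonal Matching Pursuit (an illustrative solver choice, not necessarily the paper's):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k dictionary atoms
    (columns of D, assumed unit-norm) and least-squares fit y on them."""
    residual = y.astype(float).copy()
    support = []
    code = np.zeros(D.shape[1])
    for _ in range(k):
        # atom most correlated with the current residual
        atom = int(np.argmax(np.abs(D.T @ residual)))
        if atom not in support:
            support.append(atom)
        # re-fit coefficients on the whole support, then update the residual
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    code[support] = coeffs
    return code
```

With an orthonormal dictionary this exactly recovers a k-sparse signal; the resulting code would then feed a classifier, as in the framework the abstract describes.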

Oral Session 3: Medical Image Segmentation, Registration and Reconstruction

Paper ID: 36
Title: The Art-of-Hyper-Parameter Optimization with Desirable Feature Selection
Presenter: Priynka Sharma | University of the South Pacific, Fiji
Abstract: Cyber-attacks carried out with ransomware have become increasingly refined in practically all systems. Attacks with pioneering ransomware have the greatest complexity, which makes them considerably harder to identify. Radical ransomware can obfuscate many of its traces through mechanisms such as metamorphic engines. Therefore, prediction and detection of malware have become a substantial test for ransomware analysis. Numerous Machine Learning (ML) algorithms exist; considering each algorithm's hyper-parameters (HPs) as well as feature selection strategies, there is a huge number of potential options. We therefore deliberate on the issue of simultaneously choosing a learning algorithm and setting its HPs, going beyond work that addresses these issues in isolation. We address this issue with a fully automated approach, utilizing recent developments in ML optimization. We also show that modifying the data preprocessing brings about more significant progress towards better classification recall.

Paper ID: 4
Title: A Dual supervision guided attentional network for multimodal MR brain tumor segmentation
Presenter: Tongxue Zhou | Université de Rouen Normastic, France
Abstract: Early diagnosis and treatment of brain tumors is critical for the recovery of patients. However, it is challenged by varied brain anatomy, low image contrast and fuzzy contours. In this paper, we present a dual supervision guided attentional network for multimodal brain tumor segmentation. The backbone is a multi-encoder based U-Net. Multiple independent encoders are used to obtain individual feature representations from each modality. A dual attention fusion block is proposed to extract the most informative feature representations from the different modalities. It consists of a spatial attention module and a modality attention module. Since the same brain tumor regions can be observed in the different modalities, the spatial feature representations from different modalities can provide complementary feature representations for segmentation. To this end, a spatial attention based supervision is introduced to enable hierarchical learning of the multi-scale feature representations, and also to provide an additional constraint for the segmentation decoder. In addition, another supervision based on image reconstruction is integrated into the network to regularize the encoders. Ablation experiments and visualization results evaluated on the BraTS 2019 dataset prove that the proposed method achieves promising results.

Paper ID: 10
Title: Three-dimensional image reconstruction of murine heart using image processing
Presenter: Haowei Zhong | South China Agricultural University, China
Abstract: The key role of three-dimensional reconstruction in the analysis of medical imagery has gained increasing recognition over the past 20 years across fields such as computer graphics and biological medicine. Specifically, highlighting the role of isolated discrete mammalian cardiac tissues or organs typically requires a more accurate anatomical reconstruction procedure. To date, however, there has been no unified approach that could be extended to model establishment. This article seeks to address these problems by introducing a new approach for studying the three-dimensional distribution of Pnmt+ cell-derived cells in isolated mouse hearts. The related data come from Scientific Data, describing a new cardiomyocyte population, a specific class of phenylethanolamine n-methyltransferase (Pnmt+) cell-derived cardiomyocytes (PdCMs). Rigid registration was implemented to match the raw sliced images of the murine heart using TrakEM2. Compared to previous reconstruction approaches, our method accomplishes automated 3D reconstruction using image processing. The primary purpose of this paper is to propose an automatic image processing pipeline to recreate the 3D image of the murine heart, which prevents the cell distribution distortion induced by handcrafted noise removal. The final 3D reconstruction was displayed in ParaView.

Paper ID: 34
Title: Glioblastoma Multiforme Patient Survival Prediction
Presenter: Snehal Rajput RamAchal Singh | Pandit Deendayal Petroleum University, India
Abstract: Glioblastoma Multiforme is a very aggressive type of brain tumor. Due to spatial and temporal intra-tissue inhomogeneity, and the location and extent of the cancerous tissue, it is difficult to detect and dissect the tumor regions. In this paper, we propose survival prognosis models using four regressors operating on handcrafted image-based and radiomics features. We hypothesize that the radiomics shape features have the highest correlation with survival prediction. The proposed approaches were assessed on the Brain Tumor Segmentation (BraTS-2020) challenge dataset. The highest accuracy using image features with the random forest regressor was 51.5% on the training and 51.7% on the validation dataset. The gradient boosting regressor with shape features gave an accuracy of 91.5% and 62.1% on the training and validation datasets, respectively. This is better than the BraTS 2020 survival prediction challenge winners on the training and validation datasets. Our work shows that handcrafted features exhibit a strong correlation with survival prediction. The consensus-based regressor with gradient boosting and radiomics shape features is the best combination for survival prediction.

Paper ID: 50
Title: Color-based Fusion of MRI Modalities for Brain Tumor Segmentation
Presenter: Nachwa Aboubakr | University Grenoble Alpes, France
Abstract: Most attempts to provide automatic techniques to detect and locate suspected tumors in Magnetic Resonance images (MRI) concentrate on a single MRI modality, whereas radiologists typically use multiple MRI modalities for such tasks. In this paper, we report on experiments for automatic detection and segmentation of tumors in which multiple MRI modalities are encoded using classical color encodings. We investigate the use of 2D convolutional networks with a classic U-Net architecture.
Slice-by-slice MRI analysis for tumor detection is challenging because this task requires contextual information from 3D tissue structures. However, 3D convolutional networks are prohibitively expensive to train. To overcome this challenge, we extract a set of 2D images by projecting the 3D MRI volume with maximum contrast. Multiple MRI modalities are then combined as independent colors to provide a color-encoded 2D image. We show experimentally that this leads to better performance than slice-by-slice training while limiting the number of trainable parameters and keeping the training data requirement within a reasonable limit.
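The color encoding described above amounts to placing co-registered modality slices into the R, G and B channels of one image. A minimal NumPy sketch, assuming three modalities (here labeled T1, T2 and FLAIR for illustration; the paper's exact modalities and normalization are not specified here) and independent min-max normalization per channel:

```python
import numpy as np

def to_rgb(t1, t2, flair):
    """Fuse three co-registered modality slices into one RGB image,
    min-max normalizing each channel independently to [0, 1]."""
    def norm(x):
        x = x.astype(float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng else np.zeros_like(x)
    return np.stack([norm(t1), norm(t2), norm(flair)], axis=-1)
```

The fused H x W x 3 array can then be fed to an ordinary 2D network such as a U-Net, which is the point of the encoding.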

Oral Session 4: Machine learning and Deep learning

Paper ID: 12
Title: 2Be3-Net: Combining 2D and 3D convolutional neural networks for 3D PET scan predictions
Presenter: Ronan Thomas | EURA NOVA FRANCE, France
Abstract: Radiomics - high-dimensional features extracted from clinical images - is the main approach used to develop predictive models based on 3D Positron Emission Tomography (PET) scans of patients suffering from cancer. Radiomics extraction relies on an accurate segmentation of the tumoral region, which is a time-consuming task subject to inter-observer variability. On the other hand, data-driven approaches such as deep convolutional neural networks (CNNs) struggle to achieve great performance on PET images due to the absence of large available PET datasets, combined with the size of 3D networks. In this paper, we assemble several public datasets to create a PET dataset of 2,800 scans and propose a deep learning architecture named "2Be3-Net", associating a 2D feature extractor with a 3D CNN predictor. First, we take advantage of a 2D pre-trained model to extract feature maps from 2D PET slices. Then we apply a 3D CNN on top of the concatenation of the previously extracted feature maps to compute patient-wise predictions. Experiments suggest that 2Be3-Net has an improved ability to exploit spatial information compared to 2D-only or 3D-only CNN solutions. We also evaluate our network on the prediction of clinical outcomes of head-and-neck cancer. The proposed pipeline outperforms PET radiomics approaches on the prediction of loco-regional recurrences and overall survival. Innovative deep learning architectures combining a pre-trained network with a 3D CNN could therefore be a great alternative to traditional CNN and radiomics approaches while empowering small and medium-sized datasets.

Paper ID: 21
Title: A Hybrid Deep Model for Brain Tumor Classification
Presenter: Hamail Ayaz | Institute of Technology Sligo, Ireland
Abstract: Classification of brain tumors from Magnetic Resonance Images (MRIs) using Computer-Aided Diagnosis (CAD) has faced some major challenges. Diagnosis of brain tumors such as glioma, meningioma, and pituitary tumors mostly relies on manual evaluation by neuro-radiologists and is prone to human error and subjectivity. In recent years, Machine Learning (ML) techniques have been used to improve the accuracy of tumor diagnosis, at the expense of intensive pre-processing and computational cost. Therefore, this work proposes a hybrid Convolutional Neural Network (CNN) (i.e., AlexNet followed by SqueezeNet) to extract quality tumor biomarkers for better performance of the CAD system using brain tumor MRIs. The features extracted using AlexNet and SqueezeNet are fused to preserve the most important biomarkers in a computationally efficient manner. A total of 3,064 brain tumor MRIs (708 meningiomas, 1,426 gliomas, and 930 pituitary tumors) were used in the experiments. The proposed model is evaluated using several well-known metrics, i.e., overall accuracy (94%), precision (92%), recall (95%), and F1 score (93%), and outperforms many state-of-the-art hybrid methods.

Paper ID: 28
Title: A Systematic Literature Review of Machine Learning Applications for Community-Acquired Pneumonia
Presenter: Daniel Lozano-Rojas | University of Leicester, UK
Abstract: Community-acquired pneumonia (CAP) is an acute respiratory disease with a high mortality rate. CAP management follows clinical and radiological diagnosis, severity evaluation and standardised treatment protocols. Although established in practice, protocols are labour-intensive, time-critical and can be error-prone, as their effectiveness depends on clinical expertise. Thus, an approach for capturing clinical expertise in a more analytical way is desirable in terms of cost, expediency, and patient outcome. This paper presents a systematic literature review of Machine Learning (ML) applied to CAP. A search of three scholarly international databases revealed 23 relevant peer-reviewed studies, which were categorised and evaluated relative to clinical output. Results show interest in the application of ML to CAP, particularly in image processing for diagnosis, and an opportunity for further investigation of ML both for patient outcome prediction and for treatment allocation. We conclude our review by identifying potential areas for future research in applying ML to improve CAP management. This research was co-funded by the NIHR Leicester Biomedical Research Centre and the University of Leicester.

Paper ID: 31
Title: Photograph to X-ray image translation for anatomical mouse mapping in preclinical nuclear molecular imaging
Presenter: Dimitris Glotsos | University of West Attica, Greece
Abstract: We present preliminary results of an off-the-shelf approach for the translation of a photographic mouse image to an X-ray scan for anatomical mouse mapping, but not for diagnosis, in functional 2D molecular imaging techniques such as radionuclide and optical imaging. It is well known that preclinical molecular imaging accelerates the drug development process. However, commercial imaging systems have high purchase costs and require costly service contracts, special facilities and trained staff. As an alternative, planar molecular imaging systems provide several advantages, including lower complexity and decreased cost, making them affordable to the small and medium-sized groups working in the field and bridging the gap between biodistribution studies and 3D imaging systems. A pix2pix network was trained to predict a realistic X-ray mouse image from a photographic one (simplifying the hardware and cost requirements compared to standard X-rays), giving the potential to have an anatomical map of the mouse along with the functional information of a molecular planar imaging modality.

Paper ID: 32
Title: Active strain-statistical models for reconstructing multidimensional images of lung tissue lesions
Presenter: Ekaterina Guryanova | Russian Technological University (MIREA), Russia
Abstract: Coupling augmented reality data with data from previous medical studies is most useful for surgeries on organs with little movement and deformation (e.g., skull, brain, and pancreas), as there is an opportunity to define the edges of the organ more clearly. The proposed coupling methods can be used in other operations. Besides, organ imaging techniques can compensate for the lack of tactile feedback during laparoscopic surgery by providing the surgeon with visual cues and improving hand-eye coordination, including in robotic surgery. Using the combined image of MRI, CT angiography, and ultrasound, individual adjustment of incisions and cutting planes, optimal positioning of paracentesis needles, and position display of the organ's main components are realized.

Paper ID: 44
Title: Dysplasia grading of colorectal polyps through convolutional neural network analysis of whole slide images
Presenter: Daniele Perlo | University of Torino, Italy
Abstract: Colorectal cancer is a leading cause of cancer death for both men and women. For this reason, histopathological characterization of colorectal polyps is the major instrument for the pathologist to infer the actual risk of cancer and to guide further follow-up. Colorectal polyp diagnosis includes the evaluation of the polyp type and, more importantly, the grade of dysplasia. This latter evaluation represents a critical step for clinical follow-up. The proposed deep learning-based classification pipeline is based on a state-of-the-art convolutional neural network, trained using proper countermeasures to tackle the high resolution of WSIs and a highly imbalanced dataset. The experimental results show that one can successfully classify adenoma dysplasia grade with 70% accuracy, which is in line with pathologists' concordance.

Paper ID: MD201 Title: Evaluating mobile tele-radiology performance for the task of analyzing lung lesions on CT images Presenter: OMER KAYA | Cukurova University, Turkey Abstract: The accurate detection of lung lesions, as well as the precise measurement of their sizes on Computed Tomography (CT) images, is known to be crucial for the response-to-therapy assessment of cancer patients. The goal of this study is to investigate the feasibility of using mobile tele-radiology for this task in order to improve efficiency in radiology. Lung CT images were obtained from The Cancer Imaging Archive (TCIA). The Bland-Altman analysis method was used to compare and assess conventional radiology and mobile radiology based lesion size measurements. The percentage of correctly detected lesions at the right image locations was also recorded. Sizes of 183 lung lesions between 5 and 52 mm in CT images were measured by two experienced radiologists. Bland-Altman plots were drawn, and limits of agreement (LOA) were determined as the 0.025 and 0.975 percentiles (-1.00, 0.00), (-1.39, 0.00). For lesions of 10 mm and larger, these intervals were found to be much smaller than the decision interval (-30% and +20%) recommended by the RECIST 1.1 criteria. On average, observers accurately detected 98.2% of the total 271 lesions on the medical monitor, while they detected 92.8% of the nodules on the iPhone. In conclusion, mobile tele-radiology can be a feasible alternative for the accurate measurement of lung lesions on CT images. A higher-resolution display technology such as an iPad may be preferred in order to detect new small (< 5 mm) lesions more accurately. Further studies are needed to confirm these results with more mobile technologies and types of lesions.
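The limits of agreement quoted above are taken as the 0.025 and 0.975 percentiles of the paired measurement differences. A minimal sketch of this non-parametric Bland-Altman computation (the data below are hypothetical stand-ins for the study's measurements, not its actual values):

```python
import numpy as np

def bland_altman_limits(sizes_a, sizes_b, lo_pct=2.5, hi_pct=97.5):
    """Non-parametric Bland-Altman analysis of paired measurements.

    Returns the mean difference (bias) and the limits of agreement,
    here the 0.025 and 0.975 percentiles of the paired differences.
    """
    diffs = np.asarray(sizes_a, dtype=float) - np.asarray(sizes_b, dtype=float)
    bias = diffs.mean()
    loa = (np.percentile(diffs, lo_pct), np.percentile(diffs, hi_pct))
    return bias, loa

# Hypothetical paired lesion-size measurements (mm): monitor vs. mobile device
rng = np.random.default_rng(0)
monitor = rng.uniform(5, 52, size=183)
mobile = monitor - rng.normal(0.3, 0.4, size=183)  # small simulated disagreement
bias, (lo, hi) = bland_altman_limits(monitor, mobile)
```

In RECIST-style response assessment, the comparison of interest is whether this (lo, hi) interval lies inside the clinical decision interval, as the abstract describes.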

Paper ID: MD202 Title: Learning Transferable Features for Diagnosis of Breast Cancer from Histopathological Images Presenter: Maisun Mohamed Al Zorgani | University of Bradford, UK Abstract: Nowadays, there is no argument that deep learning algorithms provide impressive results in many medical image analysis applications. However, the data scarcity problem and its consequences are challenges in deep learning implementation for the digital histopathology domain. The architecture of a deep learning model has a significant role in the choice of the optimal transferable features to adopt for classifying cancerous histopathological images. In this study, we have investigated three convolutional neural networks (CNNs) pre-trained on the ImageNet dataset, namely ResNet-50, DenseNet-201 and ShuffleNet, for classifying the BreAst Cancer Histology (BACH) Challenge 2018 dataset. The deep features extracted from these three models were utilised to train two machine learning classifiers, the K-Nearest Neighbour (KNN) and the Support Vector Machine (SVM), to classify the breast cancer grades. Four grades of breast cancer are present in the BACH challenge dataset, namely normal tissue, benign tumour, in-situ carcinoma and invasive carcinoma. The performance of the SVM and KNN classifiers is evaluated. Our experimental results show that the off-the-shelf features extracted from the DenseNet-201 model provide the best predictive accuracy with both the SVM and KNN classifiers. They yield image-wise classification accuracies of 93.75% and 88.75% for the SVM and KNN classifiers, respectively. These results indicate the high robustness of our proposed framework.
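The pipeline described, off-the-shelf deep features fed to classical classifiers, can be sketched as follows. Since extracting real DenseNet-201 embeddings requires pretrained weights and the BACH images, synthetic feature clusters stand in for them here; all sizes and parameters are illustrative, not those of the study:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for off-the-shelf deep features: one Gaussian
# cluster per grade, playing the role of DenseNet-201 embeddings.
rng = np.random.default_rng(42)
n_per_class, dim = 100, 64
grades = ["normal", "benign", "in-situ", "invasive"]
X = np.vstack([rng.normal(loc=3.0 * k, scale=1.0, size=(n_per_class, dim))
               for k in range(len(grades))])
y = np.repeat(np.arange(len(grades)), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Train the two classifiers named in the abstract on the frozen features.
svm_acc = SVC(kernel="linear").fit(X_tr, y_tr).score(X_te, y_te)
knn_acc = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).score(X_te, y_te)
```

The design point is that the CNN is used only as a fixed feature extractor; only the lightweight SVM/KNN heads are trained, which is what makes the approach attractive when labelled data are scarce.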

Paper ID: MD205 Title: Deep YOLO-Based Detection of Breast Cancer Mitotic-Cells in Histopathological Images Presenter: Maisun Mohamed Al Zorgani | University of Bradford, UK Abstract: Coinciding with advances in whole-slide imaging scanners, it has become essential to automate conventional image-processing techniques to assist pathologists with tasks such as mitotic-cell detection. In histopathological image analysis, the mitotic-cell count is a significant biomarker for the prognosis of breast cancer grade and its aggressiveness. However, counting mitotic cells is tiresome, tedious and time-consuming due to the difficulty of distinguishing between mitotic cells and normal cells. To tackle this challenge, several deep learning-based Computer-Aided Diagnosis (CAD) approaches have lately been developed to count mitotic cells in histopathological images. Such CAD systems achieve outstanding performance, so histopathologists can utilise them as a second-opinion system. However, improving CAD systems remains important as deep learning network architectures progress. In this work, we investigate the deep YOLO (You Only Look Once) v2 network for mitotic-cell detection on the ICPR (International Conference on Pattern Recognition) 2012 breast cancer histopathology dataset. The obtained results show that the proposed architecture achieves a good F1-measure of 0.839.
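An F1-measure for a detector is computed by matching predicted boxes to annotated mitoses and counting true/false positives. A minimal sketch of one common matching scheme, based on bounding-box IoU; note this is illustrative only, and the actual ICPR 2012 benchmark scores by centroid distance rather than IoU:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def detection_f1(preds, gts, thr=0.5):
    """F1 via greedy one-to-one matching of predictions to ground truth."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# One detection overlaps a true mitosis; one is a false positive; one mitosis is missed.
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 10, 10), (100, 100, 110, 110)]
f1 = detection_f1(preds, gts)  # precision = recall = 0.5, so F1 = 0.5
```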

Keynote Session 3

Title: Future of AI-assisted endoscopic procedures Keynote speaker: Kensaku Mori | Nagoya University, Japan Abstract: This talk will give some perspectives on the future of endoscopic procedures augmented by artificial intelligence systems. Artificial intelligence (AI) is attracting much attention across many fields, and many AI-based applications have been introduced in the media. Medicine is one of the fields that has started to benefit from AI, including computer-aided diagnosis and computer-assisted intervention; automated organ recognition and lesion detection are typical examples of AI systems. Endoscopic procedures are particularly well suited to AI, since they are performed based on videos captured by endoscopic cameras. Surgical or examination scene recognition will help medical doctors perform safe and accurate surgery or examination. Our research group has recently developed an AI-assisted colonoscopy system that detects colonic polyps and classifies them into pathological types in real time. In such a system, physicians interact with the AI and make the final decisions. In semi-autonomous laparoscopy, a robot will automatically determine its viewing position based on AI-based scene recognition results. Pre-operative information, including pre-operative CT images, is analyzed to generate AI-based surgical navigation information. AI has great potential to change endoscopic procedures in the near future, and we would like to discuss such futures based on our current developments.

Keynote Session 4

Title: Lung Imaging and Machine Learning for Chronic Obstructive Pulmonary Disease Keynote speaker: Joseph M. Reinhardt | The University of Iowa, USA Abstract: Chronic obstructive pulmonary disease (COPD) is the third leading cause of death in the U.S. and is a serious health problem worldwide. COPD is a complex lung disease characterized by permanent airflow obstruction. COPD is often caused by smoking, but it can also occur due to environmental exposure or genetic factors. Computed tomography (CT) imaging can describe the spatial distribution of the disease and measure the extent of emphysema and airway disease in COPD. In this talk, I will describe how image-based features derived from lung tissue texture patterns and biomechanical measurements computed using image registration can improve our understanding of the normal and diseased lung and provide diagnostic information to help detect, stage, and predict the progression over time of diseases such as COPD.
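As an illustrative sketch only (not the speaker's method), the local volume change that image-registration-based biomechanical analysis measures can be estimated from the Jacobian determinant of the displacement field, shown here in 2-D on a hypothetical field:

```python
import numpy as np

def jacobian_determinant(disp):
    """Pointwise Jacobian determinant of a 2-D displacement field.

    disp has shape (H, W, 2): disp[..., 0] is the row (y) displacement
    and disp[..., 1] the column (x) displacement, in voxel units. The
    determinant of the deformation gradient I + grad(u) estimates local
    volume change: > 1 means expansion, < 1 contraction (e.g. of lung
    tissue between inspiration and expiration scans).
    """
    dy_dy, dy_dx = np.gradient(disp[..., 0])
    dx_dy, dx_dx = np.gradient(disp[..., 1])
    return (1.0 + dy_dy) * (1.0 + dx_dx) - dy_dx * dx_dy

# Uniform 10% expansion along both axes: determinant is 1.1 * 1.1 = 1.21
ys, xs = np.mgrid[0:16, 0:16].astype(float)
disp = np.stack([0.1 * ys, 0.1 * xs], axis=-1)
jac = jacobian_determinant(disp)
```

In lung studies this quantity is typically computed in 3-D from the registration between breath-hold CT scans; the 2-D version above only illustrates the principle.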