Ethically Speaking #3: Machine Vision and AI Impacts of Patient Care, with Mohammad Mahoor, PhD Transcript
Dylan Doyle Burke 0:12
Welcome to the Ethically Speaking podcast. I am your host, Dylan Doyle Burke. Ethically Speaking is a podcast dedicated to asking the fundamental questions of what it means to be ethical in a rapidly evolving world. This episode is part of a series focused on interrogating the ethical questions of artificial intelligence and machine learning. Ethically Speaking is sponsored by the Institute for Enterprise Ethics, housed at the Daniels College of Business at the University of Denver. The Institute's mission is to serve the public good by integrating ethics into professional and corporate cultures. In today's episode, we are delighted to be in conversation with Dr. Mohammad Mahoor. Dr. Mahoor is a professor in the Department of Electrical and Computer Engineering at the University of Denver, and the Director of the Computer Vision and Social Robotics Laboratory. Dr. Mahoor received his PhD from the University of Miami, and his research focuses on visual pattern recognition, social robot design, and bioengineering. Dr. Mahoor is an expert on computer vision and pattern recognition, machine learning and deep learning, algorithm development, affective computing (including facial recognition), socially assistive robotics, social behavior analysis, and so, so much more. By the end of this episode, we hope that folks listening will have a better sense of how machine vision and artificial intelligence robotics can help patients who may have Alzheimer's, dementia, or other neurological concerns. We also hope that folks at the end of this episode have a better sense of the ethical themes surrounding machine vision as it continues to grow and develop out in industry and out in our world. I'm here with Dr. Mohammad Mahoor. Welcome to the show, Dr. Mahoor.

Dr. Mohammad Mahoor 2:34
Thank you. Good morning, and thanks for having me.

Dylan Doyle Burke 2:37
Absolutely. So, as we get started, could you tell us a little bit more about yourself and how you got into this field of artificial intelligence and machine learning?

Dr. Mohammad Mahoor 2:44
Yes, so I've been on the faculty at the University of Denver in the Department of Electrical and Computer Engineering for about 12 years now. And my research is about computer vision and machine learning; nowadays, they call it AI. Although, you know, AI is a general buzzword nowadays, specifically my research is about affective computing, emotion recognition, and then using emotion recognition technology in social robotics. So, the idea is to humanize, you know, machines and robots, right? So [to] use social robots to assist and help people with different disabilities, such as children with autism and older adults with depression, social isolation, Alzheimer's disease, and dementia. So, I would say, you know, affective computing and social robotics, right now.

Dylan Doyle Burke 3:43
And we'll get more into the specifics of your research. But how did you find yourself working on this? Was there a moment where you decided, "this is what I want to focus on?"

Dr. Mohammad Mahoor 3:52
Yes. So, I mean, when I was a graduate student, I was very interested in image processing. And then I got really interested in face recognition. My PhD was all about biometrics, face recognition, and face identification. And then I worked as a postdoc in the Psychology Department.
So, I learned from psychology about emotion. And the question we were working on back then, what the research actually was all about, was using automated technology, or computer algorithms, to analyze people's eye gaze, attention, and facial expressions. And particularly, we wanted to see how emotion develops in children over time. So, I worked with developmental psychologists, and my job as a postdoc was to come up with computer vision algorithms to automatically recognize facial expressions and also eye gaze and attention. So that's how I got into the field of affective computing and emotion recognition. And then after I joined DU... Really, I enjoyed what was going on here in the Engineering School at DU. And I saw a lot of robots here. And then, with the help of one of the faculty, we focused mostly on social robotics, or humanoid robots, as they were called in the past. And so, then we thought about applications. Since I had worked with children with autism as a postdoc in the Psychology Department, I thought that we could use technology to make a better life for children with autism. So that's the first research that I started on social robotics. And then after that, we thought about other applications of social robots. Depression was another application that we worked on. And then we thought, "okay, so maybe a social robot could help people with cognitive impairment as well," such as older adults with Alzheimer's disease or dementia.

Dylan Doyle Burke 6:24
Yeah, I am curious about the "how" of how machine vision can help, and whether that's changed over time as the technology has changed.

Dr. Mohammad Mahoor 6:37
Yeah, absolutely. So, machine vision or computer vision... We call it mostly computer vision because we work with computers rather than just specific machines to run those algorithms. So, the idea is basically... I mean, you use a camera to be able to see the world, and then [use] AI and machine learning to be able to do object recognition, body tracking, even medical image analysis, image understanding, and so on and so forth. So, in the early days, maybe 30 years ago, computer vision was mostly about image processing. But then AI researchers worked on other problems such as motion tracking, motion estimation, pose estimation, and then object detection, face recognition, face detection, and so on and so forth. But nowadays, with the improvements in machine learning, AI, and specifically deep learning and convolutional neural networks, you're able to build systems that are end to end; they are more powerful, they can basically model a huge amount of data, they have more capacity in terms of machine learning.

Dylan Doyle Burke 7:58
Can you give an example of something that's possible now with deep learning and neural networks that wouldn't have been possible before?

Dr. Mohammad Mahoor 8:06
Yeah, so deep learning, I mean, it's something that we had in the 80s, but we didn't have enough data, and also we didn't have powerful enough computers to be able to train models. And of course, there have been advancements in deep learning techniques and neural networks. So, one of the things that maybe 10 or 15 years ago we could do, but not with good accuracy, was object detection and object recognition. But using deep learning, we can train a model using millions of images and samples.
Using, for example, the ImageNet dataset, which contains millions of images. So, then we can do object recognition with very high accuracy, very close to or even better than human accuracy. So, what does "object recognition" mean? It means that you have an image, and you look into that image, and you want to know what objects are in the image and label them. For example, if you take a picture of my office, we can say those are all books, right? That's a computer. That's, you know, my backpack, and so on and so forth. So that sort of thing.

Dylan Doyle Burke 9:29
And so then how do you apply that to, say, a child with autism? How would that help them out in the field?

Dr. Mohammad Mahoor 9:36
Yes. So, in the case of autism, for example, it can be used to... I mean, to practice social skills with them, to interact with them, and then, through the interaction, we basically teach them eye gaze attention, turn taking, and emotion recognition, describing emotions, and so on and so forth. So, we use it as a tool to help them and assist them with practicing social skills. So, the idea is not to supplant human [work], but basically to assist [the] therapies. And it's mostly because children with autism like objects, robots, and technology, and human interaction is very overwhelming for them. So, we take advantage of technology to connect with them and help them. I have even seen that people have created wearable devices that children with autism can use to give them suggestions. It's not been commercialized yet; I think these are still research projects and research tools, but I see promise for them to be used in the future.

Dylan Doyle Burke 11:00
So, in that rollout, when it does become commercialized and it moves from academia to industry, how do we do that well?

Dr. Mohammad Mahoor 11:13
So, you mean, where are we now in terms of how many companies have been commercializing these kinds of technologies? Or...

Dylan Doyle Burke 11:22
And also, with a gaze towards ethics, when we're talking about robotics, and then it goes out into the market, maybe to publicly traded companies, and kind of those kinds of things?

Dr.