A DEEP LEARNING ODYSSEY: HOW AI CAN HELP THE WORLD

Paul Brant
Senior Tech. Ed. Consultant
Dell EMC | Education Services
[email protected]

Table of Contents

Table of Contents ...... 2

Table of Figures ...... 3

Abstract ...... 5

Introduction ...... 6

The Cloud to Brain Gateway ...... 7

What does HAL Think ...... 7

The Tail Wagging the Dog ...... 8

The Technological Singularity ...... 9

Will they take over our Jobs? ...... 14

Who is in the AI job crosshairs? ...... 15

What is Artificial Intelligence? ...... 16

Why AI Today - The Perfect Storm ...... 18

Inexpensive parallel computation ...... 19

Big Data ...... 20

Better algorithms ...... 21

The Network Effect ...... 21

A deeper Look at the algorithm ...... 22

New software/algorithms ...... 24

How can computers be like us? ...... 27

They can be Creative ...... 28

They can Dream...... 28

They can experience the World ...... 29

Why is Machine Learning important to businesses ...... 30

Business Intelligence and Big Data ...... 30

Neural networks ...... 35


Perceptrons ...... 36

What is Deep learning? ...... 37

Affective Computing ...... 38

New hardware ...... 38

Neuromorphic chips ...... 39

How will AI change the world? ...... 42

The Master Algorithm ...... 42

Where the jobs will truly be ...... 45

National security and AI ...... 46

What does the Future Hold? ...... 47

Summary ...... 47

Author’s Biography ...... 48

Appendix A – References ...... 49

Table of Figures

Figure 1 - The Cloud to Brain Gateway ...... 7

Figure 2 - The Countdown to the Singularity ...... 10

Figure 3 - History of Technology ...... 11

Figure 4 - The power of doubling ...... 12

Figure 5 - John Henry ...... 15

Figure 6 - Artificial Intelligence Taxonomy ...... 17

Figure 7 - Number of Startups over Time ...... 19

Figure 8 - AI Funding Investments over Time ...... 19

Figure 9 - GPU ...... 20

Figure 10 - The AI Network Effect ...... 21

Figure 11 - What is AI ...... 22


Figure 12 - From Monoliths to Microservices ...... 23

Figure 13 - What is a Data Scientist ...... 24

Figure 14 - Software Algorithm Taxonomy ...... 25

Figure 15 - What is Machine Learning ...... 26

Figure 16 - Van Gogh replication ...... 28

Figure 17 - AI systems can dream ...... 29

Figure 18 - AI systems can play ...... 30

Figure 19 - BI maturity model with new requirements ...... 33

Figure 20 - BI maturity model with Machine learning decision automation ...... 34

Figure 21 - The Numbers to find ...... 35

Figure 22 - The numbers to find ...... 36

Figure 23 - A Perceptron ...... 36

Figure 24 - Multiple Perceptrons ...... 37

Figure 25 - Architectural Differences ...... 39

Figure 26 - Traditional vs. Neurosynaptic ...... 41

Figure 27 - AI System Example ...... 42

Disclaimer: The views, processes or methodologies published in this article are those of the authors. They do not necessarily reflect Dell EMC’s views, processes or methodologies.


Abstract

Artificial Intelligence and Big Data are turning the world upside down. It is appealing to dismiss the notion of highly intelligent machines as mere science fiction. That would be a mistake. Big Data is making artificial intelligence (AI) real and extremely relevant. The potential benefits are massive. Think about it: human intellect, over thousands of years, has driven our world to greater heights. With AI and new computer learning techniques, the world can accelerate this push to higher levels of intelligence. Everything that civilization has to offer is a product of human intelligence. We cannot predict what we might achieve when this intelligence is magnified by the tools that AI and Big Data provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.

We will find that there are few limits that cannot be overcome; the physics shows there are very few hard boundaries. Can this newfound intelligence be used in a negative way? Are there ethical implications of artificial intelligence use or misuse? Can artificial intelligence manipulate our thoughts, actions and approaches? Is this the "Terminator" scenario? Many believe this is possible.

Those are the downsides. However, there are many upsides. AI and machine learning make up the modern science of finding patterns and making predictions from data, based on work in multivariate statistics, data mining, and advanced/predictive analytics. Machine learning methods are particularly effective in situations where deep and predictive insights need to be uncovered from data sets that are large, diverse and fast changing: Big Data. Across these types of data, machine learning easily outperforms traditional methods on accuracy, scale, and speed.

This Knowledge Sharing article will explain and discuss the importance of AI and Big Data and how it is changing the world.


Introduction

This is a tale of two outcomes. There is no doubt that Artificial Intelligence (AI) and Big Data will change our lives like never before; the question is, will it be for the better or the worse? Will the future of AI be akin to the HAL 9000 computer from the 1968 blockbuster movie "2001: A Space Odyssey", a disconnected and isolated machine animated by a captivating (yet potentially homicidal) humanlike consciousness? Alternatively, will AI and Big Data transform our lives forever and offer a rebirth into a new world, with machines acting as benevolent benefactors and unobtrusive partners in a better life?

This, however, is a given: over the next 20 years, a robot revolution will transform our global economy in many ways. Even Mark Zuckerberg of Facebook thinks so. His 2016 resolutioni is to own his own AI robot. "He wants his very own Jarvis to help with his kids", just as in the Iron Man movies. AI is big, very big!

AI is very much part of both Cloud Services and Big Data, and many think its incarnation will be in the form of what Amazon Web Services is doing today and is expected to do in the future. Going back a few decades, AI was supposed to be one big monolithic computer, akin to the HAL 9000. Now it runs on the cloud, which is really inexpensive, always working (or at least I hope so), and built to stand the test of time. It will be ubiquitous and never show its face unless it wants to. The amazing thing is that, similar to how we think of utilities like water and power, AI will become another utility. I remember the movie "Forbidden Planet", one of my favorite science fiction movies of all time. It is the story of a race called the "Krell" that went extinct because they created a machine that amplified their minds and allowed anything they thought of to actually happen. This brain amplification system had unlimited power and, as we all know, unlimited power leads to the potential of unlimited corruption. Do you think this story is science fiction? It's happening today and you may not even know it. That little thing you carry around with you, the smart phone, is today's incarnation of "your" brain amplifier!

There will be more on this later. As time moves on, AI will offer so much intelligence that, if not used properly, it could bring down our civilization; certainly something to think about.

Inert objects, also known as the Internet of Things (IoT), will be brought to life just as electricity brought objects to life decades ago. The IoT will become cognitive. The hope is that AI will work hand in hand with us as an equal and supporting partner, augmenting our senses and building on what we already have. Again, AI will become a brain amplifier. Startups are everywhere, taking "x" and adding AI to it. This is really huge, and, many now believe, it is really here!

The Cloud to Brain Gateway

By 2030, we will have a brain gateway to the clouds.

Does this seem to be science fiction? Ray Kurzweil,ii who heads AI work at Google, predicts this will happen, and over the last few decades 86 percent of his predictions have come true, so he has a good record of accomplishment. The reason for this brain gateway to the cloud is that our brains have limited physical capacity while the cloud is effectively unlimited. When this gateway is created, as shown in Figure 1, we will have scalable intelligence, the ability to search at a thought, download expertise, and much more. This will happen based on the law of accelerating returns; more on all of this later.

Figure 1 - The Cloud to Brain Gateway

What does HAL Think

Let us get it from the horse's, or should I say the computer's, mouth. Back in the 1960s, the HAL computer was the epitome of what was considered artificial intelligence. Let us watch this video as he explains what AI is and why we should not worry and should simply enjoy the ride.


Please launch Part 1 of the video attachment. To see it, run Part 1 of the video in the PDF or the video file that accompanies this paper.

The Tail Wagging the Dog

Are AI computers more like us, or are we more like AI computers? Many believe that AI will ultimately surpass human intelligence at all tasks. In terms of neuroscience, there are correlations between stimuli to human senses and activity in specific neurons, and likewise between human neural activity and muscle action. Neurons activate when visual stimuli have specific shapes or move in specific ways. A particular type of neuron in monkey brains, called a mirror neuron, triggers when a monkey rips paper or even hears paper rippingiii. When studying the ability of animals to learn, scientists have found detailed similarities between patterns of neural activity and the machine learning algorithms developed by programmersiv, suggesting that animal brains learn by algorithms much like those used in deep learning AI systems.

This physical-to-mental brain function link reinforces the idea that technology will at some point create artificial brains similar to ours. An outcome of this is that, by scaling the size and complexity of AI systems through, for example, cloud services, the ever-increasing power of CPUs, and massive amounts of data, AI systems will most assuredly surpass human intelligence eventually. What makes us think this will really happen? The answer is the "Technological Singularity" and the "law of accelerating returns".

The Technological Singularity

Will information technology (IT) systems ever surpass what the best human brains can do today? Will IT systems be able to think, solve problems and learn by themselves, much better than us? Is this science fiction? Many do not believe so. This theoretical point in time, at which IT systems can do an equivalent or better job than our brains, is called the technological singularity! The amazing thing is that the way to get to this singularity is through the work technologists like us do every day. It will be accomplished through the exponential growth of IT.

Let us start by considering the exponential growth of IT. What does that really mean? The answer lies in understanding how life has evolved over the years to get us to where we are today and how our creative minds in conjunction with information technology (IT) have worked and continue to labor hand in hand.

The Countdown to Singularity graph (below) shows an interesting phenomenon of how life has changed based on special events, also known as paradigm shifts. Paradigm shifts are changes in how we understand things and get things done over time. For example, in the top-left corner of the graph you see the start of Life. As you move down the line, life expands and develops from single-cell organisms to higher-order animals to humans walking upright and ultimately to living in cities, art, culture, technological advancements, etc., to the present day.


Figure 2 - The Countdown to the Singularity

This graph shows that, from the very beginning, biology in conjunction with technology has driven how quickly paradigm shifts (next events) occur. Even though the sequence (from the upper left to the lower right) of "next events" looks like a straight line, the graph is logarithmic. It shows that "next events" occur ever more quickly, accelerating with each passing year. In other words, big changes in our lives are happening perpetually faster as time moves forward, and there seems to be no sign of it slowing down. It is for this reason that many people believe that very soon computers will become equal to humans in terms of intelligence. Some think as early as 2040, but many believe much sooner. The technological singularity concept might be new to many. However, practitioners in IT have surely heard of Moore's Law, which predicts that the number of transistors on a chip, and with it the chip's power, will double every one to two years, as shown on the Exponential Growth of Computing graph in Figure 3.


Figure 3 - History of Technology

For the most part, it appears as a straight line showing growth of performance over time but, again, the graph is logarithmic, so performance is actually growing exponentially. This graph also shows that accelerated growth does not just apply to computer chips (Moore's Law predictions). Prior to computer chips there were punch cards, relays, vacuum tubes, transistors and then integrated circuits, which were all paradigm shifts in their own right. They all experienced accelerated growth. Another interesting aspect of this graph is that it shows that as technologies "hit the wall" because they are no longer able to scale or grow (think transistors), an amazing thing happens: some new paradigm pops up that is faster and less expensive and continues the straight-line growth on the logarithmic curve.

This suggests that Moore's Law is really part of a bigger picture called The Law of Accelerating Returns. What this law describes is that every year information technologies of all kinds double their power or usefulness! This means that price-performance, capacity and network bandwidth, to name a few, double every year, as shown in Figure 4.


Figure 4 - The power of doubling
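To see what yearly doubling implies, here is a quick back-of-the-envelope calculation in Python (purely illustrative; the one-year doubling period is the article's premise, not a measured constant):

```python
# If a capability doubles every year, after n years it has grown by a factor of 2**n.
for years in (1, 5, 10, 20, 30):
    print(f"after {years:2d} years: {2 ** years:>13,}x the original capability")
```

Twenty doublings already exceed a million-fold improvement, which is why exponential curves look deceptively flat at first and then explode.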

Again, Figure 3 shows that technological change is exponential. What that means is that, at today's accelerating rate, what used to take fifty or one hundred years of progress will be compressed into far shorter periods, and a century of progress will feel more like thousands of years' worth. That is the reason some people think that, in a few decades, machine intelligence will surpass human intelligence and lead us to the technological singularity. The implications are many. For example, this might mean that we can mingle computer and human intelligence. If we could map our neurons and "download" them to a digital system, we could potentially live forever, or at least for as long as the power stays on. We could achieve super-high intelligence beyond any imagination. These predictions admittedly are a little scary, but there seems to be no limit to the human, or should I say the human-machine, imagination!

Engineers and scientists at Google are working very hard in this area. Ray Kurzweil, who some say is today's Thomas Edison, is a well-known scientist, writer, and proponent of The Law of Accelerating Returns as well as the singularity. He recently joined Google, and his current projects are in line with his latest book, How to Create a Mind, and how technology will advance to produce a cybernetic friend. Google, with Ray Kurzweil's help, has recently made major improvements to its speech-recognition software by using new techniques based on models inspired by biological neurons, mimicking how our brains work. (See Google Puts its Virtual Brain Technology to Work.) This in-progress work is focusing on natural language understanding so that computers will have the ability to understand the language that they are reading and the images that they are seeing.


What does this singularity mean to us in IT? It means that in order for the singularity to happen, science needs the stuff that Dell EMC and other technology companies are building or will be building.

For example, our brains are comprised of billions of connections or neural networks. Given the cost and limited resources of current IT infrastructures, building networks beyond a few million nodes is challenging. Some form of in-memory computing will be needed for computers to reach a scale large enough to imitate our brains. In-memory computing is an emerging paradigm enabling organizations to develop applications that run advanced queries on very large datasets or perform complex algorithms much faster and in a more scalable way than with conventional architectures. This is achieved by storing application data in memory (that is, in the computer's main memory) rather than on spinning disks, without compromising data availability, consistency and integrity. This is exactly what Dell EMC flash storage products, such as Dell EMC's DSSD technology, do. In-memory computing opens unprecedented and partially unexplored opportunities for business innovation (for example, via real-time analysis of big data in motion) and cost reduction (for example, through database or mainframe off-loading). However, until recently, only the most deep-pocketed and technologically savvy organizations (in verticals like financial trading, telecommunications, military and defense, online entertainment and logistics) could afford the high costs and complexities of adopting an in-memory computing approach. Looking forward, we can project that in-memory computing may also be a great fit for helping computers mimic what our brains do every day.

Many technologists believe that we have a long way to go to achieve this form of high-level cognitive thinking and self-learning. However, history has shown us time after time that what we thought would take many years to do, actually occurs much faster. For example, it took 15 years to sequence the HIV virus, but the SARS virus was sequenced in 31 days. The reason is the amount and speed of genetic data that is sequenced doubles every year and the cost of sequencing it has come down by 50% year over year in line with the law of accelerating returns.

Does this all sound like science fiction? Are futurists who are predicting the arrival of the technological singularity completely wrong? Will AI ever become more intelligent than we are? Only time will tell. However, as previously mentioned, history shows us that advances like this usually happen faster than anticipated or predicted! There is also a paradox to this story. If we are building these computers, how can we construct something that is more intelligent than we are?


The answer lies in Big Data and algorithms.

Will they take over our Jobs?

AI machines will cut the costs of doing business in every industry. Will they be taking over our jobs? In many cases, they will. Robots, however, are nothing new. They have been used in heavy industrial verticals such as automotive and aerospace, to name just a few. What is new, according to a study by Bank of America (BOA)v, is a fear that they will accelerate social inequality by taking jobs from many industry and service verticals such as elderly care, fast food preparation and transportation services. In addition to these manual jobs, the development of artificial intelligence means computers are increasingly able to "think"! Computers are now able to perform analytical tasks once seen as requiring human judgment.

This transformation is all part of the fourth industrial revolution, after steam, mass production and computer electronics.

For those who may not have noticed, the rate of technological innovation has been going ballistic for many years, actually for thousands of years. Many believe that we are now in the fourth industrial revolution. AI is hitting every industry sector and is becoming part of our daily lives.

This revolution could leave up to 47% of all workers in the USA displaced by technology over the next 20 years, according to Oxford University research cited in the BOA report, with job losses mostly at the bottom of the income scale, such as the aforementioned service jobs, as well as in the middle class.

The big internet companies get it. Google has bought many robotics businesses over the last few years, including Boston Dynamics, which makes the BigDog robot, and DeepMind, which focuses on deep learning for artificial intelligence. Robotics is also changing the tide in offshore manufacturing: the economics of offshoring have traditionally saved up to 65% of labor costs, but with robotics, up to 90% can be achievedvi.

According to the article, on average there are about 66 robots for every ten thousand workers worldwide. However, in more industrialized nations like Japan, the ratio of robots to workers is much higher: for the same number of workers (10,000) there are 1,520 robots.

There are also some aspects of human behavior that can be enhanced by AI counterparts. The article states that, in certain situations, judges are more equitable and fair after lunch, with a full stomach, than before. I am not implying that AI systems will preside over your court case, but AI might make things fairer.

This example highlights concerns that are ethical in nature. Today, the military is using and touting the use of unmanned drones in warfare, and there is a groundswell of groups bringing to light negative use cases for AI, such as the Campaign Against Sex Robots.

Fears about humans being replaced by machines are nothing new. The folk tale of "John Henry" (Figure 5) exemplifies the man-versus-machine competitive paradigm. It has been shown that, over the past few hundred years and beyond, societies have created ways of turning technological developments to their advantage. Education is one approach to this age-old competition, but AI may ultimately have the advantage.

Figure 5 - John Henry

Who is in the AI job crosshairs?

A wide range of jobs could eventually be taken over by machines, Bank of America Merrill Lynch’s analysts predictvii.

Repetitive tasks: For example, the article mentions that a company called Momentum Machines has designed a system to emulate human repetitive tasks, i.e., making hamburgers. Basically, any task that is repetitive, such as adding extras to your burger, can be automated.

Manufacturing: In the 80's and 90's, offshoring jobs was popular and, for the most part, still is today. Even though this approach can cut labor costs by up to 60 percent, it has been shown that, by using machines, savings of up to 90 percent can be achieved.

Financial Management: Even the financial sector is being touched. Financial advisors have long been the stalwart of personal service with the human touch, but with advanced AI algorithms that can customize services based on personal needs, the results can quite often be better than those of a human advisor. For example, a new company called Wallet "builds machines to help consumers make smarter decisions about their money, especially when humans are out spending it like a drunken sailorviii".

Doctors: Some 570,000 "robo-surgery" operations were performed last year. Oncologists at the Memorial Sloan-Kettering Cancer Center in New York have used IBM's supercomputer, which can read one million textbooks in three seconds, to help them with diagnosis. Other medical applications of computer technology involve everything from microscopic cameras to "robotic controlled catheters".

Care workers: Merrill Lynch predicts that the global personal robot market, including so-called "care-bots", could increase to $17bn over the next five years, "driven by rapidly ageing populations, a looming shortfall of care workers, and the need to enhance performance and assist rehabilitation of the elderly and disabled".

What is Artificial Intelligence?

AI, simply put, is having computers do what people do today, focusing on the things that intelligent people do. We use AI all the time in our daily lives, but we often don't realize it. Robots are not AI but are containers for AI. For example, Apple's Siri is a representation of AI, and there is no robot involved at all. Finally, AI is a broad concept, so while there are many different types or forms of AI, the critical categories we need to think about are based on an AI's various capabilities.


Figure 6 - Artificial Intelligence Taxonomy

As shown in Figure 6, there are three major AI categories:

1. Artificial Narrow Intelligence (ANI): This form of intelligence, often referred to as Weak AI, offers algorithms that specialize in one area. There is AI that can beat the world chess champion in chess, but that is the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it will look at you blankly.

2. Artificial General Intelligence (AGI): This form of intelligence, often referred to as Strong AI or Human-Level AI, refers to a computer that is as smart as a human (at least in most aspects). Creating AGI is much harder than producing ANI, and computer scientists are not there yet. It involves an overall mental capability that, among other things, includes the ability to reason, think abstractly, plan, solve problems, comprehend complex ideas, learn quickly, and learn from experience.


3. Artificial Super Intelligence (ASI): This is the ability to do as well as or better than humans in every way. It includes all aspects of intelligence and every field, including scientific skill, creativity, social ability and even wisdom, which requires knowledge and understanding of what happened in the past. This form ranges from a computer that is just a little smarter than a human to one that is millions or even billions of times more intelligent in all realms. ASI is the reason the topic of AI, and the associated resources of big data, is such a compelling and often unsettling subject, where immortality and human extinction are often discussed.

As of the date of publication, computer scientists have conquered the lowest capability, ANI. In many ways, ANI is every place you look. The AI path is the progression from ANI, through AGI, and finally to ASI. Some question whether we will survive the last step.

Why AI Today - The Perfect Storm

Why is there so much hype today around artificial intelligence? There is most certainly interest. For example, the number of startups moving into this field is exploding, and there is big money being pushed into it as well, as shown in Figure 7 and Figure 8.

For example, after the AI ice age of the 70's through the 90's, Silicon Valley is coming back strong with companies like Scaled Inference Inc. According to Bloomberg, sixteen AI companies received first-round funding from 2010 to 2014.

In addition to the law of accelerating returns discussed in the section titled "The Technological Singularity", the answer is the perfect storm we are having today. There is a confluence of three major advancements in technology: Big Data, Cloud Computing and Deep Learning. These three recent breakthroughs have unleashed the long-awaited arrival of artificial intelligence that can really be used.


Figure 7 - Number of Startups over Time

Figure 8 - AI Funding Investments over Time

Inexpensive parallel computation

The way humans think is a parallel process: within our brains, in a form of deep learning, neurons in effect work together to make things happen. Building a parallel network for AI software to run on requires many different processes working together at the same time. For example, to recognize a picture or a human face, the AI system must see every pixel, break it down into fundamental elements, and make sense of it. It then needs to bring it all together, compare it with what it has processed in the past, and come to a conclusion as to what it is. Until recently, this has been very hard for computers to do. Computers have not been thinking and making decisions in parallel.


So, what is new? What is trending today is not only new software, but also new hardware. The new hardware is called a GPU, or Graphics Processing Unit, as shown in Figure 9. This hardware has been around for a while in the form of graphics cards for video gamers, who use it to move pixels around in real time in virtual reality worlds. It has augmented the more traditional CPU found in most PCs and servers. After 15 or so years of gaming, AI developers realized that this form of hardware would be perfect for parallel processing in other use cases such as deep learning. This paradigm shift in hardware created a new world, or platform, for AI developers. For example, a traditional CPU would take weeks to analyze a video or picture image that requires a 100-million-parameter deep learning network. A few GPUs can do the same thing in a few hours. That is why today companies like Facebook can find your friends' and relatives' images in real time, and Netflix can figure out what one hundred thousand users want to watch at the same time.

Figure 9 - GPU
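The benefit of running many small, identical operations at once can be illustrated, in a hedged way, without a GPU at all. The NumPy sketch below contrasts a Python loop with a single vectorized call that dispatches the same arithmetic to optimized parallel kernels; it is an illustrative stand-in for the GPU idea, not GPU code, and the array sizes are arbitrary:

```python
import time
import numpy as np

# A toy "layer": multiply a batch of inputs by a weight matrix.
inputs = np.random.rand(10000, 512)   # 10,000 samples, 512 features each
weights = np.random.rand(512, 256)    # one dense layer's parameters

# One sample at a time in a Python loop (serial, slow).
start = time.time()
out_loop = np.empty((10000, 256))
for i in range(10000):
    out_loop[i] = inputs[i] @ weights
print("looped:    ", time.time() - start, "seconds")

# The same math as one vectorized matrix multiply.
start = time.time()
out_vec = inputs @ weights
print("vectorized:", time.time() - start, "seconds")

assert np.allclose(out_loop, out_vec)  # identical results, very different speed
```

A GPU takes the same idea much further by applying one instruction to thousands of data elements in hardware, which is why it suits the pixel-by-pixel and neuron-by-neuron workloads described above.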

Big Data

Whether you are an artificial intelligence computer or a human, we all need to be taught. A human brain does this by remembering things it has seen in the past and trying to make comparisons or make sense of them. We categorize images. For example, we see things that have wheels and categorize them as something that moves, like a car. However, it still takes time and exposure to lots of them, from Chevys to Fords, to get good at it. That is how we learn: through repetition and seeing many examples. This is even more important with artificially intelligent computers. They need a large data set, in many cases far larger than humans can imagine.

As a result, part of the perfect storm is the fact that big data feeds the AI approach to learning. If you have a large enough data set, wonderful things can happen. From data lakes to structured and unstructured data pools, all of this is creating massive learning opportunities for AI systems. Every time you access the internet, you are being tracked. From web cookies to web searches, AI systems are pulling this information into their databases. Wikipedia and the entire digital world are teaching AI to be smart.


Better algorithms

OK, so we have the data and the hardware to make AI smart. The last and final piece of this perfect storm is better ways for software to manipulate the data. Deep learning has been around since the 1960's. It took years to learn the best way to create algorithms that work in a way similar to how our brain works, which involves large combinatorial links encompassing what would be 20 to 100 million neurons. What researchers eventually found is that if you create layers and stack the decision-making process, doing things like seeing a face in an image is now making computers able to distinguish individual people. For example, if one layer can ascertain what an eye is, it can pass that up to another level that can evaluate how many eyes there are, and then ultimately pass it up to an even higher level, which can determine that it is a face. This is the way deep learning algorithms work: they optimize the mathematical results from each abstraction layer so that the ability to learn improves as you move up the stack of layers. Then, when this software was ported to faster and better GPU hardware, it made AI systems even more efficient.

The interesting thing is this is just the start. You need more than GPU’s, your own big data, and algorithms. The additional factor is you also need the cloud. Cloud-based AI will help us achieve economies of scale never dreamed of which will become an increasingly ingrained part of our everyday life.

The Network Effect

AI is doing well. The law of increasing returns, also known as the "Network Effect", states that the value of a network grows faster as the network gets bigger. Think of it as a snowball rolling downhill. The way it works (and, by the way, this is the power of the Internet) is that the bigger the network is, the more people access it, and the more people who access it, the more useful it becomes, given the combined knowledge being accumulated. This loop continues over time. The law also holds true for AI, as shown in Figure 10. Smarter algorithms, fed with more data and driven by more users, drive the AI network effect. This loop will continue unabated. AI will get smarter, but there are concerns that the loop might have negative consequences. It could stifle competition and leave just a few major entities as cloud-based super giants.

Figure 10 - The AI Network Effect

Some believe that this is already happening with companies like Amazon, Google, and Microsoft.

A deeper Look at the algorithm

Here is an example of the power of an algorithm; they can do amazing things. By analyzing the tone of your voice, software algorithms can predict whether your relationship will lastix. In addition, according to the article, it does a better job than your friendly matchmaker does. Therefore, what is so important about an algorithm? Well, the algorithm is the engine that makes AI happen. AI is the confluence of Big Data, Machine learning algorithms and Cloud Resources as shown in Figure 11.

Figure 11 - What is AI

Will artificial intelligence (AI) be a single, magical supercomputer? Think HAL 9000 from 2001: A Space Odyssey, or the ship's computer on Star Trek. A few decades have passed since that film was created, and they almost had it right. However, how HAL would be built today is slightly different. Even though I love the 2001 HAL vision of what AI computers will be like, it is not exactly what happened. The future of AI, at least the way we see it today, is not about one super-giant, massively intelligent box; it is about many small, dedicated and highly focused services that understand their surroundings and yours. It is about contextual intelligence built on microservices. Similar to AI itself, the algorithms used are not much different from 20 years ago, and it is much easier to write and use them now. What, then, is the main driver? What we are finding is that what is really pushing this new AI approach is inexpensive, commodity and ubiquitous computing. We also need a wealth of data, and data center platforms that bring everything together. That platform is cloud computing.

For AI to be useful and change our lives, it needs all three: a good algorithm, millions of relevant data points, and computing power to process it quickly so it can drive actions in real-time. Lose any one of these and it is not nearly as useful in the modern world.


The point is that with these three parts, the way business creates value is changing. In effect, we are all becoming digital businesses, and AI is at the root of this disruption. So, rather than thinking of a computer that is all-thinking and all-knowing with a monolithic approach to delivery, think of billions of compact parts working together on limited and well-defined tasks, all grabbing massive amounts of data to get the job done. Monolithic architectures are becoming outdated. As shown in Figure 12, today and moving forward, distributed architectures, also known as microservices, will be how AI is delivered; a minimal sketch of one such focused service appears below, after Figure 12.

Figure 12 - From Monoliths to Microservices

Algorithms are changing our lives in so many ways. For example, municipalities are starting to autonomously change traffic patterns based on human usage. Changing weather or other real-time factors might change how and what we do in our daily lives. Web pages are constantly updating themselves based on who is visiting them. Your view will be different from another user's, since your interests and usage patterns are being learned by the AI running on those web servers.
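As a hedged illustration of the "many small, focused services" idea, here is a minimal sketch of a single-purpose microservice using only Python's standard library. The endpoint, the word list and the scoring rule are hypothetical stand-ins for a real model, not anything described in this paper:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical single-purpose service: score a snippet of text for "positive" words.
POSITIVE_WORDS = {"good", "great", "excellent", "love"}

def sentiment_score(text: str) -> float:
    """Tiny stand-in for a real ML model: fraction of positive words."""
    words = text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) / max(len(words), 1)

class ScoreHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = {"score": sentiment_score(payload.get("text", ""))}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # One small service, one narrow job; many such services compose a larger AI system.
    HTTPServer(("localhost", 8080), ScoreHandler).serve_forever()
```

The design point is that each service does one narrow job behind a simple interface, so it can be scaled, replaced or improved independently of the rest of the system.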

AI microservices are popping up all over the place. For example, software bundles such as the open source Apache Spark include libraries and runtime modules that provide much of what you need to get started in AI. Google has software called TensorFlow, which is at the heart of much of its machine learning work. This software is free and has been in the open source community for some time; the great thing about open source is that not only is it free, but it is available to everybody.

One can easily see that with the surge of open source tools and utilities that can be used by any developer, the potential for astronomical growth in AI can explode as part of a crowd sourcing approach to allow massive innovation. Many people tout this as an explosion of a second application or algorithm economy with no end in sight.


AI needs Data Scientists

Today, with cloud computing and everything it has to offer like massive amounts of CPU power, really fast networks and petabytes of data capacity, there are still challenges in getting AI right. Some AI software is free and open and these tools make it so much easier to get started.

What is really missing are good data scientists. Data scientists are people who usually have a strong background in computer science. They also commonly have deep knowledge of statistics and general mathematics. Really good ones also have a wide breadth of understanding of the business needs and challenges faced in the real world, as shown in Figure 13. You can imagine how difficult it is to find people with this knowledge base.

Figure 13 - What is a Data Scientist

There are many other hurdles to jump over, and one is how the data being used is formatted. Today, with the "Internet of Things", sensor data, social feeds and other data sources being unstructured in nature, it is difficult to figure out how to manage and use it. If your data center is still in the structured database world, it is going to be difficult to make the jump into the AI world. This is called moving from the 2nd platform of application development, also known as client/server, relational approaches and delivery, to the 3rd platform, which is web-scale microservices.

What makes a microservices approach work for AI development, and for web-scale apps in general, is that it is much better to have one process or microservice do a job really well versus having services that try to do it all and do not do anything well. It is called keeping your eye on the prize and getting the algorithm to become an expert in what it does well.

To have the best AI system out there, you need the best and up-to-date infrastructure that is available to use billions of microservices in the private, public or hybrid cloud.

New software/algorithms

Artificial intelligence requires algorithms to reason, solve problems, perceive, understand languages and, most importantly, learn. Figure 14 - Software Algorithm Taxonomy, below, shows a taxonomy of the plethora of approaches to machine learning.


Figure 14 - Software Algorithm Taxonomy

What is Machine Learning?

A machine learning system learns from experiences that are derived from training. From this training, an AI system makes generalizations that it can apply to a number of use cases. It is then able to execute procedures that make sense even in response to events it has not seen before. Machine learning uses big data analysis approaches such as predictive analysis and data mining.

Specific algorithms are used for this purpose, often organized into taxonomies, and use cases are based on the type of input required. Machine learning is also used to automate the acquisition of the knowledge bases used by expert systems. These systems emulate the decision-making and rule-induction processes of human experts in a particular field, and the general approach involves finding and exploiting regularities in training data.

Some methods of machine learning include using neural networks, genetic algorithms, rule induction, case-based learning, and analytic learning. During the development of AI, many of these approaches were used separately and independently. However, recently, these methods have been used together in more of a hybrid approach. This allowed for more effective models


used in AI development. The grouping of these various analytics can derive efficient, effective, and repeatable results.

Figure 15 - What is Machine Learning

Let us try to sum it up. Machine learning requires the following three things:

1. Representation – The data must allow a classifier to be represented in a formal language that a computer can handle and interpret.

2. Evaluation – This is done through a function or system call that distinguishes or evaluates good and bad classifiers.

3. Optimization – This is done by defining a method used to search among the candidate classifiers within the language or data to find the highest-scoring candidates within a training set.

The key approach of artificial intelligence and machine learning is to use less complex examples of a training set so that the AI system can make more effective decisions through inference and summation, yielding an ability to learn more quickly and accurately and to optimize every aspect of the learning process. Table 1 lists common mathematical techniques, algorithms, and methodologies that can be appliedx; a small sketch showing how the three components fit together follows the table.

Table 1 - The Three components of learning algorithms

Representation: Instances (K-nearest neighbor, Support vector machines); Hyperplanes (Naive Bayes, Logistic regression); Decision trees; Sets of rules (Propositional rules, Logic programs); Neural networks; Graphical models (Bayesian networks, Conditional random fields)

Evaluation: Accuracy / Error rate; Precision and recall; Squared error; Likelihood; Posterior probability; Information gain; K-L divergence; Cost / Utility; Margin

Optimization: Combinatorial optimization (Greedy search, Beam search, Branch and bound); Continuous optimization, unconstrained (Gradient descent, Conjugate gradient, Quasi-Newton methods) and constrained (Linear programming, Quadratic programming)
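To make the three components concrete, here is a minimal, illustrative sketch (my own toy example, not taken from the paper) in which the representation is a single threshold on one feature, the evaluation is accuracy, and the optimization is a simple exhaustive search over candidate thresholds:

```python
import numpy as np

# Toy training set: one feature per example, binary label.
X = np.array([0.2, 0.5, 0.9, 1.3, 1.7, 2.1])
y = np.array([0,   0,   0,   1,   1,   1])

# Representation: classifiers of the form "predict 1 if x > threshold".
def predict(threshold, x):
    return (x > threshold).astype(int)

# Evaluation: accuracy of a candidate classifier on the training set.
def accuracy(threshold):
    return np.mean(predict(threshold, X) == y)

# Optimization: search the candidate thresholds for the highest-scoring one.
candidates = np.linspace(X.min(), X.max(), 100)
best = max(candidates, key=accuracy)
print(f"best threshold = {best:.2f}, accuracy = {accuracy(best):.2f}")
```

Real systems swap in richer representations (the first column of Table 1), different evaluation functions (the second column) and far more efficient optimizers (the third column), but the division of labor is the same.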

Machine learning strategies

Machine learning can be done by applying specific learning strategies. These strategies include:

1. Supervised – This maps the data inputs and models them against desired outputs.

2. Unsupervised – This approach maps the inputs and models them while looking for new trends.

3. Derivatives – This approach combines the above two strategies, offering a semi-supervised approach.

These approaches provide many possible applications in which machine learning can be used to describe, prescribe, and discover what is going on within large volumes of diverse Big Data.
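A hedged, minimal contrast of the first two strategies, using a tiny synthetic two-dimensional data set (the data, the cluster count and the query point are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two blobs of points in 2-D.
blob_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
blob_b = rng.normal(loc=[3, 3], scale=0.5, size=(50, 2))
X = np.vstack([blob_a, blob_b])

# Supervised: labels are given, so we learn a mapping from inputs to outputs.
y = np.array([0] * 50 + [1] * 50)
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
def classify(point):
    return int(np.argmin(np.linalg.norm(centroids - point, axis=1)))
print("supervised prediction for (2.8, 3.1):", classify(np.array([2.8, 3.1])))

# Unsupervised: no labels; a toy k-means looks for structure (trends) on its own.
# (No empty-cluster handling; this is a sketch, not production code.)
k = 2
centers = X[rng.choice(len(X), k, replace=False)]
for _ in range(10):
    assign = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
    centers = np.array([X[assign == c].mean(axis=0) for c in range(k)])
print("unsupervised cluster centers:\n", centers)
```

The semi-supervised ("derivative") strategy sits between the two: a small labeled set anchors the model while the structure found in the unlabeled data refines it.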

How can computers be like us?

People are very good at solving tasks, being creative, and adapting to the world. AI is great at chess, but there is more to intelligence than that. In the early days, people thought that solving certain tasks (such as chess) would lead us to discover human-level intelligence algorithms. However, the solution to each task turned out to be much less general than people were hoping (such as doing a search over a huge number of moves).

What if computers can be creative, dream and experience the world like we do? Deep learning is an AI approach that has been developing for many years. Rather than programmers building new algorithms for each problem, you create systems that can morph like we do based on information, or in this case, data that we feed them.


They can be Creative

For example, as shown in Figure 16, deep learning systems can create a Van Gogh in less than an hourxi.

Figure 16 - Van Gogh replication

An algorithm has been developed that mimics the styles of some of history's greatest painters using "convolutional neural networks". By using image visualization techniques, images can be recomposed into artistic pieces just like those of the great painters. What makes this unique is that it allows neural networks to separate an image's style from its content.
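In the published style-transfer work this passage alludes to (Gatys et al.), "style" is commonly captured as the correlations between a convolutional layer's feature maps, a Gram matrix, while "content" is the feature maps themselves. A minimal NumPy sketch of that separation, using random arrays as stand-ins for real convolutional features (the shapes and loss weights are arbitrary assumptions):

```python
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) activations from one conv layer."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    # Channel-to-channel correlations summarize "style" (textures, brush strokes)
    # while discarding the spatial layout that carries "content".
    return flat @ flat.T / (c * h * w)

rng = np.random.default_rng(1)
content_features = rng.normal(size=(64, 32, 32))  # stand-in for a photo's features
style_features = rng.normal(size=(64, 32, 32))    # stand-in for a painting's features

# A style-transfer loss balances matching the content features directly
# against matching the style image's Gram matrix.
def transfer_loss(candidate, alpha=1.0, beta=1000.0):
    content_term = np.mean((candidate - content_features) ** 2)
    style_term = np.mean((gram_matrix(candidate) - gram_matrix(style_features)) ** 2)
    return alpha * content_term + beta * style_term

print("loss of a random candidate image:", transfer_loss(rng.normal(size=(64, 32, 32))))
```

In the full algorithm, the candidate image itself is optimized to lower this loss, which is what produces a photo rendered in a painter's style.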

They can Dream

Yes, AI systems can dream, as shown in Figure 17. Google created an AI system that uses a pattern recognition algorithm to look for arrangements in images, creating dreamy pictures of animals, buildings and landscapes that range from beautiful to a little scaryxii. But then again, some of our dreams are like that, veering from beautiful to terrifying.


Figure 17 - AI systems can dream

For example, the neural network can detect the boundaries in an image and morph them appropriately, making it look more like a painted picture. AI systems can also experience the world in many wonderful ways. We all know that we have had industrial robots for years.

They can experience the World

These industrial robots or systems are told exactly what to do by software developed by programmers, and they really don't learn. How do you get computers to learn the way children do? You do it by letting them experience the world. You put blocks, hammers and other tools in front of them and let them play, as shown in Figure 18; that play is learning.


Figure 18 - AI systems can play

For example, Berkeley's researchers and Google's DeepMind AI division created software that learned and mastered Atari games without instruction from humansxiii. The software simply plays around with a game and figures it out. It even discovers strategies that humans never thought of. This is the amazing thing about artificial intelligence: through the world we live in, it is learning not just the way we do things but also ways we never dreamed of.
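DeepMind's Atari results used deep Q-networks; as a much simpler, hedged illustration of the same learn-by-playing idea, here is tabular Q-learning on a made-up five-cell corridor where the only reward is reaching the rightmost cell (the environment, reward and hyperparameters are invented for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2           # corridor cells; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # learned value of each action in each state
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Toy environment: reward 1 only for reaching the rightmost cell."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore occasionally, otherwise exploit what has been learned so far.
        action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print("learned policy (0 = left, 1 = right):", np.argmax(Q, axis=1))
```

Nothing tells the agent that "right" is good; it discovers that entirely from trial, error and reward, which is the essence of the Atari result at a vastly larger scale.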

Why is Machine Learning important to businesses

Machine learning is becoming an imperative for businesses, especially for analytics.

For example, especially in the last 10 years, machine-learning technologies have been driving robot control, computer vision systems, and much more. It is also interesting how, in many fields, machine learning is escaping the science labs to reach commercial and business applications.

For example, AI through machine learning can play a key role in systems that need to adapt to operational needs or work with extensive and complex data sets. Machine learning methods are playing a larger role in enterprise software applications, especially for those types of applications that need in-depth data analysis and adaptability. These areas include analytics, business intelligence, and Big Data.

Business Intelligence and Big Data

As it relates to business use cases of AI, BI or Business Intelligence systems have been morphing in their creation and use for years. There are questions as to how accurate they are in their ability to provide valid and useful information to workers in various business verticals. The adoption of AI within businesses has modified the role of BI systems: not only are they used at the strategic level, but they have moved to the tactical and operational levels as well. AI is also helping BI systems deal with complex problems and situations. AI is enabling businesses to handle increasingly large amounts of data that would be costly and time consuming to analyze. With AI, businesses can address much higher cognitive problems that deal with prediction and hypothetical scenarios and, in the future, learn to make accurate suggestions. This form of analysis is largely untouched today. It will drive an explosion in data analytics in ways we have never seen before.

Decision support systems and applications have progressed through the following stages:

1. Model Driven. This approach focuses on the access and processing of financial and simulation models that can be optimized through AI. These models provide an efficient way to create simple quantitative models and use their functionality efficiently. A model-driven system can often do its job with minimal data sets provided by management and decision makers in an actionable way to analyze a situation; as a result, large databases are not required and, as mentioned previously, minimal data sets suffice.

2. Data Driven. At a high level, data-driven approaches to AI highlight the ability to use and modify time series of internal and possibly external corporate data, along with the possibility of real-time data. Data warehouse systems can be used that drive the manipulation of data by automated tools and processes that are specific to a task.

3. Communications Driven. Communications-driven decision support systems leverage network and communications approaches that create an environment which fosters relevant collaboration and communication. In this stage, communication technologies are a key architectural part of the decision-making approach.

4. Document Driven. This form of AI approach uses infrastructure, compute, network and storage to support document analysis. Possible input sources can be unstructured as well as structured data. In this AI use case, a search engine is the primary analysis tool. This form of decision support system is typically considered a text or metadata DSS.

5. Knowledge Driven. This form of decision support system focuses on offering recommended actions to management. This process relies heavily on human-machine interaction.

Along with disciplines like data mining, natural language processing, and others, machine learning is being seen in business as a tool of choice for transforming what used to be a business intelligence application approach into a wider enterprise intelligence or analytics platform or ecosystem. This goes beyond the traditional scope of BI, which focuses on answering "what is going on with my business?", to give answers to "why are we doing what we're doing?", "how can we do it better?" and even "what should we do?"

As business models are becoming more complex and producing massive amounts of data to be handled with less and less latency or “time to result”, decision support and BI systems are required to grow in complexity and in their ability to handle ever-increasing volumes of data. This demand is boosting the growth of more sophisticated solutions to address specific business and industry problems; it is not enough to sift out a straightforward result, systems need to provide business guidance.

Some scenarios where machine learning is gaining increased popularity in the context of analytics and BI can be found in applications for risk analysis, marketing analytics, and advanced analytics for Big Data sources.

Machine Learning in the Context of BI and Big Data Analytics

One of the reasons machine learning is becoming important to the business is that frequently changing conditions require algorithms that can adjust to those changes. An example of an application that must adapt is spam detection.

As the ever-encroaching deluge of data collection grows in volume, velocity, and variety (the three Vs of Big Data), and as the requirement for lower latency, or time to analysis, grows, new algorithms pop up to analyze these large data sets.
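As a small, illustrative sketch of an algorithm adjusting to changing conditions (the spam-like data stream below is synthetic and the decay factor is an arbitrary choice), an online, exponentially weighted estimate revises itself one message at a time, so its view drifts along with the data instead of requiring a full retrain:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stream: the underlying spam rate jumps from 10% to 60% halfway through.
stream = np.concatenate([rng.random(500) < 0.10, rng.random(500) < 0.60])

# Online update: each new message nudges the estimate, so the model keeps
# adapting as conditions change rather than being fit once and frozen.
estimate, decay = 0.0, 0.02
for i, is_spam in enumerate(stream):
    estimate += decay * (float(is_spam) - estimate)
    if i in (499, 999):
        print(f"after message {i + 1}: estimated spam rate = {estimate:.2f}")
```

Production systems apply the same streaming principle to full models (incremental gradient updates, sliding windows), which is what keeps spam filters and similar applications current as behavior shifts.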

Let’s take a look at how machine learning and AI are being used within the business by looking at Figure 19 - BI maturity model with new requirements.


Figure 19 - BI maturity model with new requirements

The decision phase can happen in two ways. The first is to decide how decisions are made: does the user make the decision or does the AI system? It is a question of delegation. Does the AI system learn and make recommendations, or does the human make the recommendation and the AI system validate it? By allowing the AI system to make the recommended decision, the approach lends itself to prediction-based analysis. This could lead to some form of messaging from the AI system to provide early warning of issues. Another use case is data discovery.

We can also see that, based on the model, there are other options available, as shown in Figure 20.


Figure 20 - BI maturity model with Machine learning decision automation

Opportunities and Challenges

The question is: why do you need AI within a business environment? From the perspective of how a business leverages big data and the analytics that are needed, using AI can drive value from data sources in new and amazing ways. For example, let's take a look at how AI can enhance decision management and drive business value using these approaches:

1. AI can drive current and future business analytics proficiencies, such as data mining and the resulting predictive analytics approaches, that deliver value by handling very complex problem solving. In many cases accuracy is the most important result of a big data analysis process.

2. By using AI, one can drive higher-level decision processes by executing flexible data discovery features such as distinguishing patterns, enabling more advanced search competencies, and enabling knowledge discovery by deriving correlations and many other high-level processes.

3. If one can detect and act on changes in the data patterns that come out of traditional or new BI and analytics systems, the business can be more agile by seeing, sooner rather than later, short-term trends that might otherwise have been missed. This adds business value by allowing the business to change direction quickly to beat the competition.


4. One can go a step further, though this approach would need to be watched closely. If the AI system can be set in a mode where it creates and follows through on autonomous decisions, at least at the beginning of a process, project or business initiative, this would enhance the decision process. This might be a worrisome approach, but it could yield positive results, optimizing the decision process in cases where the application can decide by itself.

Open Source Leading the Way

Open source projects and companies such as OpenAIxiv are bringing artificial intelligence to the masses. OpenAI is not just out to make money and corporate profits through maintenance and support; it is out to expand AI beyond its current state. Google, Facebook and others have all been doing the same.

Machine learning has been in the lab for years and the technology used is often open source. As a result, businesses can leverage the resources that can deal with these increasingly large and complex data sets of information.

Neural networks

Neural Networks and Deep Learning are two AI approaches that warrant their own discussion. Before we do that, let’s take a look at an example of what humans do very well.

Humans can easily see the numbers 504192 shown in Figure 21

Figure 21 - The Numbers to find

This might seem trivial, but it takes a lot of neurons in our visual cortex to make it seem like child's play. The question is: how do you write a program to recognize these numbers? To solve this problem, the neural network approach takes a large number of samples of handwritten numbers and letters and trains itself on them, using examples similar to those shown in Figure 22.


Figure 22 - The numbers to find

Neural networks infer their own rules autonomously from the samples they are given. This is where big data comes in: if you have a large training set, the accuracy of the network's ability to read, in this case, becomes so much better. So, how do you build a neural network? It has to do with perceptrons and sigmoid neurons.

Perceptrons

Perceptrons are what make a neural network do its job. A perceptron takes many binary inputs, x1, x2, …, and produces a single binary output, as shown in Figure 23:

Figure 23 - A Perceptron

The key to perceptrons is the concept of weights. Introducing weights w1, w2, …, which are real numbers expressing the importance of each input, enables the algorithm to adjust its output based on its learning set. To say it another way, the perceptron makes decisions based on "its perception", which comes from weighing the data. The human brain is far more complex than this example, but if one extends this model to hundreds or thousands of perceptrons and connects their outputs as shown in Figure 24, one has the basis for a neural network. Looking from left to right, if the outputs of the first layer of perceptrons become the inputs of the second layer, the perceptions, or weightings, become more granular and precise, offering higher levels of accuracy:

Figure 24 - Multiple Perceptrons
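A single perceptron is simple enough to write in a few lines. The sketch below is plain Python; the weights and threshold are illustrative values only, chosen so the first input dominates the decision.

def perceptron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the binary inputs exceeds the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# Three binary inputs x1, x2, x3 with weights expressing their relative importance.
weights = [6, 2, 2]
threshold = 5
print(perceptron([1, 0, 0], weights, threshold))   # 1: the heavily weighted input decides
print(perceptron([0, 1, 1], weights, threshold))   # 0: the lighter inputs are not enough

Stacking layers of units like this one, with the outputs of one layer feeding the inputs of the next, is exactly the structure sketched in Figure 24.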

What is Deep learning?

Networks with multi-layer structures that have two or more hidden layers are called deep neural networks. They are built through deep learning techniques that train the network using stochastic gradient descent and backpropagation. More layers generally help because a deep neural network can build a hierarchy of increasingly abstract features, similar to how our brains work, moving from simple weighted comparisons to more complex analysis.
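To make this concrete, here is a minimal sketch in plain Python with NumPy of a network with two hidden layers trained by stochastic gradient descent and backpropagation on a toy problem. The XOR task, the layer sizes, and the learning rate are illustrative choices, not anything prescribed by this paper.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # toy inputs (XOR)
y = np.array([[0], [1], [1], [0]], dtype=float)               # toy targets

sizes = [2, 8, 8, 1]                                          # input, two hidden layers, output
W = [rng.normal(0, 1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    i = rng.integers(len(X))                                  # "stochastic": one random sample
    a = [X[i:i+1]]
    for Wl, bl in zip(W, b):                                  # forward pass, layer by layer
        a.append(sigmoid(a[-1] @ Wl + bl))
    delta = (a[-1] - y[i:i+1]) * a[-1] * (1 - a[-1])          # output-layer error (squared loss)
    for l in reversed(range(len(W))):                         # backpropagate, then update
        grad_W, grad_b = a[l].T @ delta, delta
        delta = (delta @ W[l].T) * a[l] * (1 - a[l])
        W[l] -= 0.5 * grad_W
        b[l] -= 0.5 * grad_b

def predict(x):
    for Wl, bl in zip(W, b):
        x = sigmoid(x @ Wl + bl)
    return x

print(np.round(predict(X), 2))                                # should approach [[0] [1] [1] [0]]

Each extra hidden layer is just another weight matrix in the list, which is what makes the "deeper is richer" argument possible in the first place.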


Affective Computing

Do you think computers can be emotional? This is the art of Affective Computingxv, with the intent of recognizing, relating to, and influencing one's emotionsxvi. It combines engineering and computer science with psychology, cognitive science, neuroscience, sociology, education, psychophysiology, value-centered design, ethics, and more. The goals are to:

1. Build systems that can communicate affective-cognitive states through body sensors

2. Analyze parallel channels of information

3. Build systems that can evaluate frustration, stress, and mood through conversation and interaction

4. Implement technologies for improving self-awareness

5. Pioneer studies examining ethical issues in affective computing.

Emotions are a cornerstone of human experience; they influence perception and cognition, and they affect everyday tasks such as learning, communication, and even rational decision-making. Research is underway to use AI and big data to better understand how systems can provide a better emotional experience. Wearable sensors based on this technology are being developed that can offer new ways to reduce frustration and avoid stressful situations.
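As a purely illustrative sketch of goal 3 above, the snippet below estimates a user's stress level from wearable-sensor readings. Python with scikit-learn is assumed, and the features (mean heart rate, skin conductance) and the tiny labeled set are hypothetical stand-ins for real psychophysiological data, not an actual affective-computing system.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [mean heart rate (bpm), skin conductance (microsiemens)]
X = np.array([[62, 2.1], [70, 3.0], [95, 8.5], [102, 9.2], [68, 2.8], [110, 11.0]])
y = np.array(["calm", "calm", "stressed", "stressed", "calm", "stressed"])

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[98, 7.9]]))               # likely "stressed" on this toy data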

Researchers in the field bring together individuals with a diversity of technical, artistic, and human abilities in a collaborative spirit to push the boundaries of what can be achieved to improve the human affective experience with technology.

New hardware

We know that AI and deep learning are built on algorithms but, for those algorithms to run, at some point you need hardware. There have been some amazing advancements in hardware architectures over the last few years. We have already discussed GPUs, but let's take a look at other advancements.


Neuromorphic chips

The conventional architecture that has driven computing for many years, the von Neumann architecture, is running into fundamental limits. Chips built on it often hit what is referred to as the von Neumann bottleneck, a limit on the throughput between the processor and memory, as shown in Figure 25. Although processor speeds have increased, there has not been a corresponding increase in memory transfer rates, only in memory density. How much power the chip consumes is another design constraint.

Why is there renewed interest in neural networks and in better ways to build them? Many existing implementations or simulations of neural networks run on von Neumann architectures that perform operations sequentially, but true neural networks, such as the human brain, process many operations in parallel. Implementing complex functionality such as facial or image recognition on a sequential system requires high processing power, often from expensive systems such as supercomputers.

Figure 25 - Architectural Differences
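The sketch below, assuming Python with NumPy, gives a small software-level analogy for the sequential-versus-parallel point: the same neural-network layer computed one multiply-add at a time versus as a single matrix product that optimized, parallel hardware can accelerate. It is only an analogy for the hardware argument, not a model of a neuromorphic chip.

import time
import numpy as np

rng = np.random.default_rng(0)
n = 1024
x = rng.random(n)                              # one input vector
W = rng.random((n, n))                         # weights for n neurons

t0 = time.perf_counter()
out_seq = [sum(W[k, j] * x[k] for k in range(n)) for j in range(n)]   # one operation at a time
t1 = time.perf_counter()
out_par = x @ W                                # one vectorized, parallelizable product
t2 = time.perf_counter()

print("one operation at a time: %.3f s   single matrix product: %.5f s" % (t1 - t0, t2 - t1))
print("same result:", np.allclose(out_seq, out_par))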

Are there computer chips better suited to emulating neural networks? Yes; they are called neuromorphic chips, and they have been available for more than two decades, but their use has been limited until recently. The first commercially available neuromorphic chip came from IBM in 1993 as the result of a collaborative effort: "A small independent team approached IBM with an idea to develop a silicon neural network chip called the ZISC (Zero Instruction Set Computer); (this) team had prior experience building software neural networks for pattern recognition for a particle smasherxvii". The chip was sold for eight years by IBM and General Vision Inc., but in 2001 IBM discontinued its involvement. General Vision continued development because it believed the chip would have practical benefits. "In 2007 they launched the Cognitive Memory 1000 (CM1k): a neuromorphic chip with 1,024 artificial neurons working in parallel while consuming only 0.5 Watts and able to recognize and respond to patterns in data (images/code/text/anything) in just a few secondsxviii". That work has led to other important developments.

SyNAPSE Program

In 2008, the Defense Advanced Research Projects Agency (DARPA) announced the SyNAPSE program, which stands for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. The goal of the program is “to help develop electronic neuromorphic machine technology that scales to biological levels … The ultimate aim is to build an electronic microprocessor system that matches a mammalian brain in function, size, and power consumption. It should recreate 10 billion neurons, 100 trillion synapses, consume one kilowatt, and occupy less than two liters of spacexix.”

When the program started, contracts were awarded to both IBM and HRL Labs, working with several universities, to research and develop neuromorphic chips.

IBM’s TrueNorth Neuromorphic “Brain” Chip

In 2014, IBM announced a new neuromorphic chip called TrueNorth. The design of this chip is different from the typical von Neumann architecture and is more aligned with how the brain works. "TrueNorth has a parallel, distributed, modular, scalable, fault-tolerant, flexible architecture that integrates computation, communication, and memory and has no clock. It is fair to say that TrueNorth completely redefines what is now possible in the field of brain-inspired computers, in terms of size, architecture, efficiency, scalability, and chip design techniquesxx."

According to the IBM article, the von Neumann architecture, as shown in Figure 26 - Traditional vs. Neurosynaptic, has similarities to the human left-brain with its fast, symbolic, and number-crunching capabilities. In contrast, the TrueNorth chip has similarities to the right-brain with its ability to sense things and recognize patternsxxi.


Figure 26 - Traditional vs. Neurosynaptic

Dharmendra Modha, an IBM Fellow, describes TrueNorth as follows: "The architecture can solve a wide class of problems from vision, audition, and multi-sensory fusion, and has the potential to revolutionize the computer industry by integrating brain-like capability into devices where computation is constrained by power and speedxxii".

As shown in Figure 27 - AI System Example, IBM has built, in a lab environment, a digital equivalent of a small rodent's brain using TrueNorth chips, which are designed to behave like neurons. IBM recently held a session giving researchers from academia and government access to this neural network, which has resulted in "software that can identify images, recognize spoken words, and understand natural language. Basically they're using the chip to run deep learning algorithms, the same algorithms that drive the internet's latest AI services, including face recognition on Facebook and the instant language translation on Microsoft's Skype".


Figure 27 - AI System Example

"The promise is that IBM's chips can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI into phones and other tiny devices, including hearing aids and, well, wristwatches." The work is promising, especially since the architecture is simpler and consumes less power, but there is still much research to be donexxiii.

Another interesting point: "It's also important to mention that data isn't being sent back and forth over the TrueNorth network. Instead, companies are able to upload their own deep-learning model to a data server, then run the algorithm." (http://www.wired.com/2015/08/ibms-rodent-brain-chip-make-phones-hyper-smart/ by Cade Metz, 08/17/2015)

How will AI change the world?

We have shown that AI and machine learning are popping up everywhere in our lives. The big question is: how will AI change the world? This next section looks at some of the advancements that are possible and how AI will change our lives. Most of these changes, I believe, are good; however, some aspects could have negative effects.


The Master Algorithm

It all comes down to the fact that machine learning is about getting the computer to learn on its own. Many believe that the real way to do that is to develop the "Master Algorithm". The next questions are: what is it, and how long until we get it? A Master Algorithm is a process that can learn anything from data. As discussed, there are many ways to get AI to learn, by emulating how the brain works or through other approaches, and many believe the Master Algorithm will ultimately be a combination of them. For example, if you give the Master Algorithm information on how the planets, stars, and asteroids move, it can figure out what Sir Isaac Newton discovered about the laws of gravity.
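As a toy illustration of "finding a law in the data", assuming Python with NumPy: given only planetary distances and orbital periods, a simple regression in log space recovers Kepler's third law (period squared proportional to distance cubed), the kind of regularity Newton went on to explain with gravity. This is of course a far cry from a true Master Algorithm.

import numpy as np

a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])      # semi-major axis (AU)
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])    # orbital period (years)

slope, intercept = np.polyfit(np.log(a), np.log(T), 1)        # fit log T = slope * log a + c
print("learned exponent: %.2f (Kepler's law predicts 1.5)" % slope)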

Another example: given DNA information, it could determine the details of the double helix. Cancer has been with us since the dawn of time; what if the Master Algorithm could help cure cancer by looking at data from billions of people and figuring out better ways to diagnose it and ultimately eradicate it? The idea is to get humans out of the loop of diagnosis and analysis and allow AI machines that can learn to work through huge amounts of information and the complex processes that go with it. By doing so, it could deliver better drugs and procedures than people can develop in a lab environment.

For example, one virus that has eluded a cure is HIV, the virus that causes AIDS. This type of virus is so hard to kill because it tends to mutate quickly; what worked last week will not work today. What researchers are finding is that if you attack the virus from many different directions, and do it quickly based on the latest mutation data and environmental factors, medicine can be created to kill it.

The problem is that doing this takes the processing power of a data center and the problem-solving expertise of a thousand doctors acting quickly. If computers had the learning skills of a human, we could spawn the equivalent number of doctors needed to get the job done; call it your Doctor Data Center. This approach, by the way, is similar to how cancer cures are being addressed. For cancer, since each drug is customized to a particular cancer, no individual drug is going to cure everything. So an AI system, using machine learning, could take the cancer's genomic fingerprint, the patient's genome, and medical, environmental, and physical data, and pick the right drug or set of drugs to kill that cancer. A further step would be to create a new drug customized to the patient; with the wonders of custom manufacturing, such as 3D printing, this option is not far away.

This leads to another problem: the sharing of data. Today, given the confidentiality rights attached to personal medical records, getting the data is difficult. Hopefully there will be ways to obtain this data while preserving confidentiality, for example through aggregation methods, that would make all of this possible.


Where the jobs will truly be

This has been discussed previously, but it is important to understand what the future holds, since I am assuming we all want jobs. Or do we? What if I told you that the jobs of tomorrow are not highly educated professions like traditional medical doctors or lawyers? This is a very controversial statement.

The reason is that much of what doctors and lawyers do consists of the same things over and over again, what are known as repetitive tasks, and anything that is repetitive in nature, computers can do much better. I am not talking about tasks like hammering a nail, but any process that can be figured out with lots of data and a good algorithm. Looking toward the future, the jobs that will not go to self-learning AI robots are things like construction work, cleaning bathrooms, and fixing mechanical equipment.

For example, returning to the battle to eradicate cancer through good diagnosis, a machine-learning computer can take a relatively simple algorithm and a data lake of patient records and learn to diagnose lung cancer or breast cancer more effectively than any human doctor. If a profession requires routine or repetitive mental work, and in many cases medical doctors, especially specialists, diagnose the same things over and over, that is a red flag that the work can turn to an AI computer.
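Below is a heavily simplified sketch of that diagnostic idea, assuming Python with scikit-learn. The "patient records" are synthetic and the feature names are hypothetical; a real system would train on a data lake of de-identified clinical records and would require rigorous clinical validation before anyone relied on it.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: [age, pack-years smoked, nodule size (mm)]
X = np.column_stack([rng.normal(60, 10, n), rng.gamma(2, 10, n), rng.normal(8, 4, n)])
risk = 0.04 * (X[:, 0] - 60) + 0.05 * X[:, 1] + 0.3 * (X[:, 2] - 8)
y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)             # 1 = malignant (synthetic label)

model = LogisticRegression(max_iter=1000)
print("cross-validated accuracy: %.2f" % cross_val_score(model, X, y, cv=5).mean())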

What, then, is a job the computer can't do, at least in the foreseeable future? The answer is any job that involves interacting with the physical world around us. If you need your arms, feet, ears, and hands to do a job that can change ever so slightly and that requires common sense, which is really what makes us human, AI computers have a hard time doing it. We often forget this, because for us common sense is innate and we apply it with ease.

National security and AI

Security, especially national security, is one of the issues changing the world today. AI and machine learning could be construed as a threat to national security; after all, the bad guys could use this form of technology. But let's face it, we all know the NSA and other US government agencies are using it. Even if the NSA were allowed to see every data communication in the world, the problem is how to analyze every possible communication, even if a computer is told which key phrases or other metadata to listen for. Again, AI and machine learning have an answer for this challenge. The NSA obviously does not have an army of analysts, but with machine learning algorithms, as a system listens to conversations it will not only pick up keywords, it will understand and learn from the conversations in real time. It will, in effect, do what human analysts do, using thousands of instances of AI systems. This raises the point that, if we are not careful, any country could become a surveillance state, which could yield negative consequences.

The upside, however, is that AI and machine learning can put a lot of empowerment in our hands. AI and machine learning are being open sourced, so, similar to security tools such as PGP (Pretty Good Privacy) and other encryption methods, AI can be a tool that helps individuals keep their privacy. The important point is that we all need to know what AI and machine learning can do and make sure they are being used for the good of humanity.


What does the Future Hold?

Please launch Part 2 of the video attachment. To see it, run Part 2 of the video in the PDF or the video file included with this paper…

Summary

Many believe that AI and deep learning will solve some of the hardest problems in the world. They will allow humans to address issues of waste management, population growth, climate change, education, and many others.

AI and big data go together, and the fuel is cloud computing. The rapid change in AI means this technology can help engineers and scientists understand the subtle links between the causes and effects of what we do every day. With the vast quantities of data being gathered, AI will be the team member that guides us through the massive amounts of information.

AI can also help companies and businesses in the design of personal assistants. Humans will partner with AI systems, and the symbiosis will hopefully be amazing. Companies like Google and Facebook are investing time and money in AI because they see its importance. Google used to be a search company; now it is becoming an "intelligence" company that happens to have a lot of data about the world and us. Other businesses are not standing still: Microsoft, IBM, Baidu Inc., and others are also investing in major areas of AI and deep learning.


AI is becoming so important to businesses and the world that the algorithms and data are becoming open. When that happens, standardized approaches will be created to push the curve of accelerated learning even further. HAL has come a long way, and I will always remember being in awe watching the movie 2001: A Space Odyssey, with its amazing AI computer singing the song "Daisy"xxiv. Listen to the song now - https://www.youtube.com/watch?v=OuEN5TjYRCE

Author’s Biography

Paul Brant is a Senior Education Technology Consultant at Dell EMC in the Emerging Trends Group, based in Franklin, MA. He has over 29 years' experience in semiconductor design, board-level hardware and software design, IT technical pre-sales solution selling, marketing, and educational development and delivery. He also holds a number of patents in the data communication and semiconductor fields. Paul has a Bachelor's (BSEE) and Master's Degree (MSEE) in Electrical Engineering from New York University (NYU), as well as a Master's in Business Administration (MBA) from Dowling College, located in Suffolk County, Long Island, NY. In his spare time, he enjoys his family of five, bicycling, and various other endurance sports. Certifications include EMC Proven Cloud Architect Expert, Technology Architect, NAS Specialist, VMware VCP5, Cisco CCDA, and CompTIA Security+.


Appendix A – References

i http://www.vanityfair.com/news/2016/01/mark-zuckerberg-resolution-ai-max
ii http://singularityhub.com/2015/10/12/ray-kurzweils-wildest-prediction-nanobots-will-plug-our-brains-into-the-web-by-the-2030s/
iii Rizzolatti, G. and Craighero, L. 2004. The mirror-neuron system. Annual Review of Neuroscience 27, pp. 169–192.
iv Schultz, Dayan and Montague 1997; Brown, Bullock and Grossberg 1999; Seymour et al. 2004.
v http://www.ml.com/publish/content/application/pdf/GWMOL/AR9D50CF-MLWM.pdf
vi http://mobile.businessinsider.com/almost-half-of-all-us-workers-could-lose-their-jobs-to-robots-2015-11
vii http://www.theguardian.com/technology/2015/nov/05/robot-revolution-rise-machines-could-displace-third-of-uk-jobs
viii http://www.wired.com/insights/2014/02/artificial-intelligence-way-forward-personal-finance/
ix https://pressroom.usc.edu/words-can-deceive-but-tone-of-voice-cannot/
x http://www.hlt.utdallas.edu/~vgogate/tutorials/bd-talk.pdf
xi http://www.wired.co.uk/news/archive/2015-09/01/art-algorithm-recreates-paintings
xii http://www.theguardian.com/technology/2015/jun/18/google-image-recognition-neural-network-androids-dream-electric-sheep
xiii http://www.bloomberg.com/features/2015-preschool-for-robots/
xiv https://openai.com/blog/introducing-openai/
xv http://affect.media.mit.edu/
xvi http://www.inc.com/issie-lapowsky/4-big-opportunities-artificial-intelligence.html
xvii www.artificialbrains.com/darpa-synapse-program
xviii www.artificialbrains.com/darpa-synapse-program
xix www.artificialbrains.com/darpa-synapse-program
xx http://www.research.ibm.com/articles/brain-chip.shtml
xxi http://www.research.ibm.com/articles/brain-chip.shtml
xxii http://m.ibm.com/http/research.ibm.com/cognitive-computing/neurosynaptic-chips.shtml
xxiii http://www.techtimes.com/articles/77610/20150818/ibm-creates-neuromorphic-chip-powerful-rodents-brain.htm
xxiv http://www.blastr.com/2011/07/little known_sci_fi_fact.php

Dell EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." DELL EMC MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying and distribution of any Dell EMC software described in this publication requires an applicable software license.

Dell, EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries.
