
Deep Learning Explained: What It Is, and How It Can Deliver Business Value to Your Organization


CHAPTER 1 | What is the difference between artificial intelligence, machine learning, and deep learning?

NVIDIA DEEP LEARNING

ARTIFICIAL INTELLIGENCE IS... THE FUTURE. FICTION. PART OF OUR EVERYDAY LIVES.

Since an early flush of optimism in the 1950s, smaller subsets of artificial intelligence – first machine learning, then deep learning, a subset of machine learning – have created ever larger disruptions.

When DeepMind’s AlphaGo program defeated South Korean Go master Lee Se-dol this year, the terms AI, machine learning, and deep learning were used in the media to describe how DeepMind won. And all three are part of the reason why AlphaGo trounced Lee Se-dol. But they are not the same things.

The easiest way to think of their relationship is to visualize them as concentric circles, with AI – the idea that came first – the largest, then machine learning – which blossomed later – and finally deep learning – which is driving today’s AI explosion – fitting inside both.

The easiest way to think of their relationship is to visualize them as concentric circles...

[Figure: Three concentric circles spanning the 1950s through the 2010s. Artificial intelligence, the outermost circle: early artificial intelligence stirs excitement. Machine learning: machine learning begins to flourish. Deep learning, the innermost circle: deep learning breakthroughs drive AI boom.]

AI HAS BEEN PART OF OUR IMAGINATIONS AND SIMMERING IN RESEARCH LABS...

...since a handful of computer scientists rallied around the term at the Dartmouth Conferences in 1956 and birthed the field of AI. In the decades since, AI has alternately been heralded as the key to our civilization’s brightest future, and tossed on technology’s trash heap as a harebrained notion of over-reaching propeller-heads. Frankly, until 2012, it was a bit of both.

Over the past few years artificial intelligence (AI) has exploded, especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (that whole Big Data movement) – images, text, transactions, mapping data, you name it.

Computer scientists have moved from something of a bust – until 2012 – to a boom that has unleashed applications used by hundreds of millions of people every day.

[Chart: Organizations engaged with NVIDIA on deep learning grew from 1,549 in 2014 to 19,439 in 2016, spanning higher education, development tools, internet, automotive, finance, government, life science, and other industries.]

READ Ovum’s report on how AI solutions can be applied to today’s business use cases.

Back at that summer-of-’56 conference, the dream of those AI pioneers was to construct complex machines – enabled by emerging computers – that possessed the same characteristics as human intelligence.

This is the concept that we think of as “General AI” – fabulous machines that have all of our senses (maybe even more), all our reason, and think just like we do. You’ve seen these machines endlessly in movies, as friend – C3PO – and foe – The Terminator.

Computer programs that played checkers were among the earliest examples of artificial intelligence, stirring an early wave of excitement in the 1950s.

General AI machines have remained in the movies and science fiction novels for good reason; we can’t pull it off, at least not yet.

What we can do falls into the concept of “Narrow AI”: technologies that are able to perform specific tasks as well as, or better than, we humans can. Examples of narrow AI are things such as image classification on a service like Pinterest and face recognition on Facebook. Those are examples of Narrow AI in practice. These technologies exhibit some facets of human intelligence. But how? Where does that intelligence come from? That gets us to the next circle, machine learning.

Machine learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the task.

Machine learning came directly from the minds of the early AI crowd, and the algorithmic approaches over the years included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others.
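To make “parse data, learn from it, and then make a prediction” concrete, here is a deliberately tiny sketch, not anything NVIDIA ships and not a real production algorithm. The data, labels, and feature names are all invented for illustration; it “learns” by averaging labeled feature vectors, and predicts by picking the nearest average:

```python
# Toy machine-learning loop: learn from labeled data, then predict.

def train(examples):
    """Learn one centroid (average feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s] for label, s in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the new point."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Invented data: [width, height] measurements for two made-up classes.
training_data = [
    ([2.0, 1.0], "small"), ([2.2, 1.1], "small"),
    ([8.0, 6.0], "large"), ([7.5, 6.3], "large"),
]
model = train(training_data)
print(predict(model, [2.1, 0.9]))  # prints "small": nearest learned centroid
```

The point is the shape of the process, not the algorithm: no rule for “small” or “large” was ever hand-coded; the determination comes entirely from the data the model was shown.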

ONE OF THE VERY BEST APPLICATION AREAS FOR MACHINE LEARNING FOR MANY YEARS WAS COMPUTER VISION...

...though it still required a great deal of hand-coding to get the job done. People would go in and write hand-coded classifiers like edge detection filters so the program could identify where an object started and stopped; shape detection to determine if it had eight sides; a classifier to recognize the letters “S-T-O-P.”

From all these hand-coded classifiers they would develop algorithms to make sense of the image and “learn” to determine whether it was a stop sign. Good, but not mind-bendingly great. Especially on a foggy day when the sign isn’t perfectly visible, or a tree obscures part of it. There’s a reason computer vision and image detection didn’t come close to rivaling humans until very recently: it was too brittle and too prone to error.

Time, and the right learning algorithms, made all the difference.

Deep Learning – A Technique for Implementing Machine Learning

Another algorithmic approach from the early machine-learning crowd, artificial neural networks, came and mostly went over the decades. Neural networks are inspired by our understanding of the biology of our brains – all those interconnections between the neurons. But, unlike a brain, where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation.

You might, for example, take an image and chop it up into a bunch of tiles that are inputted into the first layer of the neural network. In the first layer, individual neurons pass the data to a second layer. The second layer of neurons does its task, and so on, until the final layer and the final output is produced.

Each neuron assigns a weighting to its input – how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of our stop sign example. Attributes of a stop sign image are chopped up and “examined” by the neurons – its octagonal shape, its fire-engine-red color, its distinctive letters, its traffic-sign size, and its motion or lack thereof.

The neural network’s task is to conclude whether this is a stop sign or not. It comes up with a “probability vector,” really a highly educated guess, based on the weightings. In our example the system might be 86% confident the image is a stop sign, 7% confident it’s a speed limit sign, 5% confident it’s a kite stuck in a tree, and so on – and the network architecture then tells the neural network whether it is right or not.

Even this example is getting ahead of itself, because until recently neural networks were all but shunned by the AI research community. They had been around since the earliest days of AI, and had produced very little in the way of “intelligence.” The problem was that even the most basic neural networks were very computationally intensive; it just wasn’t a practical approach. Still, a small heretical research group led by Geoffrey Hinton at the University of Toronto kept at it, finally parallelizing the algorithms for supercomputers to run and proving the concept, but it wasn’t until GPUs were deployed in the effort that the promise was realized.
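The layers-and-weightings idea above can be sketched in a few lines of Python. This is a toy, invented illustration: the weights are random placeholders rather than learned values, and the “pixels” and labels are made up. A real network would learn its weightings, which is exactly what the next section describes.

```python
# A minimal forward pass through a tiny fully connected network,
# ending in a probability vector over possible labels.
import math
import random

random.seed(0)

def layer(inputs, n_out):
    """One dense layer: a weighted sum of the inputs per output neuron.
    Here the weightings are random stand-ins, not trained values."""
    return [
        sum(x * random.uniform(-1, 1) for x in inputs)
        for _ in range(n_out)
    ]

def softmax(scores):
    """Turn raw scores into a probability vector that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Pretend "image": a handful of pixel intensities chopped into tiles.
pixels = [0.9, 0.1, 0.8, 0.4]
hidden = layer(pixels, 5)   # first layer passes its results onward
scores = layer(hidden, 3)   # final layer: one raw score per label
probs = softmax(scores)     # the "probability vector" from the text

for label, p in zip(["stop sign", "speed limit", "kite"], probs):
    print(f"{label}: {p:.0%}")
```

With untrained random weights the confidences are meaningless; the 86%-stop-sign style of output in the text only emerges after the weightings have been tuned on data.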

If we go back again to our stop sign example, chances are very good that as the network is getting tuned or “trained” it’s coming up with wrong answers – a lot. What it needs is training. It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time – fog or no fog, sun or rain. It’s at that point that the neural network has taught itself what a stop sign looks like; or your mother’s face in the case of Facebook; or a cat, which is what Andrew Ng did in 2012 at Google.

Ng’s breakthrough was to take these neural networks and essentially make them huge – increase the layers and the neurons – and then run massive amounts of data through the system to train it. In Ng’s case it was images from 10 million YouTube videos. Ng put the “deep” in deep learning, which describes all the layers in these neural networks.
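What “training” means above can be sketched with the simplest possible network: a single artificial neuron whose weightings get nudged after every wrong answer (the classic perceptron update). This is an invented, minimal illustration with made-up features, not how any production system is built; deep networks adjust millions of weightings, but by the same basic show-an-example, correct-the-weights loop.

```python
# Train one neuron by repeatedly correcting its weightings.

def train_neuron(examples, passes=20, rate=0.1):
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(passes):                 # see the data many times over
        for features, target in examples:   # target: 1 = stop sign, 0 = not
            total = sum(w * x for w, x in zip(weights, features)) + bias
            guess = 1 if total > 0 else 0
            error = target - guess          # 0 when the guess was right
            weights = [w + rate * error * x for w, x in zip(weights, features)]
            bias += rate * error
    return weights, bias

def classify(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Invented features: [redness, octagon-ness]; labels are made up too.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.2, 0.1], 0), ([0.1, 0.3], 0)]
w, b = train_neuron(data)
print(classify(w, b, [0.85, 0.9]))  # prints 1: flags stop-sign-like input
```

Early in training the weights are all zero and every answer is wrong; each correction shifts them a little, until the examples are classified reliably. Scale, not a different idea, is what the “deep” in deep learning adds.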

Today, image recognition by machines trained via deep learning is in some scenarios better than humans, and that ranges from identifying cats to identifying indicators for cancer in blood and tumors in MRI scans. Google’s AlphaGo learned the game, and trained for its Go match – it tuned its neural network – by playing against itself over and over.

[Video: Andrew Y. Ng’s lecture on deep learning, self-taught learning, and unsupervised feature learning.]

AI IS THE PRESENT AND THE FUTURE. With Deep Learning’s help, AI may even get to that science fiction state we’ve so long imagined.

MICHAEL V. COPELAND, AUTHOR

A long-time journalist based in Silicon Valley, Michael has been in the thick of technological change since the web took hold. Writing and editing for such outlets as WIRED, Fortune, and Business 2.0, Michael has been a part of identifying and explaining some of the most monumental (and monumentally stupid) trends in technology from the time Netscape was a thing. He helped lead editorial efforts as a partner at the venture capital firm Andreessen Horowitz and is now a partner at Story Made Good.