
Cognitive Computing


AI in 45 minutes – How AI Shapes our Future of Production

Aachener ERP-Tage 2016: Planning and Control 4.0 – The Convergence of ERP and MES

June 16th, 2016

Univ.-Prof. Dr. rer. nat. Sabina Jeschke

IMA/ZLW & IfU Faculty of Mechanical Engineering RWTH Aachen University

www.ima-zlw-ifu.rwth-aachen.de

Outline

I. Introduction
. The rise of AI… and its relation to 4.0
. Entering the scene: intelligent self-learning systems
II. The basics of machine learning and their applications
. Data-driven methods: supervised and unsupervised learning
. Trial-and-error driven methods: neuroevolution
. Probabilistic engines
. Deep learning – a powerful tool for “both sides”
. Where the story goes: AlphaGo and other stories
. Machines getting creative
III. The brain projects
. To be or not to be …a bird!
. The death of Moore’s law
. The limitations of the Von Neumann architecture
. Neuromorphic computing
IV. Summary and Outlook
. The concept of cognitive computing
. The embodiment theory and its implications for your “colleague the robot”
. The END!

… leading to the 4th industrial (r)evolution... Breakthroughs – A new era of artificial intelligence

Communication technology: bandwidth and computational power
Embedded systems: miniaturization
Semantic technologies: information integration
Artificial intelligence: behavior and decision support

(Images: 2011; Google Car 2012)

 Systems of “human-like” complexity

… leading to the 4th industrial (r)evolution... Breakthroughs – Everybody and everything is networked

Communication technology: bandwidth and computational power
Embedded systems: miniaturization
Semantic technologies: information integration
Artificial intelligence: behavior and decision support

Examples: Car2Infrastructure, Swarm Robotics, Team Robotics, Smart Grid, Smart Factory

The fourth industrial (r)evolution – “Information Revolution”

Everybody and everything is networked. - Big Data & Cyber-Physical Systems

“Internet of Things & Services, M2M or Cyber Physical Systems are much more than just buzzwords for the outlook of connecting 50 billion devices by 2015.” Dr. Stefan Ferber, Bosch (2011)

Vision of the Wireless Next Generation System (WiNGS) Lab at the University of Texas at San Antonio, Dr. Kelley
Weidmüller, Vision 2020 – Industrial Revolution 4.0 (intelligently networked, self-controlling manufacturing systems)

„local“ to „global“

around 1750 – 1st industrial revolution: mechanical production systematically using the power of water and steam
around 1900 – Power revolution: centralized electric power infrastructure; mass production by division of labor
around 1970 – Digital revolution: digital computing and communication technology, enhancing systems’ intelligence
today – Information revolution: everybody and everything is networked – networked information as a “huge brain”

… towards a networked world – And how do these systems work?

Communication technology: bandwidth and computational power
Embedded systems: miniaturization
Semantic technologies: information integration
?? Steering – Controlling ??

Towards intelligent and (partly) autonomous systems AND systems of systems

around 1750 – 1st industrial revolution: mechanical production systematically using the power of water and steam
around 1900 – Power revolution: centralized electric power infrastructure; mass production by division of labor
around 1970 – Digital revolution: digital computing and communication technology, enhancing systems’ intelligence
today – Information revolution: everybody and everything is networked – networked information as a “huge brain”

Outline

I. Introduction
. The rise of AI… and its relation to 4.0
. Entering the scene: intelligent self-learning systems
II. The basics of machine learning and their applications
. Data-driven methods: supervised and unsupervised learning
. Trial-and-error driven methods: neuroevolution
. Probabilistic engines
. Deep learning – a powerful tool for “both sides”
. Where the story goes: AlphaGo and other stories
. Machines getting creative
III. The brain projects
. To be or not to be …a bird!
. The death of Moore’s law
. The limitations of the Von Neumann architecture
. Neuromorphic computing
IV. Summary and Outlook
. The concept of cognitive computing
. The embodiment theory and its implications for your “colleague the robot”
. The END!

Towards machine learning – Machines and learning

Can machines learn? Can they learn to predict future states and to perform tasks correctly and in an optimized way? And if so, how do they do it?  This is what this talk is about!

 How do machines learn?

A – Learning by observations and explanations  Data-driven learning
B – Learning by doing  Trial-and-error learning

Let us take a look at a first example of data-driven learning! (Trial-and-error learning: later…)

Data-driven learning – supervised: A first example – learning from guided observations
Do you remember your childhood heroes – “The Mario Brothers” by Nintendo? So let us write down our observations (and gather some training data)

pos_x | on_ground | action     | status
563   | yes       | jump (B)   | alive (1)
571   | yes       | jump (A)   | alive (1)
580   | yes       | walk right | dead (0)
582   | no        | jump (A)   | dead (0)
…     | …         | …          | …

We want to learn general rules for how to survive in this situation – by using data – and visualize them in a decision tree.

[Decision tree over on_ground and pos_x (e.g. “> 560 and < 575”) predicting the action jump (A) / jump (B), with confidence values c = 50%, 75%, 100%]
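A minimal sketch (not from the talk) of how such a tree could be trained with scikit-learn: it predicts survival (status) from the observed state and chosen action, using the toy rows from the table above; the one-hot encoding of the action column is an implementation choice.

```python
# Minimal sketch (not from the talk): fitting a decision tree to the toy
# Mario observations above with scikit-learn. Columns follow the table
# (pos_x, on_ground, action); "status" (alive/dead) is the label.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "pos_x":     [563, 571, 580, 582],
    "on_ground": [1, 1, 1, 0],                     # yes = 1, no = 0
    "action":    ["jump (B)", "jump (A)", "walk right", "jump (A)"],
    "status":    [1, 1, 0, 0],                     # alive = 1, dead = 0
})

X = pd.get_dummies(data[["pos_x", "on_ground", "action"]])   # one-hot encode the action
y = data["status"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))      # human-readable rules
```

The learned thresholds on pos_x play the role of the “> 560 and < 575” split sketched on the slide, and the class distribution in each leaf corresponds to the confidence values.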

Data-driven learning – supervised, “down-to-earth”
Can we predict the result of an HPDC (high-pressure die casting) process by using historical data? – YES WE CAN! … in cooperation with

Quality classes: IO, NIO (Outbreak), NIO (Cold shot), NIO (Blowhole)

HPDC process → Historic data (process and quality data) → Prediction model (modelling and training) → Visualization of prediction (inline and web-based; result NIO|IO with reason)

We extended the prediction model by integrating
. mechanical vibration (using solid-borne sound sensors)
. weather data

Acoustic measurements: Fourier transformation & feature extraction
Extended model: k-nearest clustering and random forest tree
Weather data: temporal correlation of weather (and circumstances)
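A sketch of this extended pipeline under stated assumptions: `acoustic` holds one solid-borne-sound recording per casting shot, `weather` two weather features per shot, and `quality` the IO/NIO label; the real project’s feature set and model tuning are not reproduced here.

```python
# Sketch of the extended HPDC quality model described above (assumptions:
# `acoustic` = one solid-borne-sound recording per shot, `weather` = two
# weather features per shot, `quality` = IO/NIO label).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fft_features(signal, n_coeffs=32):
    """Fourier transformation & feature extraction: keep the magnitudes
    of the first n_coeffs frequency components of one acoustic signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum[:n_coeffs]

# toy data standing in for real process recordings
rng = np.random.default_rng(0)
acoustic = rng.normal(size=(200, 1024))    # 200 shots x 1024 sound samples
weather = rng.normal(size=(200, 2))        # e.g. temperature, humidity per shot
quality = rng.integers(0, 2, size=200)     # 1 = IO, 0 = NIO

X = np.hstack([np.array([fft_features(s) for s in acoustic]), weather])
model = RandomForestClassifier(n_estimators=200).fit(X, quality)
print("predicted IO probability of first shot:", model.predict_proba(X[:1])[0, 1])
```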

Data-driven learning – unsupervised: A second example – what if we do not tell what is right?
What if we do not know whether an observation belongs to a specific category? Or whether an observation is good or bad?  Finding the hidden structure in data!
“Although it may seem somewhat mysterious to imagine what a machine could possibly learn given that it doesn't get any feedback from its environment, it is possible to find patterns in image data using probabilistic techniques.” – Zoubin Ghahramani, Professor of Information Engineering at the University of Cambridge, Machine Learning

Batch of unlabeled pictures → Cleansing, preprocessing and hierarchical clustering

 Unsupervised. Human factor is reduced to modeling. (however a certain bias survives…)
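A runnable toy version of this pipeline (an assumption, not the project code): the grey-value vectors below stand in for cleansed and preprocessed pictures, and agglomerative (hierarchical) clustering finds the hidden groups without any labels.

```python
# Toy stand-in for a batch of unlabeled pictures: each "picture" is already
# cleansed/preprocessed into a flat grey-value vector (32x32 pixels). Two
# hidden groups are planted so the clustering has structure to find.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(1)
dark  = rng.uniform(0.0, 0.4, size=(10, 32 * 32))
light = rng.uniform(0.6, 1.0, size=(10, 32 * 32))
pictures = np.vstack([dark, light])

# hierarchical (agglomerative) clustering, no labels involved
labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(pictures)
print(labels)          # the two planted groups are recovered without supervision
```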

Data-driven learning – unsupervised: Unsupervised learning “down-to-earth”
Finding hidden relations in our data that we were not aware of, e.g. understanding failures or bad quality of products and processes … in cooperation with

Data about chemical compositions of steel (identified as low quality – example) [Ruiz, 2014]

Searching for hidden relations in data by applying subgroup mining:
. Sulfur (S) > 0.04% and heat treatment  fragile structure
. Phosphorus (P) > 0.04%  reduced plasticity
. Chrome (Cr) > 16%, Molybdenum (Mo) > 13%, Nickel (Ni) > 56%  no findings
. …
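Dedicated subgroup-discovery tools exist; the following minimal illustration (with hypothetical column names and toy numbers) only shows the core idea of scoring a candidate rule such as “S > 0.04% and heat treatment” by how much its low-quality rate exceeds the overall rate.

```python
# Minimal illustration of the subgroup-mining idea (not the tool used in the
# project): compare the low-quality rate inside a candidate subgroup with the
# overall rate. Column names (S, P, heat_treated, low_quality) are assumptions.
import pandas as pd

df = pd.DataFrame({                      # toy stand-in for the steel data
    "S":            [0.05, 0.02, 0.06, 0.01, 0.05],
    "P":            [0.01, 0.05, 0.02, 0.03, 0.01],
    "heat_treated": [1, 0, 1, 0, 1],
    "low_quality":  [1, 0, 1, 0, 1],
})

def subgroup_quality(df, condition, target="low_quality"):
    """Return (coverage, target rate inside the subgroup, overall target rate)."""
    sub = df[condition(df)]
    return len(sub) / len(df), sub[target].mean(), df[target].mean()

cov, rate, base = subgroup_quality(df, lambda d: (d["S"] > 0.04) & (d["heat_treated"] == 1))
print(f"coverage={cov:.2f}, subgroup rate={rate:.2f}, overall rate={base:.2f}")
```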

Learning by doing – The next step: Using rewards to learn actions
Remember Mario: What if the machine could learn how to solve a level? Why not use some kind of intelligent trial-and-error?

Neuroevolution of augmenting topologies (NEAT) [Stanley, 2002]
. Genetic algorithms on top of neural networks
. At each state the system decides which action to take
. Actions are rewarded if Mario does not die in return
. Level progress by evolving neural networks [SethBling, 2015]

Reinforcement learning (R-learning) is inspired by behaviorist psychology –
 maximizing the expected return by applying a sequence of actions at a current state
 can be applied to a broad variety of problems

Human factor is “very small”
. reduced to very general, mainly formal specifications of the neural network…
. However, the human still influences the underlying representation model
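NEAT itself needs a dedicated library, so as a minimal stand-in for “learning by doing” here is plain tabular Q-learning, the textbook update rule behind the Q-Learning entry later in the talk; the `env` interface (reset, step, actions) is a hypothetical placeholder for something like a Mario level or an intralogistics simulation.

```python
# Tabular Q-learning as a minimal "learning by doing" sketch (a stand-in for
# NEAT, which needs a dedicated library). `env` is a hypothetical interface:
# env.reset() -> state, env.step(a) -> (next_state, reward, done), env.actions.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)                           # Q[(state, action)] -> value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy: mostly exploit the current estimate, sometimes explore
            if random.random() < eps:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(Q[(next_state, a)] for a in env.actions)
            # core update: move Q(s, a) towards reward + discounted future value
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```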

Learning by doing – Reinforcement learning “down-to-earth”
Obviously: Super Mario can easily be extended towards intralogistics scenarios… [TU Delft, 2012] [MiorSoft (reexre), 2014]

[UC Berkeley, 2015]

… for learning and optimization of motions [UC Berkeley, 2015]

… for learning and executing complete assembly tasks [Intelligent Autonomous Systems, 2015]
Should Google have crashed 10,000 cars before coming up with first “ok solutions” for autonomous driving?  Coupling to embodiment theory

… as “pre-training” for human-machine interaction
Avoiding “nonsense solutions” by using simulation environments

Learning by doing – The “Kindergarten for robots”
Transferring human/biological learning processes into all areas – combined with the power of cooperation, “collective learning”

A new approach: object manipulation by “trial-and-error”
. approach is goal-centric (not insight-oriented!)
. two components:
  1. a grasp success predictor, which uses a deep convolutional neural network (CNN) to determine the success potential of a given motion
  2. a continuous servoing mechanism that uses the CNN to continuously update the robot’s motor commands (feedback loop)
. trained using a dataset of over 800,000 grasps
. collected using a cluster of 14 similar (but not identical!!) robotic manipulators
“Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection” [Levine et al., Google, 02/2016]

Object manipulation “up to today”
. humans and animals: fast feedback loop between perception and action;  even very complex manipulation tasks can be performed without advance planning
. robotic manipulation: relies heavily on advance planning and analysis, with relatively simple feedback such as trajectory following (results often slow and unstable, non-adaptive)
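A schematic of the two components described above, not Google’s actual code: `predict_success` stands in for the grasp-success CNN, `sample_candidate_motions` for the candidate generator, and `camera`/`robot` are hypothetical interfaces; the loop is the continuous servoing idea of repeatedly re-picking the motor command with the highest predicted success.

```python
# Schematic of the two components named above (grasp-success predictor +
# continuous servoing), not Google's actual code. All interfaces are hypothetical.
import random

def predict_success(image, motion):
    """Hypothetical stand-in for the CNN: estimated grasp-success probability."""
    return random.random()   # placeholder score

def sample_candidate_motions(n=16):
    """Hypothetical sampler over small end-effector displacements (dx, dy, dz)."""
    return [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 0))
            for _ in range(n)]

def servo_grasp(camera, robot, steps=20):
    for _ in range(steps):                        # closed perception-action loop
        image = camera.capture()                  # current view of gripper + object
        candidates = sample_candidate_motions()
        best = max(candidates, key=lambda m: predict_success(image, m))
        if predict_success(image, best) > 0.9:    # confident enough: close gripper
            robot.close_gripper()
            return True
        robot.move_relative(best)                 # otherwise keep servoing
    return False
```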

Deep learning – The age of deep learning (deep neural networks)
“Today, computers are beginning to be able to generate human-like insights into data…. Underlying … is the application of large artificial neural networks to machine learning, often referred to as deep learning.” [Cognitive Labs, 2016]
Deep Q-Networks (also “deep reinforcement learning”; Q refers to the mathematical action-prediction function behind the scenes): learning directly from high-dimensional sensory input

[Minh, 2015] [nature, 2015]
 AI starts to develop strategies to beat the game
 Signs of “body consciousness”
 Human factor practically zero.
 …
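A compressed sketch of the deep Q-network idea in PyTorch (not the original code of [Minh, 2015]): a network maps an observation to one Q value per action and is regressed onto reward plus the discounted maximum future Q; replay memory, target network and the convolutional front end of the original are omitted, and the tensor sizes are arbitrary.

```python
# Compressed deep Q-network sketch (replay memory, target network and the
# convolutional front end of the original are omitted).
import torch
import torch.nn as nn

n_obs, n_actions, gamma = 4, 2, 0.99
q_net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_actions))
optim = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(obs, action, reward, next_obs, done):
    """One gradient step on a single transition (batching omitted)."""
    q_pred = q_net(obs)[action]                      # Q(s, a)
    with torch.no_grad():                            # bootstrap target
        target = reward + gamma * q_net(next_obs).max() * (1.0 - done)
    loss = nn.functional.mse_loss(q_pred, target)
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()

# example call with random tensors standing in for game observations
dqn_update(torch.randn(n_obs), 1, torch.tensor(1.0), torch.randn(n_obs), torch.tensor(0.0))
```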

Deep learning – Deep learning “down-to-earth”
… a variety of practical applications

Face/picture/object recognition – important feature for autonomous driving etc.
 Central part of “cognitive computing”

Handwriting recognition Natural language processing Anomaly recognition Automated translation

Probabilistic engines – The new probabilistic engines

? Back to Watson: how is this guy running the (Jeopardy!) show??

. 90 IBM Power 750 servers
. For each: a 3.5 GHz POWER7 processor, with 8 cores and 4 threads per core
. In total: 2,880 POWER7 threads
. 16 terabytes of RAM

Linguistic preprocessing → Generation of possible candidates → Evaluation of candidates

DeepQA architecture
. Purely based on natural language processing (NLP)
. Approx. 100 different AI/linguistic methods come into play
. Without any specific semantic representation (“as-is”)
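A toy illustration of that three-stage pipeline, not IBM’s DeepQA implementation: several weak scorers vote on generated answer candidates and the candidate with the highest combined confidence wins; the scorers, candidates and question are all made up for the example.

```python
# Toy illustration of the pipeline above (linguistic preprocessing ->
# candidate generation -> evaluation), not IBM's DeepQA implementation.
def preprocess(question):
    return question.lower().rstrip("?").split()          # crude tokenisation

def generate_candidates(tokens):
    return ["gutenberg", "caxton", "fust"]               # e.g. from search/knowledge sources

def keyword_scorer(tokens, candidate):
    return 1.0 if "printing" in tokens and candidate == "gutenberg" else 0.1

def popularity_scorer(tokens, candidate):
    return {"gutenberg": 0.8, "caxton": 0.5, "fust": 0.2}[candidate]

SCORERS = [keyword_scorer, popularity_scorer]            # DeepQA combines ~100 such methods

def answer(question):
    tokens = preprocess(question)
    ranked = sorted(
        ((sum(s(tokens, c) for s in SCORERS) / len(SCORERS), c)
         for c in generate_candidates(tokens)),
        reverse=True)
    confidence, best = ranked[0]
    return best, confidence

print(answer("Who invented the printing press?"))
```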

Probabilistic engines – Probabilistic engines “down-to-earth”
Watson: from playing Jeopardy! towards becoming some kind of “medical doctor”…

. Today, only 20% of medical knowledge is evidence based (the basis of individualized medicine)
. Also, the amount of medical information is doubling every 5 years: physicians can’t read all the journals
. Data: all types – 1. structured data from electronic medical record databases and 2. unstructured text from physician notes and published literature
 How can we deal with these challenges?

. Goal of Watson: help physicians in diagnosing and treating patients by analyzing large data
. Acting as a huge preprocessor for all kinds of medical information
. Potential to transform health care into individual medicine
. Currently tested by several clinics, e.g. Mayo, MD Anderson, Cleveland, and Sloan-Kettering

“IBM's Watson is better at diagnosing cancer than human doctors”
. Example “p53”: Watson identified possible treatments for protein p53, linked to many cancers
. Example “Google Flu” (another engine): already now, doctors integrate the results of Google Flu (spreading and direction of contagious illnesses), as it is much faster and more precise than the results of the best medical centers in the world

Deep learning – Where the Story Goes: AlphaGo

Go originated in China more than 2,500 years ago. Confucius wrote about it. As simple as the rules are, Go is a game of profound complexity. This complexity is what makes Go hard for computers to play, and an irresistible challenge to artificial intelligence (AI) researchers. [adapted from Hassabis, 2016]

The problem: 2.57×10^210 possible positions – that is more than the number of atoms in the universe, and more than a googol times (10^100) larger than chess.

 Bringing it all together!

Data-driven learning:
. Training set: 30 million moves recorded from games played by human experts
. Creating deep neural networks: 12 network layers with millions of neuron-like connections
. Predicting the human move (57% of the time)

Reinforcement learning:
. Learning non-human strategies: AlphaGo, designed by Google DeepMind, played against itself in thousands of games and evolved its neural networks; Monte Carlo tree search [Hassabis, 2016]
. March 2016: Beating Lee Se-dol (World Champion) – AlphaGo won 4 games to 1 (5 years before time)

 Achieving one of the grand challenges of AI
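One of the ingredients named above, sketched in isolation: a bare-bones Monte Carlo tree search for a generic two-player game. The `game` interface (legal_moves, play, is_over, winner) is hypothetical, rollouts are random instead of being guided by AlphaGo’s policy/value networks, and the alternation of player perspective during backup is simplified.

```python
# Bare-bones Monte Carlo tree search (one ingredient of AlphaGo, without the
# policy/value-network guidance). The `game` interface is hypothetical, and
# player-perspective alternation during backup is omitted for brevity.
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.wins = {}, 0, 0.0

    def ucb_child(self, c=1.4):                     # selection: UCB1 trade-off
        return max(self.children.values(),
                   key=lambda n: n.wins / (n.visits + 1e-9)
                   + c * math.sqrt(math.log(self.visits + 1) / (n.visits + 1e-9)))

def mcts(root_state, game, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        while node.children and not game.is_over(node.state):       # 1. select
            node = node.ucb_child()
        if not node.children and not game.is_over(node.state):      # 2. expand
            for move in game.legal_moves(node.state):
                node.children[move] = Node(game.play(node.state, move), node)
        state = node.state                                           # 3. rollout
        while not game.is_over(state):
            state = game.play(state, random.choice(game.legal_moves(state)))
        result = game.winner(state)
        while node is not None:                                      # 4. backup
            node.visits += 1
            node.wins += result
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)
```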

Deep learning – Microsoft Visual Storytelling (SIS): machines becoming creative

“Creativity is a phenomenon whereby something new … is formed. The created item may be ! intangible (such as an idea, a scientific theory, a musical composition or a joke) or a physical object (such as an invention, a literary work or a painting).” [adapted from Wikipedia, last visited 5/3/2016]

. DII (descriptions for images in isolation): traditional storytelling software
. SIS (stories for images in sequence): new approach towards storytelling
. Based on SIND – Sequential Image Narrative Dataset: 81,743 unique photos in 20,211 sequences, aligned to both descriptive (caption) and story language
. [Margaret Mitchell / Microsoft, 04/2016, together with colleagues from Facebook]
Visual Storytelling by Microsoft, based on deep neural networks (convolutional neural networks)

Deep learning – Google DeepDream: machines becoming creative

“Creativity is a phenomenon whereby something new … is formed. The created item may be ! intangible (such as an idea, a scientific theory, a musical composition or a joke) or a physical object (such as an invention, a literary work or a painting).” [adapted from Wikipedia, last visited 5/3/2016]

“Do Androids Dream of Electric Sheep?” (science fiction novel by American writer Philip K. Dick, published in 1968)

Computational creativity (artificial creativity) … is a multidisciplinary endeavour that is located at the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts. [adapted from Wikipedia, last visited 5/3/2016]

„Can machines be creative?“
Iamus, a computer cluster composing classical music by genetic algorithms; concert for Turing’s 100th birthday
Van Gogh’s Starry Night interpreted by Google DeepDream, based on deep neural networks
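DeepDream’s core trick is gradient ascent on the input image: amplify whatever an intermediate layer of a trained CNN already responds to. A minimal PyTorch sketch (not Google’s original implementation) follows; the pretrained-weights API assumes a recent torchvision version, and the layer index and step size are arbitrary choices.

```python
# Minimal DeepDream-style gradient ascent (not Google's original code):
# nudge the input image along the gradient of an intermediate layer's
# activation norm, so the image increasingly shows what that layer "sees".
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
layer = 20                                   # which feature layer to "dream" on

def deep_dream(img, steps=20, lr=0.05):
    img = img.clone().requires_grad_(True)
    for _ in range(steps):
        x = img
        for i, module in enumerate(model):   # forward pass up to the chosen layer
            x = module(x)
            if i == layer:
                break
        loss = x.norm()                      # amplify this layer's activations
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.norm() + 1e-8)   # gradient ascent step
            img.grad.zero_()
    return img.detach()

dreamed = deep_dream(torch.rand(1, 3, 224, 224))   # random image as a stand-in
```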

Outline

I. Introduction
. The rise of AI… and its relation to 4.0
. Entering the scene: intelligent self-learning systems
II. The basics of machine learning and their applications
. Data-driven methods: supervised and unsupervised learning
. Trial-and-error driven methods: neuroevolution
. Probabilistic engines
. Deep learning – a powerful tool for “both sides”
. Where the story goes: AlphaGo and other stories
. Machines getting creative
III. The brain projects
. To be or not to be …a bird!
. The death of Moore’s law
. The limitations of the Von Neumann architecture
. Neuromorphic computing
IV. Summary and Outlook
. The concept of cognitive computing
. The embodiment theory and its implications for your “colleague the robot”
. The END!

Summary of part I – What a zoo! Get me out of here… :)
Relax. Order is half of life.

Machine learning | Neuromorphic computing

A – Learning by observations and explanations  Data-driven learning: supervised…, un-supervised…, k-nearest clustering, decision trees, random forest trees
B – Learning by doing  Trial-and-error learning: reinforcement learning, neuroevolution, genetic algorithms, Q-Learning, SARSA, Monte Carlo tree search
C – Using “biological brain structures”  Neuromorphic computing
Used across these categories: neural networks, deep learning, probabilistic engines

WANT BETTER RESULT? – Just shake it!!

The “second way” – About the connection between birds and AI

. In traditional machine learning, “general purpose computers” are considered.
. These computers have no strong similarities to biological brain structures.
. Even if they work less effectively than biological brains (so far), they have an enormous energy consumption.

. In the “brain projects”, specialized computer architectures are developed, driven by biological paradigms.
. These architectures are more efficient for certain tasks, but do not follow the “general purpose idea” any longer.
. Hardware and software become strongly coupled. Thus, experimental changes become more complicated.

“You don’t need to be a bird to fly.” … however, it‘s ok to be a bird to fly!

machine learning – neuromorphic computing

Limits of general purpose computers and machine learning – The limitations of Moore’s law

Gordon Moore (born 1929)
. co-founder and Chairman Emeritus of Intel Corporation
. before: director of R&D at Fairchild Semiconductor
. education: Berkeley, Caltech, Hopkins

Moore's law (from a speech given by Moore in 1965): … the prediction that the number of transistors that can be placed on the same volume and for constant costs will double every 18 months! (above: the current definition; the original used a doubling speed of one year)

5 – the magic boundary…
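To make the 18-month doubling concrete, a one-line helper computing the implied growth factor:

```python
# Making the 18-month doubling concrete: implied growth factor after n years.
def moore_factor(years, doubling_months=18):
    return 2 ** (years * 12 / doubling_months)

print(moore_factor(10))   # roughly 100x more transistors after a decade
```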

Limits of general purpose computers and machine learning – Von Neumann architecture

Turing completeness:
. colloquial: “a general-purpose computer can do what a Turing machine can do”
. and by Church’s lemma: “a Turing machine can compute ‘every algorithm’”
 thus, Turing completeness means: can compute everything as long as the algorithm is known
. the von Neumann architecture is Turing-complete

The von Neumann bottleneck:
. the shared single bus between the program memory and data memory
. limits the data transfer rate between CPU and memory
. resulting in an intellectual bottleneck: “only one thing at a time thinking”
. Memory wall: CPU speed rises faster than the speed of transfer in the memory

John von Neumann (1903 – 1957)
. a Hungarian-American mathematician, physicist (quantum mechanics), inventor, computer scientist, …
. Manhattan project
. ENIAC project: “Great brain”, first pure electronic computer, 1946
. Turing complete

Towards new approaches in data sciences – What is neuromorphic computing?

“Neuromorphic engineering, also known as neuromorphic computing, … describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system.

Neuromorphic engineering is an interdisciplinary subject … to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems.”

Towards new approaches in data sciences – The “Brain projects”

Worldwide, a strong competition has already started. The biggest research activities in this field are…:

Human Brain Project (HBP)
. EC FET Flagship
. Established: 2013
. Timeline: about 10 years
. Director: …
. Coordinator: EPFL
. Web: www.humanbrainproject.eu
. Partners: 100 all over Europe, in Germany: Heidelberg, Jülich, …
. Total costs: 1.19 billion €, about half of it provided by the EC

Brain Initiative
. White House BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies)
. Established: 2013 by Obama
. Timeline: about 10 years
. Directors: Cornelia Bargmann and William Newsome
. Web: http://www.braininitiative.nih.gov/
. Total costs: private-public mix, about 100 mio $ in 2014, …

Towards new approaches in data sciences – Examples of neuromorphic chips

Intel, sensor image processing architecture
SpiNNaker, component of the Human Brain Project

SpiNNaker:
. Spiking: includes time and temporal coding
. novel computer architecture
. goal: to use 1 mio. ARM processors (currently 0.5 mio) in a massively parallel computing platform based on spiking neural networks
. “One Million Chips Mimic One Percent Of The Brain”
. by Steve Furber, Univ. of Manchester, one of the world’s best designers

Intel Reveals Spin-based Neuromorphic Chip Architecture:
. design with up to 300 times lower energy computation
. involves the combined use of lateral spin valves and memristors (memory resistors, whose resistance is not constant but depends on the history of current). In a cross-bar switch lattice, lateral spin valves act as neurons, and memristors act as synapses.
. [a paper published by Intel in June 2012, Sharad et al., 2012: arXiv:1206.3227]
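What “spiking: includes time and temporal coding” means in miniature, as a rough sketch (all constants are arbitrary illustration values, not SpiNNaker parameters): a leaky integrate-and-fire neuron integrates input current over time, and the timing of its spikes is what carries the information.

```python
# Leaky integrate-and-fire neuron: integrate input current over time and emit
# a spike whenever the membrane potential crosses a threshold. All constants
# are arbitrary illustration values, not SpiNNaker parameters.
def lif_neuron(currents, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for t, i_in in enumerate(currents):
        v += dt * (-v / tau + i_in)        # leaky integration of the input
        if v >= v_thresh:                  # threshold crossed -> spike
            spikes.append(t * dt)          # the spike *time* carries the information
            v = v_reset
    return spikes

print(lif_neuron([0.15] * 50))             # constant drive -> regular spike train
```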

Towards new approaches in data sciences – Timeline: results so far and the expectations in the future

After Moore’s law: What might be the replacement? ? A pipeline of new technologies to prolong Moore’s magic

“For some time, making transistors smaller has no longer been making them more energy-efficient; as a result, the operating speed of high- end chips has been on a plateau since the mid-2000s.” [economist, 2016]

The short history of transistors
• 2004 strained silicon
• 2007 metal oxides used to beat the effects of tunneling
• 2012 finFET transistors
• 2020 “gate-all-around“ transistors

Beyond changes in the design of transistors, more exotic solutions may be needed, for example materials beyond silicon. But in the end, we just “stave off the need for something radical”. [Greg Yeric, ARM designer, 2016]

Towards new approaches in data sciences – In a nutshell: the alternatives to the beaten paths

Is AI at its end because Moore’s law is not holding much longer?
. NO. New technologies will take over; however, there may be a gap in time…
. NO, because additionally, the future may lie in more and more powerful algorithms instead of pure computational speed and power.

Specialized chips: neuromorphic computing

New materials: nanotubes, cadmium telluride, graphene, molybdenite, …

Quantum computing: make direct use of quantum-mechanical phenomena
New geometries: 3D architectures, …

Outline

I. Introduction
. The rise of AI… and its relation to 4.0
. Entering the scene: intelligent self-learning systems
II. The basics of machine learning and their applications
. Data-driven methods: supervised and unsupervised learning
. Trial-and-error driven methods: neuroevolution
. Probabilistic engines
. Deep learning – a powerful tool for “both sides”
. Where the story goes: AlphaGo and other stories
. Machines getting creative
III. The brain projects
. To be or not to be …a bird!
. The death of Moore’s law
. The limitations of the Von Neumann architecture
. Neuromorphic computing
IV. Summary and Outlook
. The concept of cognitive computing
. The embodiment theory and its implications for your “colleague the robot”
. The END!

Summary – The “new AI”: Cognitive Computing

“Cognitive computing (CC) makes a new class of problems computable. It addresses complex situations that are characterized by ambiguity and uncertainty; in other words it handles human kinds of problems. …To do this, systems often need to weigh conflicting evidence and suggest an answer that is “best” rather than “right”. Cognitive computing systems make context computable.”

“Cognitive computing systems [are] a category of technologies that uses natural language processing and machine learning to enable people and machines to interact more naturally […]. These systems will learn and interact to provide expert assistance to scientists, engineers, lawyers, and other professionals in a fraction of the time it now takes.”

“Cognitive computing is the simulation of human thought processes in a computerized model…. involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works.”

Summary – From embodiment … to humanoids

Embodiment theory: „intelligence needs a body“ Shadow Dexterous Hand

 The existence of a body (incl. sensors and actuators) is a basic prerequisite for building experience and, finally, for the development of intelligence.

KIT, Dillmann, SFB 588

Robonaut 2- NASA

Asimo – Honda
Atlas 2016 – Boston Dynamics
The Bongard robot – learning through embodiment [Bongard, 2006; Lipson, 2007]

Embodiment theory:  „different bodies = different intelligences“  … leading to humanoids / humanoid components

Zykov V., Mytilinaios E., Adams B., Lipson H. (2005). "Self-reproducing machines", Nature, Vol. 435, No. 7038, pp. 163-164.
Bongard J., et al. (2006). "Resilient Machines Through Continuous Self-Modeling", Science 314.
Lipson H. (2005). "Evolutionary Design and Evolutionary Robotics", Biomimetics, CRC Press (Bar Cohen, Ed.), pp. 129-155.

Summary – When do you start embracing artificial intelligence?

Waiting for Google to take over? – Google is addressing fields such as gaming, mobility, language … because there they get the data they need. They do not have the data for production lines – so far….

Production engineering is still somewhat hesitant and waiting, but you have the data and we have the domain experts (from production engineering as well as data science), so – let’s get started!

[Bloomberg, 2016]

Summary – … in five steps!

4.0: Revolution of (distributed) artificial intelligence

. AI today
. 4th Industrial Revolution
. AI tomorrow
. Innovation – a question of culture
. Cognitive Computing and Embodiment

Univ.-Prof. Dr. rer. nat. Sabina Jeschke
Head of Institute Cluster IMA/ZLW & IfU
phone: +49 241-80-91110
[email protected]

Co-authored by:

Prof. Dr.-Ing. Tobias Meisen Institute Cluster IMA/ZLW & IfU [email protected]

Dr.-Ing. Christian Büscher Research group leader „Production Technology“ [email protected]

Thorsten Sommer, M. Eng.
Team „Knowledge Engineering“
[email protected]

Thank you!

www.ima-zlw-ifu.rwth-aachen.de

Prof. Dr. rer. nat. Sabina Jeschke

1968 Born in Kungälv/Sweden
1991 – 1997 Studies of Physics, Mathematics, Computer Sciences, TU Berlin
1994 NASA Ames Research Center, Moffett Field, CA/USA
10/1994 Fellowship „Studienstiftung des Deutschen Volkes“
1997 Diploma in Physics
1997 – 2000 Research Fellow, TU Berlin, Institute for Mathematics
2000 – 2001 Lecturer, Georgia Institute of Technology, GA/USA
2001 – 2004 Project leadership, TU Berlin, Institute for Mathematics
04/2004 Ph.D. (Dr. rer. nat.), TU Berlin, in the field of Computer Sciences
2004 Set-up and leadership of the Multimedia Center at TU Berlin
2005 – 2007 Junior professor „New Media in Mathematics & Sciences“ & Director of the Multimedia Center MuLF, TU Berlin
2007 – 2009 Univ.-Professor, Institute for IT Service Technologies (IITS) & Director of the Computer Center (RUS), Department of Electrical Engineering, University of Stuttgart
since 06/2009 Univ.-Professor, Head of the Institute Cluster IMA/ZLW & IfU, Department of Mechanical Engineering, RWTH Aachen University
since 10/2011 Vice Dean of the Department of Mechanical Engineering, RWTH Aachen University
since 03/2012 Chairwoman VDI Aachen
since 05/2015 Supervisory Board of Körber AG, Hamburg

References

[Bloomberg, 2016] Why 2015 Was a Breakthrough Year in Artificial Intelligence, http://www.bloomberg.com/news/articles/2015-12-08/why-2015-was-a-breakthrough-year-in-artificial-intelligence, last visited 18.02.2016
[Cognitive Labs, 2016] http://cognitivlabs.com/the-age-of-neural-networks/, last visited 18.02.2016
[Hassabis, 2016] AlphaGo: using machine learning to master the ancient game of Go, https://googleblog.blogspot.de/2016/01/alphago-machine-learning-game-go.html, last visited 18.02.2016
[Intelligent Autonomous Systems, 2015] Collaborative assembly with phase estimation (ISRR 2015), https://www.youtube.com/watch?v=4qDFv02xlNo, last visited 18.02.2016
[Minh, 2015] Mnih, Volodymyr; et al. (2015). “Human-level control through deep reinforcement learning”, Nature 518: 529–533.
[MiorSoft (reexre), 2014] Neuroevolution – Car learns to drive, https://www.youtube.com/watch?v=5lJuEW-5vr8, last visited 18.02.2016
[nature, 2015] http://www.nature.com/nature/journal/v518/n7540/fig_tab/nature14236_SV1.html, last visited 18.02.2016
[Ruiz, 2014] Ruiz, Paula Andrea Rotes; et al. (2014). “An Interactive Approach for the Post-processing in a KDD Process”. In: Advances in Production Management Systems. Innovative and Knowledge-Based Production Management in a Global-Local World, pp. 93-100.
[SethBling, 2015] MarI/O – Machine Learning for Video Games, https://www.youtube.com/watch?v=qv6UVOQ0F44, last visited 18.02.2016
[Stanley, 2002] Stanley, Kenneth O. and Miikkulainen, Risto (2002). “Evolving Neural Networks through Augmenting Topologies”. In: Evolutionary Computation 10(2), pp. 99–127.
[TU Delft, 2012] TU Delft robot Leo learns to walk, https://www.youtube.com/watch?v=SBf5-eF-EIw, last visited 18.02.2016
[UC Berkeley, 2015] BRETT the Robot learns to put things together on his own, https://www.youtube.com/watch?v=JeVppkoloXs, last visited 18.02.2016
[Lee et al., 2011] Unsupervised Learning of Hierarchical Representations with Convolutional Deep Belief Networks, http://www.cs.princeton.edu/~rajeshr/papers/cacm2011-researchHighlights-convDBN.pdf, last visited 23.02.2016, doi:10.1145/2001269.2001295
[economist, 2016] Technology Quarterly: After Moore’s Law, http://www.economist.com/technology-quarterly/2016-03-12/after-moores-law, last visited 06.04.2016
