Neuromorphic Computing in Ginzburg-Landau Lattice Systems
Neuromorphic computing in Ginzburg–Landau polariton lattice systems
Andrzej Opala and Michał Matuszewski, Institute of Physics, Polish Academy of Sciences, Warsaw
Sanjib Ghosh and Timothy C. H. Liew, Division of Physics and Applied Physics, Nanyang Technological University, Singapore

Machine learning with neural networks

What is machine learning? Modern computers are extremely efficient at solving many problems, but some tasks are very hard to implement.
● It is difficult to write an algorithm that would recognize an object from different viewpoints, in different sceneries, lighting conditions, etc.
● It is difficult to detect a fraudulent credit card transaction.
We need to combine a large number of weak rules with complex dependencies rather than follow simple and reliable rules. These tasks are often relatively easy for humans.

A machine learning algorithm is not written to solve a specific task, but to learn how to classify / detect / predict on its own.
● A large number of examples is collected and used as input for the algorithm during the teaching phase.
● The algorithm produces a program (e.g. a neural network with fine-tuned weights) which is able to do the job not only on the provided samples, but also on data it has never seen before. It usually contains a great number of parameters.

Three main types of machine learning:
1. Supervised learning – direct feedback; input data is provided with clearly defined labels.
2. Unsupervised learning – no labels or feedback; the machine tries to understand / classify the data on its own.
3. Reinforcement learning – the machine learns how to behave in an environment so as to maximize rewards, which may be delayed in time.
Y. LeCun, Y. Bengio, and G. Hinton, Nature 521, 436 (2015).

The Modified National Institute of Standards and Technology (MNIST) dataset: a "fruit fly" of machine learning. The best networks achieve error rates of a few tenths of a percent. http://yann.lecun.com/exdb/mnist/

Deep networks perform a nonlinear transformation and dimensional expansion of the input.
Y. LeCun, Y. Bengio, and G. Hinton, Nature 521, 436 (2015); David Verstraeten, PhD thesis.

The backpropagation algorithm is an efficient method to teach deep (multilayer) neural networks.

Recurrent neural networks

Recurrent networks can handle time sequences, while feed-forward networks are static.
https://towardsdatascience.com/recurrent-neural-networks-and-lstm-4b601dd822a5; DeepLearning.TV
Example application: neural image caption generation with visual attention, Kelvin Xu et al., PMLR 37:2048-2057, 2015; http://arxiv.org/abs/1502.03044 (2015).

A recurrent network can be transformed into a feedforward network by unfolding it in time. After unfolding, the recurrent network can be taught using the backpropagation algorithm (a minimal sketch follows below).
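To make the unfolding picture concrete, here is a minimal sketch (not taken from the talk; all sizes and names are hypothetical) of how a recurrent update becomes a stack of feed-forward layers with shared weights. A loss computed on the resulting states can then be backpropagated through the unfolded copies, with the gradient contributions of each copy summed ("backpropagation through time").

```python
import numpy as np

# A recurrent network applies the same weights at every time step; unfolding it
# for T steps yields a T-layer feed-forward network with shared weights, to
# which ordinary backpropagation applies.

rng = np.random.default_rng(0)
n_in, n_hidden, T = 3, 16, 10                  # hypothetical sizes
W_in = rng.normal(scale=0.3, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.3, size=(n_hidden, n_hidden))

def unroll(inputs):
    """Forward pass through the unfolded network: one layer per time step."""
    h = np.zeros(n_hidden)
    states = []
    for u in inputs:
        h = np.tanh(W_in @ u + W_rec @ h)      # same W_in, W_rec in every copy
        states.append(h)
    return np.array(states)

sequence = rng.normal(size=(T, n_in))
hidden_states = unroll(sequence)
print(hidden_states.shape)                     # (10, 16): one state per unfolded layer
```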
Neuromorphic computing

Neuromorphic computing is the attempt to adjust the architecture of the physical system (the implementation) to the architecture of the neural network (the model), in contrast to the von Neumann architecture of conventional computers.
P. A. Merolla et al., Science 345, 668 (2014)
● Neuromorphic engineering will probably be necessary to effectively transfer machine learning from data centers to end-user devices (e.g. smartphones).
● It can mimic neurobiological systems and contributes to their understanding.
Hardware platforms: IBM, Heidelberg, Manchester, Stanford.
● The Human Brain Project (EU)
● BRAIN Initiative (US)

Reservoir computing

● A simple way to quickly teach a network is to tune only the output weights, while keeping the other connections random and fixed (in the slide figure, only the output weights are tunable; the internal weights are fixed).
● Teaching is reduced to a simple linear regression.
● Random expansion of the input in the early layers helps significantly.
● The idea works well in recurrent networks.
● The random fixed weights in the "reservoir" must be chosen carefully, so that the activation neither dies out nor gets amplified exponentially (echo state property).
● Reservoir networks work very well with time series, especially one-dimensional ones.
● They can be trained very quickly, but require more nodes than a fully tunable recurrent neural network for the same task.
● As the weights in the reservoir are not tuned, this scheme is well suited for hardware implementations in many systems.
L. Appeltant et al., Nat. Commun. 2, 468 (2011); D. Brunner et al., Nat. Commun. 4, 1364 (2013); K. Vandoorne et al., Nat. Commun. 5, 3541 (2014); K. Nakajima et al., Sci. Rep. 5, 10487 (2015); C. Du et al., Nat. Commun. 8, 2204 (2017)

Reservoir computing with polariton lattices

Properties of exciton-polariton condensates:
● Excellent transport properties and extremely low effective mass, thanks to the photonic component.
● Very strong interparticle interactions, thanks to the exciton component – a world record of (ultrafast) optical nonlinearity.
● The short lifetime (picoseconds to hundreds of picoseconds) is an issue, but also makes these systems interesting for fundamental research on nonequilibrium physics.

Polariton lattices

Lattices of microcavity polariton micropillars can be fabricated with extreme precision.
C2N Polariton Quantum Fluids group; C. E. Whittaker et al., PRL 120, 097401 (2018); S. Klembt et al., Nature 562, 552 (2018); M. Milicevic et al., 2D Mater. 2, 034012 (2015).

Reservoir computing with polaritons

The lattice is modeled by a discrete complex Ginzburg–Landau equation, whose terms describe resonant injection, coupling between lattice sites, polariton–polariton interactions, nonlinear losses, and the balance between nonresonant pumping and linear losses.
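The slide lists the physical ingredients of the model rather than the equation itself. A generic lattice form consistent with those ingredients (written here as an assumption; J, g, P_n, γ, Γ and F_n are assumed notation and not necessarily the paper's) could read:

```latex
i\hbar\,\frac{d\psi_n}{dt} \;=\; -J\sum_{\langle m\rangle}\psi_m
  \;+\; g\,|\psi_n|^2\,\psi_n
  \;+\; \frac{i\hbar}{2}\Bigl(P_n-\gamma-\Gamma\,|\psi_n|^2\Bigr)\psi_n
  \;+\; F_n(t),
```

where the hopping term with J describes the coupling between neighboring pillars, g|ψ_n|²ψ_n the interactions, P_n the gain from the nonresonant background pump balanced against the linear losses γ, Γ|ψ_n|² the nonlinear losses, and F_n(t) the resonantly injected input field.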
● The couplings within the reservoir are fixed and chosen as a random matrix of realistic values.
● The system is excited with resonant lasers (the input) and a nonresonant background pump P.
● At the end of the evolution, the density in each node is recorded and used for prediction.
● Supervised teaching consists of tuning the output weights to minimize the cost (error) function (a minimal numerical sketch of this readout scheme is given at the end of these notes).

First test: the Mackey–Glass equation, a nonlinear, time-dependent process. The system is taught to predict the future state of a nonlinear process with memory. (Figure: target vs. prediction of the polariton reservoir.)

A more ambitious task: the MNIST dataset.

The polariton reservoir works optimally at the threshold of condensation (polariton lasing).

Performance versus the size of the polariton lattice: the optimal error rate is comparable to that of a fully tunable network with a single hidden layer.

Further benchmarks shown in the talk: speech recognition on the TI-46 dataset, and an estimate of the efficiency of the implementation.

Conclusions

● Reservoir computing is a neural network architecture which can be implemented in various physical systems.
● We demonstrated that systems described by the complex Ginzburg–Landau equation can be used for machine learning applications.
● Exciton-polariton lattices may benefit from the extremely fast timescales of their dynamics and the precision of lattice fabrication.

arXiv:1808.05135, to be published in Phys. Rev. Applied.
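As a closing illustration of the readout-only training scheme referred to above (fixed random internal weights, output weights fitted by linear regression), the following sketch uses a generic tanh reservoir as a stand-in for the polariton lattice and performs one-step-ahead prediction of the Mackey–Glass series. It is a toy example with arbitrary parameter values, not the authors' code.

```python
import numpy as np

# Generic reservoir-computing sketch: fixed random reservoir, trained linear
# readout, one-step-ahead prediction of the Mackey-Glass time series.

rng = np.random.default_rng(1)

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0):
    """Euler integration of dx/dt = beta*x(t-tau)/(1+x(t-tau)^n) - gamma*x(t)."""
    history = int(tau / dt)
    x = np.full(n_steps + history, 1.2)
    for t in range(history, n_steps + history - 1):
        x_tau = x[t - history]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1 + x_tau**n) - gamma * x[t])
    return x[history:]

# Fixed random reservoir, rescaled so the activation neither dies out nor
# blows up (echo state property: spectral radius below 1).
N = 200
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Record the reservoir state at every input step (the 'node densities')."""
    states = np.zeros((len(u), N))
    x = np.zeros(N)
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in * ut)
        states[t] = x
    return states

series = mackey_glass(3000)
u, target = series[:-1], series[1:]            # predict the next value
X = run_reservoir(u)

# Supervised teaching = linear (ridge) regression on the output weights only.
split, ridge = 2000, 1e-6
A = X[:split]
W_out = np.linalg.solve(A.T @ A + ridge * np.eye(N), A.T @ target[:split])

pred = X[split:] @ W_out
nmse = np.mean((pred - target[split:])**2) / np.var(target[split:])
print(f"test NMSE: {nmse:.3e}")
```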