SARATOV STATE UNIVERSITY NAMED AFTER N.G. CHERNYSHEVSKY

А.И. Матяшевская, Е.В. Тиден

THE POWER OF ALGORITHMS: Part 3

A Study Guide

Saratov, 2019

The Power of Algorithms: Part 3: a foreign-language study guide for students / comp. А.И. Матяшевская, Е.В. Тиден. Saratov, 2019. 83 pp.

Reviewer: С.А. Шилова, Candidate of Philosophical Sciences

Table of Contents

Preface
New theory cracks open the black box of deep learning
Artificial intelligence shows why atheism is unpopular
The future of online dating
Seduction, Inc.
Supplementary reading

PREFACE

This study guide comprises topical educational texts (2018-2019) for students of the Faculty of Mechanics and Mathematics (degree programmes 02.03.01 "Mathematics and Computer Science", 01.03.02 "Applied Mathematics and Informatics" and 38.03.05 "Business Informatics"). Its aim is to develop the skills of reading and translating popular-science texts, as well as students' oral proficiency: the ability to express a point of view and to assess the problem under discussion.

The guide consists of five sections examining the significance of information technology in the modern world. Each section contains authentic materials (sources: The Atlantic, Gizmodo, Aeon, Vox, Logic magazine, Bloomberg, Quanta Magazine) and accompanying exercises. The section "Supplementary reading" provides material for expanding vocabulary and further consolidating the skills of working with texts in the students' field of study. The guide can be used successfully both for classroom work and for independent practice.

1. New Theory Cracks Open the Black Box of Deep Learning

Exercise I.
Say what Russian words help to guess the meaning of the following words: algorithms, experts, neuroscientist, video, principle, systems, operates, neurons, signals, photo.

Exercise II.
Make sure you know the following words and word combinations: coarse-graining, drawn-out, distinct, plausibility, feat, to glean, discrete, to traverse, salient, to squeeze.

New Theory Cracks Open the Black Box of Deep Learning

A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

Even as machines known as “deep neural networks” have learned to converse, drive cars, beat video games, dream, paint pictures and help make scientific discoveries, they have also confounded their human creators, who never expected so-called “deep-learning” algorithms to work so well.
No underlying principle has guided the design of these learning systems, other than vague inspiration drawn from the architecture of the brain (and no one really understands how that operates either). Like a brain, a deep neural network has layers of neurons — artificial ones that are figments of computer memory. When a neuron fires, it sends signals to connected neurons in the layer above. During deep learning, connections in the network are strengthened or weakened as needed to make the system better at sending signals from input data — the pixels of a photo of a dog, for instance — up through the layers to neurons associated with the right high-level concepts, such as “dog.” After a deep neural network has “learned” from thousands of sample dog photos, it can identify dogs in new photos as accurately as people can.
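The training loop sketched in that paragraph can be made concrete. The code below is a minimal illustration in plain Python with NumPy; the two-layer architecture, the toy "dog/not-dog" labels and every name in it are assumptions chosen for the example, not details from the article. Signals flow up through a layer of artificial neurons, and the connections are strengthened or weakened by gradient descent:

    # A minimal two-layer neural network trained by gradient descent (NumPy only).
    # Toy stand-in for the article's example: 2-D points instead of photo pixels,
    # label 1.0 ("dog") if a point falls inside a diamond-shaped region, else 0.0.
    import numpy as np

    rng = np.random.default_rng(0)

    X = rng.uniform(-1.0, 1.0, size=(200, 2))
    y = (np.abs(X[:, 0]) + np.abs(X[:, 1]) < 1.0).astype(float).reshape(-1, 1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer of 8 artificial "neurons", one output neuron.
    W1 = 0.5 * rng.normal(size=(2, 8))
    b1 = np.zeros(8)
    W2 = 0.5 * rng.normal(size=(8, 1))
    b2 = np.zeros(1)

    lr = 0.5
    for step in range(2000):
        # Forward pass: each layer "sends signals to the layer above".
        h = sigmoid(X @ W1 + b1)           # hidden-layer activations
        p = sigmoid(h @ W2 + b2)           # predicted probability of "dog"

        # Backward pass: gradients of the cross-entropy loss.
        d_out = (p - y) / len(X)           # gradient at the output logit
        dW2 = h.T @ d_out
        db2 = d_out.sum(axis=0)
        d_h = (d_out @ W2.T) * h * (1.0 - h)   # chain rule through the sigmoid
        dW1 = X.T @ d_h
        db1 = d_h.sum(axis=0)

        # "Connections are strengthened or weakened as needed."
        W1 -= lr * dW1
        b1 -= lr * db1
        W2 -= lr * dW2
        b2 -= lr * db2

    accuracy = float(((p > 0.5) == (y > 0.5)).mean())
    print(f"training accuracy: {accuracy:.2f}")

The only "learning" anywhere in the sketch is the four weight updates at the bottom of the loop; everything the network ends up knowing about the diamond region is stored in those strengthened and weakened connections.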
The magic leap from special cases to general concepts during learning gives deep neural networks their power, just as it underlies human reasoning, creativity and the other faculties collectively termed “intelligence.” Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.

Naftali Tishby, a computer scientist and neuroscientist, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck.” The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments reveal how this squeezing procedure happens during deep learning, at least in the cases studied.

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.” Geoffrey Hinton, a pioneer of deep learning who works at Google and the University of Toronto, emailed Tishby after watching his Berlin talk. “It’s extremely interesting,” Hinton wrote. “I have to listen to it another 10,000 times to really understand it, but it’s very rare nowadays to hear a talk with a really original idea in it that may be the answer to a really major puzzle.”

According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you’re an algorithm or a conscious being, that long-awaited answer “is that the most important part of learning is actually forgetting.”

Tishby began contemplating the information bottleneck around the time that other researchers were first mulling over deep neural networks, though neither concept had been named yet. It was the 1980s, and Tishby was thinking about how good humans are at speech recognition — a major challenge for AI at the time. Tishby realized that the crux of the issue was the question of relevance: What are the most relevant features of a spoken word, and how do we tease these out from the variables that accompany them, such as accents, mumbling and intonation? In general, when we face the sea of data that is reality, which signals do we keep? “This notion of relevant information was mentioned many times in history but never formulated correctly,” Tishby said in an interview last month. “For many years people thought information theory wasn’t the right way to think about relevance, starting with misconceptions that go all the way to Shannon himself.”

Claude Shannon, the founder of information theory, in a sense liberated the study of information starting in the 1940s by allowing it to be considered in the abstract — as 1s and 0s with purely mathematical meaning. Shannon took the view that, as Tishby put it, “information is not about semantics.” But, Tishby argued, this isn’t true. Using information theory, he realized, “you can define ‘relevant’ in a precise sense.” Imagine X is a complex data set, like the pixels of a dog photo, and Y is a simpler variable represented by those data, like the word “dog.” You can capture all the “relevant” information in X about Y by compressing X as much as you can without losing the ability to predict Y.
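That closing definition has a standard formal counterpart: the information bottleneck objective that Tishby introduced with Fernando Pereira and William Bialek. The formula below is supplied here for reference rather than quoted from the text; T denotes the compressed representation of X, I(·;·) is mutual information, and β > 0 is a trade-off parameter:

    \min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)

The first term pushes T to discard, or forget, as much of X as possible; the second rewards keeping whatever in X predicts Y; and β sets the exchange rate between compression and prediction.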
