Chapter Three: Google Technology


“Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results.... Fast crawling technology is needed to gather the Web documents and keep them up to date. Storage space must be used efficiently to store indices and, optionally, the documents themselves. The indexing system must process hundreds of gigabytes of data efficiently. Queries must be handled quickly, at the rate of hundreds to thousands per second.” – Sergey Brin and Lawrence Page, 1997.1

In the beginning, there was BackRub, the service that became Google. Today, Google is most closely associated with its PageRank algorithm. PageRank is a voting algorithm weighted for importance. The primary indicator of a Web page’s importance is the number of pages that link to it. Messrs. Brin and Page soon added another factor that voted for the importance of a Web page: the number of people who click on it. The more clicks a Web page received, the more weight that Web page was given. Over time, still other factors have been added to the PageRank algorithm; for example, the frequency with which content on a page changes.

Google’s PageRank technology is closely allied with Internet search. Voting algorithms are less effective in enterprise search, for instance. The attention given to Google and its search technology dominates popular thinking about the company. Google search is like a nova. The luminescence makes it difficult for the observer to see other aspects of the phenomenon clearly or easily.

1. From “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” www-db.stanford.edu/~backrub/google.html
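The voting idea behind PageRank can be sketched in a few lines of Python. This is an illustrative toy, not Google’s implementation; the damping factor and iteration count are conventional textbook choices, and the three-page link graph is invented for the example.

```python
# Toy PageRank: each page "votes" for the pages it links to, and a
# page's score is the weighted sum of the votes it receives.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # Votes arriving from every page q that links to p, each
            # vote diluted by q's number of outbound links.
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new_rank
    return rank

# A three-page web: A and C both link to B, so B ranks highest.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["A", "B"]})
```

Even this toy shows why the real computation is expensive: every iteration touches every link in the graph, and Google must iterate over billions of pages rather than three.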
Radiance aside, Google is a technology company.2 Some of that technology, as described in technical papers such as the earliest one, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” is demanding. Later papers such as “MapReduce: Simplified Data Processing on Large Clusters” can be a slow read.3 Because Google is, at bottom, technology, explaining what Google does in an easily digestible way is difficult. The diagram below provides an unauthorized snapshot of Google’s computing framework.

[Figure: Google’s computing framework, with components labeled a through d]

Important Google technologies that underlie this diagram of the Googleplex include: [a] modifications to Linux to permit large file sizes and other functions so as to accelerate the overall system; [b] a distributed architecture that allows applications and scaling to be “plugged in” without the type of hands-on set-up other operating systems require; [c] a technical architecture that is similar at every level of scale; [d] a Web-centric architecture that allows new types of applications to be built without a programming-language limitation.

2. The annex to this monograph contains a listing of more than 60 Google patents. The list is not all-inclusive; however, it does provide the patent number and a brief description for some of Google’s most important patents. The PageRank patent belongs to the trustees of Stanford University. Google’s patent efforts have focused on systems and methods for relevance, advertising, and other core foci of the company. Google is creating a patent fence to protect its interests.

3. Jeff Dean, former Alta Vista researcher and a Google senior engineer, has been an advocate of MapReduce. His most recent papers are available on his Web page at http://labs.google.com/people/jeff/.

Google’s technology has emerged from a series of continuous improvements, or what Japanese management consultants call kaizen. Each Google technical change may be inconsequential to the average user of Google.
But when taken as a whole, Google’s “technological advantage” comes from Google’s incremental innovations, clever adaptations of research-computing concepts, and Byzantine tweaks to Linux. Some day, a historian of technology will be able to identify, from the hundreds of improvements that Google has engineered in the last nine years, one or two that stand with PageRank as of major importance. Critics of Google will see that the company has grafted processes from many different sources onto its core technology. To illustrate, the structure of Google’s data centers and the messages passed to and from those data centers are in many ways a variant of grid computing.4 Google’s ability to read data from many computers simultaneously is reminiscent of BitTorrent’s technology.5 Google’s use of commodity or “white box” hardware in its data centers is an indication of Google’s hacker ethos. The use of memory and discs to store multiple copies of data comes from the frontiers of computing. Google’s approach to technology, then, is eclectic and in many ways represents a building-block approach to large-scale systems.

Google benefits from that eclecticism in several ways. First, Google’s computational framework delivers sizzling performance from low-cost hardware. Second, Google worked around the bottlenecks of such operating systems as Solaris, Windows Advanced Server, and off-the-shelf Linux. Third, Google took good programming ideas from other languages, implementing new functions and libraries to eliminate most of the manual coding required to parallelise an application across Google’s servers.6

According to Jeff Dean, one of Google’s senior engineers, “Google engineering is sort of chaotic.”7 This is neither surprising nor necessarily a negative. The Googleplex is a toy box for engineers and programmers. The tools are sophisticated. The challenges of the problems and peers make Google “the place to be” for the best and brightest technical talent in the world.
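The “eliminate the manual coding” point is the essence of MapReduce: a programmer writes only a map function and a reduce function, and the framework handles distribution across machines. A minimal sketch of the idea, using Python’s standard multiprocessing pool as a stand-in for Google’s cluster; the word-count example and function names are illustrative, not Google’s internal API.

```python
from multiprocessing import Pool
from collections import Counter
from functools import reduce

def map_phase(document):
    # Map: emit a partial word count for one document.
    return Counter(document.split())

def reduce_phase(a, b):
    # Reduce: merge two partial counts into one.
    return a + b

if __name__ == "__main__":
    docs = ["the web is large", "the web grows", "crawl the web"]
    # The pool distributes the map work; the programmer never writes
    # scheduling, communication, or failure-handling code.
    with Pool(2) as pool:
        partials = pool.map(map_phase, docs)
    totals = reduce(reduce_phase, partials, Counter())
    print(totals["web"])  # -> 3
```

Swap the three toy documents for billions of crawled pages and the two-process pool for thousands of commodity servers, and the programmer’s code is unchanged; that is the leverage the papers describe.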
The nature of creativity combined with Google’s approach to innovation makes it difficult to predict the next big thing from Google. Before reviewing selected parts of Google’s technology in somewhat more detail, the diagram “Google’s Computing Framework” provides an overview of the Googleplex and some of its technologies. These will be touched upon in this section.

4. Grid computing is applying resources from many computers in a network to a single problem or application. Google uses grid-like technology in its distributed computing system.

5. BitTorrent is a peer-to-peer file distribution tool written by programmer Bram Cohen in 2001. The reference implementation is written in Python and is released under the MIT License.

6. Google has anywhere from 100,000 to 165,000 or more servers. Servers are organized into clusters. Clusters may reside within one rack or across multiple racks of servers. Some Google functions are distributed across data centers.

7. From Dr Dean’s speech at the University of Washington in October 2003. See http://www.uwtv.org/programs/displayevent.asp?rid=2459.

PageRank requires a lot of computing cycles to work. When Google got underway in 1996, Messrs. Brin and Page had limited computing horsepower. In order to make PageRank work, they had to figure out how to run the PageRank algorithm on the garden-variety computers available to them. From the beginning – and this is an important issue with regard to Google’s almost-certain collision course with Microsoft – Google had to solve both software-engineering and hardware-engineering issues to make Google Search viable. In fact, when discussing Google technology, it is important to keep in mind that PageRank is important only because it can run quickly in the real world, not in a sterile computer lab illuminated with the blue glow of supercomputers.
The figure “Google’s Fusion: Hardware and Software Engineering” shows that Google’s technology framework has two areas of activity. There is the software engineering effort that focuses on PageRank and other applications. Software engineering, as used here, means writing code and thinking about how computer systems operate in order to get work done quickly. Quickly means the sub-one-second response times that Google is able to maintain despite its surging growth in usage, applications, and data processing.

Google’s Fusion: Hardware and Software Innovations

The Google phenomenon comes from the fusion that occurs when PageRank’s software engineering and hardware engineering interact. Google’s technology delivers supercomputer applications for mass markets.

The other effort focuses on hardware. Google has refined server racks, cable placement, cooling devices, and data center layout. The payoff is lower operating costs and the ability to scale as demand for computing resources increases. With faster turnaround and the elimination of such troublesome jobs as backing up data, Google’s hardware innovations give it a competitive advantage few of its rivals can equal as of mid-2005.

PageRank, with its layering of additional computations added over the years, is a software problem of considerable difficulty. The Google system must find Web pages and perform dozens, if not hundreds, of analyses of those Web pages. Consider the links pointing to a Web page. Google must keep track of them for more than eight billion Web pages. For a single Web page with one link pointing to it, the problem is trivial. One link equals one pointer. But what happens when a site has 10,000 links pointing to it? The problem becomes many times larger and more computationally demanding. Some of these links are likely to come from sites that have more traffic than others. Some of the links may come from sites that have spoofed Google for fun or profit.
The calculations required to sort out the “value” of each of these links add to the computational work associated with PageRank.
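The bookkeeping problem described above amounts to maintaining an inverted link index: for every page, the set of pages pointing at it, with a weight attached to each link. A toy sketch follows; the site names and per-source weights are hypothetical stand-ins for the traffic and trust signals the text mentions, not values Google uses.

```python
from collections import defaultdict

# Forward links as crawled: source page -> pages it points to.
crawl = {
    "blog.example": ["target.example"],
    "news.example": ["target.example", "other.example"],
    "spam.example": ["target.example"],
}

# Hypothetical per-source weights standing in for traffic and trust
# signals; a site that spoofs the index gets a near-zero vote.
weight = {"blog.example": 0.3, "news.example": 1.0, "spam.example": 0.01}

# Invert the graph: target page -> list of (source, weighted vote).
# A source splits its vote across all of its outbound links.
inlinks = defaultdict(list)
for source, targets in crawl.items():
    for target in targets:
        inlinks[target].append((source, weight[source] / len(targets)))

value = sum(vote for _, vote in inlinks["target.example"])
```

With three sources the inversion is trivial; maintained for eight billion pages, each with thousands of inbound links of differing value, it becomes the storage and computation burden the chapter describes.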