SPAA’21 Panel Paper: Architecture-Friendly Algorithms versus Algorithm-Friendly Architectures

Guy Blelloch, Computer Science, Carnegie Mellon University, Pittsburgh, PA, [email protected]
William Dally, NVIDIA, Santa Clara, CA, [email protected]
Margaret Martonosi, Computer Science, Princeton University, Princeton, NJ, [email protected]
Uzi Vishkin, University of Maryland Institute for Advanced Computer Studies (UMIACS), College Park, MD, contact author: [email protected]
Katherine Yelick, Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, [email protected]

ABSTRACT

The current paper provides preliminary statements of the panelists ahead of a panel discussion at the ACM SPAA 2021 conference on the topic: algorithm-friendly architectures versus architecture-friendly algorithms.

CCS CONCEPTS

Architectures; Design and analysis of algorithms; Models of computation

KEYWORDS

Parallel algorithms; parallel architectures

ACM Reference format:
Guy Blelloch, William Dally, Margaret Martonosi, Uzi Vishkin and Katherine Yelick. 2021. SPAA’21 Panel Paper: Architecture-friendly algorithms versus algorithm-friendly architectures. In Proceedings of the 33rd ACM Symposium on Parallelism in Algorithms and Architectures (SPAA ’21), July 6–8, 2021, Virtual Event, USA. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3409964.3461780

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. SPAA ’21, July 6–8, 2021, Virtual Event, USA. © 2021 Copyright is held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-1-4503-8070-6/21/07...$15.00. https://doi.org/10.1145/3409964.3461780

1 Introduction, by Uzi Vishkin

The panelists are:
- Guy Blelloch, Carnegie Mellon University
- Bill Dally, NVIDIA
- Margaret Martonosi, Princeton University
- Uzi Vishkin, University of Maryland
- Kathy Yelick, University of California, Berkeley

As Chair of the ACM SPAA Steering Committee and organizer of this panel discussion, I would like to welcome our diverse group of distinguished panelists. We are fortunate to have you. We were looking for a panel discussion of a topic central to SPAA that would benefit from the diversity of our distinguished panelists. We felt that the authentic diversity that each of the panelists brings is best reflected in preliminary statements, expressed before we start sharing our thoughts, and that it therefore merits the separate archival space that follows.

2 Preliminary statement, by Guy Blelloch

The Random Access Machine (RAM) model has served the computing community amazingly well as a bridge from algorithms, through programming languages, to machines. This in large part can be attributed to the superhuman achievements by computer architects in maintaining the RAM abstraction as technology has advanced. These efforts include a variety of approaches for maintaining the appearance of random access memory, including sophisticated caching and paging mechanisms, as well as many developments to extract parallelism from sequences of RAM instructions, including pipelining, register renaming, branch prediction, prefetching, out-of-order execution, etc. Based on this effort, the RAM instructions used in abstract algorithms closely match actual machine instructions and language structures. It is easy to understand, for example, how the algorithmic concept of summing the elements of a sequence can be converted to a for loop at the language level, and to a sequence of RAM instructions roughly consisting of a load, an add to a register, an increment of a register, a compare, and a conditional jump. The RAM abstraction has greatly simplified developing efficient algorithms and code for sequential machines, but more importantly has allowed us to educate innumerable students in the art of algorithm design, programming language design, programming, and application development. These students are now responsible for where we have reached in the field.

When analyzing algorithms in the abstract RAM, in all honesty, it does not tell us much about actual runtime on a particular machine. But that is not its goal; instead, the goal of such a model is to lead algorithm and application developers in the right direction with a concrete and easily understandable model that greatly narrows the search space of reasonable algorithms. The RAM by itself, for example, does not capture the locality that is needed to make effective use of caches. In many cases this locality is a second-order effect---for example, when comparing an exponential vs. a linear time algorithm. In some cases it is a first-order effect, but it is easy to add a one-level cache to the RAM model, and hundreds of algorithms have been developed in such a model. When algorithms developed in this model satisfy the property of being cache oblivious, they will also work effectively on a multilevel cache.

The RAM, however, is inherently sequential, so unfortunately it fails when it comes to parallelism. We need a replacement (or replacements) that supports parallelism. As with the RAM, it is extremely important that this replacement be simple so that it can be used to educate every computer scientist, as well as many others, about how to develop effective (parallel) algorithms and applications. As in the case of the RAM, supporting such simple models at the algorithmic and programming level will require superhuman achievements on the part of computer architects, some of which have already been achieved. To be clear, this does not mean that the instructions on the machine need to match or even have an obvious mapping from the model. There is plenty of room for languages to implement abstractions---for example, a scheduler that maps abstract tasks to actual processors. But it does mean there is some clear translation of costs from the model to the machine, even if not a perfect predictor of runtime. The goal, again, is to guide the algorithm designer and programmer in the right direction. As with extensions of the RAM, it is fine if the models have refinements that account for locality, for example, but these should be simple extensions.

At least for multicore machines, there are parallel models that are simple, use simple constructs in programming languages, and support cost mappings down to the machine level that reasonably capture real performance. This includes the fork-join work-depth (or work-span) model. There are even reasonably simple extensions that support accounting for locality, as well as asymmetry in read-write costs. It seems the emphasis should be on supporting such simple models by adapting the hardware when possible, rather than trying to push on the common user complicated and changing models capturing the technology du jour. Without such simple models we will not be able to educate the vast majority of the next generation of students on how to effectively design algorithms and applications for future machines.

2.1 Bio

Guy Blelloch is a Professor of Computer Science at Carnegie Mellon University. He received a BA in Physics and a BS in Engineering from Swarthmore College in 1983, and a PhD in Computer Science from MIT in 1988. He has been on the faculty at Carnegie Mellon since 1988, and served as Associate Dean of Undergraduate Studies from 2016 to 2020. His research contributions have been in the interaction of practical and theoretical considerations in parallel algorithms and programming languages. His early work on implementations and algorithmic applications of the scan (prefix sums) operation has become influential in the design of parallel algorithms for a variety of platforms. His work on the work-span (or work-depth) view for analyzing parallel algorithms has helped develop algorithms that are both theoretically and practically efficient. His work on the NESL programming language developed the ideas of program-based cost models and nested-parallel programming. His work on parallel garbage collection was the first to show bounds on both time and space. His work on graph-processing frameworks, such as Ligra, GraphChi, and Aspen, has set a foundation for large-scale parallel graph processing. His recent work on analyzing the parallelism in incremental/iterative algorithms has opened a new view of parallel algorithms, i.e., taking sequential algorithms and understanding that they are actually parallel when applied to inputs in a random order. Blelloch is an ACM Fellow.

3 Preliminary statement, by Bill Dally: Function and space-time mapping (F&M) for harmonious algorithms and architectures

It is easy to make architectures and algorithms mutually friendly once we replace two obsolete concepts, centralized serial program execution and the RAM or PRAM model of execution cost, with new models that are aligned with modern technology. With the appropriate models we can design both programmable and domain-specific architectures that are extremely efficient and easily understood targets for programmers. At the same time, we can design algorithms with predictable and high performance on these architectures.

The laws of physics dictate that modern computing engines (programmable and special purpose) are largely communication limited. Their performance and energy consumption are almost entirely determined by the movement of data. This includes memory access. Reading or writing a bit-cell is extremely fast and efficient. All the cost in accessing memory is data movement. Arithmetic and logical operations are much less expensive. In 5nm technology, an add costs about 0.5 fJ/bit and a 32-bit add takes about 200 ps.

…such as map, reduce, gather, scatter, and shuffle can be used by many programs to realize common communication patterns. Programmers that don’t want to bother with mapping can use a default mapper – with results no worse than with today’s…
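Dally's point that computation is cheap relative to data movement can be made concrete with a rough energy budget. A minimal sketch: the add cost comes from the figures he quotes above; the wire and DRAM energies are illustrative order-of-magnitude assumptions of ours, not measurements for any particular 5nm process.

```python
# Rough energy comparison: arithmetic vs. data movement.
# ADD_FJ_PER_BIT is from the statement above; the movement figures
# are assumed ballpark values, used only for illustration.
ADD_FJ_PER_BIT = 0.5        # ~0.5 fJ/bit for an add in 5nm (from the text)
WIRE_FJ_PER_BIT_MM = 25.0   # assumed on-chip wire energy per bit per mm
DRAM_FJ_PER_BIT = 5000.0    # assumed off-chip DRAM access energy per bit

bits = 32
add_fj = ADD_FJ_PER_BIT * bits           # energy of one 32-bit add
move_1mm_fj = WIRE_FJ_PER_BIT_MM * bits  # energy to move those 32 bits 1 mm
dram_fj = DRAM_FJ_PER_BIT * bits         # energy to fetch them from DRAM

print(f"32-bit add:       {add_fj:.0f} fJ")
print(f"move 32b by 1 mm: {move_1mm_fj:.0f} fJ ({move_1mm_fj / add_fj:.0f}x the add)")
print(f"32b DRAM access:  {dram_fj:.0f} fJ ({dram_fj / add_fj:.0f}x the add)")
```

Under these assumptions, moving an operand even one millimeter costs tens of adds, and a DRAM access costs thousands, which is the sense in which performance and energy are determined by data movement rather than arithmetic.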
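Blelloch's summation example from Section 2 can be sketched directly. Each iteration of the loop below performs the RAM instruction sequence he describes: a load of the next element, an add into an accumulator register, an index increment, a compare, and a conditional jump. The function and variable names are ours, chosen for illustration.

```python
def ram_sum(a):
    """Sum a sequence the way a RAM executes it, one element per iteration."""
    s = 0            # accumulator register
    i = 0            # index register
    n = len(a)
    while i < n:     # compare i with n; conditional jump out of the loop
        s = s + a[i]  # load a[i]; add it into the accumulator
        i = i + 1     # increment the index register
    return s

ram_sum([1, 2, 3, 4])  # → 10
```

The point of the abstraction is exactly this close correspondence: the for/while loop at the language level and the machine's instruction stream are near-transliterations of each other.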
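The fork-join work-depth (work-span) model Blelloch points to can be illustrated on the same summation problem. The divide-and-conquer sketch below is our own construction, not from the paper: it counts additions, where work is the total number of operations across both branches and span is the length of the longest chain of dependent operations when the two recursive calls run in parallel.

```python
def work_span_sum(a, lo, hi):
    """Fork-join sum of a[lo:hi]; returns (sum, work, span), counting additions.

    In a real fork-join runtime the two recursive calls would execute in
    parallel; here we only account for their costs: work adds up across both
    branches, while span takes the max of the two branches plus the join.
    """
    if hi - lo == 1:
        return a[lo], 0, 0                  # a leaf performs no addition
    mid = (lo + hi) // 2
    s1, w1, d1 = work_span_sum(a, lo, mid)  # fork: left half
    s2, w2, d2 = work_span_sum(a, mid, hi)  # fork: right half
    # join: one addition combines the two halves
    return s1 + s2, w1 + w2 + 1, max(d1, d2) + 1
```

For n elements this gives n - 1 additions of work but only about log2(n) span: linear work with logarithmic depth, which is the kind of simple cost accounting that lets the model guide algorithm designers toward solutions that scale with core count.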