Conference on File and Storage Technologies (FAST '02)


;login: THE MAGAZINE OF USENIX & SAGE
April 2002 • Volume 27 • Number 2
inside: CONFERENCE REPORTS: CONFERENCE ON FILE AND STORAGE TECHNOLOGIES (FAST '02)
The Advanced Computing Systems Association & The System Administrators Guild

conference reports

This issue's reports focus on the Conference on File and Storage Technologies (FAST 2002), held in Monterey, California, January 28-30, 2002.

OUR THANKS TO THE SUMMARIZERS:
Ismail Ari
Scott Banachowski
Zachary Peterson

CONFERENCE ON FILE AND STORAGE TECHNOLOGIES
MONTEREY, CALIFORNIA
JANUARY 28-30, 2002

KEYNOTE I
STORAGE: FROM ATOMS TO PEOPLE
Robert Morris, IBM Almaden Research Center
Summarized by Zachary Peterson

Dr. Morris began by defining the importance and motivation of the FAST conference. Storage is getting faster and larger; in fact, it has increased by 14 orders of magnitude. However, these increases are only interesting when they aid computer scientists. Morris asserted that "storage determines the way we use computers" and is therefore a technology worthy of investigation, the most important existing technology being the hard disk drive.

Morris enumerated the challenges that face the disk drive and how IBM Research plans to address them. The greatest of these challenges is the hard physical limit at which the magnetic properties used to store data no longer hold, called the superparamagnetic limit, a limit that has been passed and re-predicted a few times. IBM has pushed this limit out by various means of manipulating the physical organization of the magnetic media. Making the bits smaller and more square, combined with a layering of magnetic substrates, enables current production drives to achieve greater capacities with a higher signal-to-noise ratio. IBM hopes to continue this trend in its future production disks by reducing the size of bits to a single grain and by using electron beam lithography to create very small and accurate components.

IBM also looks beyond the standard disk drive architecture, and the limitations inherent in such a design, for the future of storage. The disk arm is too confining: "We need disk fingers," said Morris. He went on to introduce microelectromechanical systems, or MEMS-based devices. One MEMS device would contain many read/write heads operating in parallel on a single media surface. IBM has produced a prototype of such a device, called "Millipede," that uses an array of heated heads to make pits in a polymer media surface.

Morris concluded by charging the attending researchers of futuristic storage to consider an ideal case where storage devices will be self-organizing, self-optimizing, and self-protecting. He believes the IBM IceCube is the beginning of such devices. Many IceCubes are placed physically contiguous with one another in three dimensions, reducing the space needed to manage a large storage array. When an IceCube fails, it is simply left in the structure, letting the other devices recover around it. This is the first step IBM Research is making toward self-managing storage, and it hopes to continue this trend through an ideology called "autonomic computing." This concept transcends storage and will affect all levels of context-based computing. In general, researchers need to move toward an environment where systems are easy to use and easy to maintain for the end user, while still providing the performance and capacity gains seen in the past.
SESSION: SECURE STORAGE
Summarized by Zachary Peterson

STRONG SECURITY FOR NETWORK-ATTACHED STORAGE
Ethan Miller, Darrell Long, University of California, Santa Cruz; William Freeman, TRW; Benjamin Reed, IBM Research

Ethan Miller presented a set of security protocols that provide an on-disk method of securing data in a network-attached storage system. With strong security, even someone who absconds with a disk cannot gain access to the data. Additionally, the presence of an authentication scheme means that maliciously changed data can be detected.

Miller presented three schemes of security, each offering higher levels of protection with slightly decreased system performance. In scheme 1, each block is secured using public-key encryption and signed using a hash function. Scheme 2 extends this model to include an HMAC for added authentication and security, but increases processing time at the client and the server. Scheme 3 avoids the slow public-key encryption methods used in schemes 1 and 2, replacing them with a secure keyed-hash approach. Results of these three schemes, compared to a baseline system with no security, showed that the public-key encryption schemes suffer significantly on sequential I/O operations. However, the last scheme shows only slightly degraded performance, about 1% to 20%, compared to the baseline. This work demonstrates that on-disk security and authentication for network-attached storage can be achieved efficiently using a keyed-hash approach.
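To make the keyed-hash idea in scheme 3 concrete, here is a minimal sketch, not the authors' protocol, of how a writer and a disk sharing a secret key could tag and verify blocks so that tampering is detectable without any public-key operations. It uses Python's standard hmac and hashlib modules; the block size, the choice of SHA-1, and the way the key is obtained are assumptions made only for this example.

```python
import hashlib
import hmac
import os

BLOCK_SIZE = 4096  # assumed block size, for illustration only


def tag_block(key: bytes, block_id: int, data: bytes) -> bytes:
    """Compute a keyed hash (HMAC) over the block number and its contents.

    Binding the block number into the MAC keeps a valid block from being
    replayed at a different location on the disk.
    """
    return hmac.new(key, block_id.to_bytes(8, "big") + data, hashlib.sha1).digest()


def verify_block(key: bytes, block_id: int, data: bytes, tag: bytes) -> bool:
    """Recompute the MAC and compare it in constant time."""
    return hmac.compare_digest(tag_block(key, block_id, data), tag)


# A writer tags a block; any later reader holding the key can detect changes.
key = os.urandom(20)  # shared secret; key distribution is not shown here
block = b"account ledger entry".ljust(BLOCK_SIZE, b"\0")
tag = tag_block(key, 42, block)

assert verify_block(key, 42, block, tag)
assert not verify_block(key, 42, block.replace(b"ledger", b"LEDGER"), tag)
```

Because an HMAC costs only a few hash computations per block rather than a modular exponentiation, it is the kind of operation a client or disk can afford on every I/O, which is consistent with the small slowdown reported for scheme 3.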
A FRAMEWORK FOR EVALUATING STORAGE SYSTEM SECURITY
Erik Riedel, Mahesh Kallahalla, and Ram Swaminathan, HP Labs

Erik Riedel asserted that there is a need for a quantitative evaluation of storage security, because storage has unique properties that differentiate it from other security applications, such as networks. Properties such as sharing, distribution, and persistence make applying network security ideas unsatisfactory. He went on to develop a framework of security variables, such as user operations, encryption methods, and attacks, that, when permuted, expose the benefits and drawbacks of categories of existing storage security. This framework is especially useful for comparing aspects of security and performance across various methods of security. Riedel then showed some trace-driven simulator results that, when applied to the common framework, illustrate that encrypt-on-disk systems are preferable to encrypt-on-wire systems, providing the best security for the least effort. The framework and the analysis can be applied to answer questions beyond this particular result and in different environments.

ENABLING THE ARCHIVAL STORAGE OF SIGNED DOCUMENTS
Petros Maniatis and Mary Baker, Stanford University

Consider a situation where two people agree to a contract; the contract is digitally signed by each person and archived. Significantly later, one of the signers challenges the contract. What problems arise with the passage of time? Petros Maniatis addressed these issues, providing one possible solution that extends traditional archival storage to support the archiving of long-term contracts.

As time passes, issues arise that make it difficult to ensure the long-term validity of signed data, the sensitivity of keys being the most outstanding issue. Keys are lost, names are changed, and digital certificates expire. This raises two questions: "Can one trust a 30-year-old signature key?" and "How does one verify such a signature?" Maniatis introduced KASTS, a key archival service that uses time stamping and timed storage of keys as an answer to these questions. KASTS uses two main components, a Time-Stamping Server (TSS) and a Key Archival Service (KAS), to establish a time of signing and an effective method for verifying old signatures. KASTS uses a versioned, balanced tree for the public keys of signatures. Maniatis argued that this structure is a feasible and effective method of storing keys. For more information, refer to http://identiscape.stanford.edu/.
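The essence of the approach is that trust in an old signature comes from the archive plus the timestamp, not from the key still being valid today. The sketch below illustrates the shape of such a check; it is only an illustration under assumed names and types, not the KASTS design or its API, and the verify_with callback stands in for whatever signature algorithm was originally used.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, List


@dataclass
class KeyRecord:
    """One archived version of a signer's public key (hypothetical structure)."""
    public_key: bytes
    valid_from: datetime
    valid_to: datetime


def verify_old_signature(
    document: bytes,
    signature: bytes,
    signing_time: datetime,                # established by a time-stamping service
    key_history: List[KeyRecord],          # versions kept by a key archival service
    verify_with: Callable[[bytes, bytes, bytes], bool],  # (public_key, document, signature)
) -> bool:
    """Accept the signature only if it verifies under a key version that the
    archive records as valid at the document's established time of signing."""
    for record in key_history:
        if record.valid_from <= signing_time <= record.valid_to:
            if verify_with(record.public_key, document, signature):
                return True
    return False
```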
SESSION: PERFORMANCE AND MODELING
Summarized by Scott Banachowski

WOLF – A NOVEL REORDERING WRITE BUFFER TO BOOST THE PERFORMANCE OF LOG-STRUCTURED FILE SYSTEMS
Jun Wang and Yiming Hu, University of Cincinnati

Log-structured file systems make good use of disk bandwidth by combining several writes into a single sequential disk access. However, one shortcoming of log-structured file systems is the overhead incurred from cleaning. Cleaning is the process of reclaiming space in a segment occupied by obsolete blocks; by rewriting the segment's live blocks to the log, the entire segment is freed.

Jun Wang presented a method (called WOLF) for reducing the cleaning overhead of log-structured file systems. The key idea comes from the observation that file accesses form a bimodal distribution: some files are repeatedly rewritten while others rarely change. If data is classified according to this bimodal distribution when written to disk, each type of data can be stored in separate segments. Over time, segments of rewritten data will have almost all their blocks quickly invalidated, and segments of infrequently modified data will accumulate few holes.

WOLF uses an adaptive grouping algorithm to identify active and inactive data and assigns the data to separate log segments. Using this method, rewritten data may be reordered into a bimodal distribution of segments, leaving little work for the cleaner. The algorithm tracks segment buffer block accesses with reference counters over a time window, initialized to 10 minutes, to determine which kind of segment the data belongs to.

Wang described the performance of a WOLF implementation adapted from the Sprite LFS source. The metric used in the measurements was overall write cost, a value that incorporates garbage-collection overhead by including the expense of reading and rewriting cleaned blocks.

[Only fragments of the remaining summaries in this session survive in this extract:]

"...model with a network model. The simulation, configured with 16-disk RAIDs, was fed a synthetic workload and a Web server trace. Forney found that their implementation performed similarly to the LANDLORD algorithm, a performance comparison rather than an implementation comparison. The simulation showed that their policy alleviated dra..."

"...responsible for simulating the device on a bus and translating bus signals to simulator requests; the data manager uses a RAM-based cache to hold the data stored on the device; and the timing manager keeps the system state, timing information, and the simulation engine. An obvious limitation of a TASE system is that it must be capable of responding to..."
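Returning to the WOLF summary above: the adaptive grouping it describes, counting references to buffered blocks over a time window and steering frequently rewritten and rarely rewritten blocks into separate log segments, can be sketched roughly as below. The names and the hot threshold are assumptions made for illustration; only the 10-minute window comes from the summary. The payoff follows from the classic LFS write-cost accounting, in which cleaning segments with live-data utilization u costs roughly 2/(1 - u) bytes of I/O per byte of new data, so segments that die almost completely (or hardly at all) are cheap to clean.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 10 * 60   # time window taken from the summary
HOT_THRESHOLD = 2          # assumed: a block rewritten twice in a window is "active"


class SegmentBuffer:
    """Toy write buffer that groups blocks into active and inactive log segments."""

    def __init__(self):
        self.ref_counts = defaultdict(int)   # block id -> rewrites seen this window
        self.window_start = time.monotonic()
        self.active_segment = []             # frequently rewritten blocks
        self.inactive_segment = []           # rarely rewritten blocks

    def write(self, block_id, data):
        self._maybe_reset_window()
        self.ref_counts[block_id] += 1
        # Classify on the reference count seen so far in the current window.
        if self.ref_counts[block_id] >= HOT_THRESHOLD:
            self.active_segment.append((block_id, data))
        else:
            self.inactive_segment.append((block_id, data))

    def _maybe_reset_window(self):
        if time.monotonic() - self.window_start >= WINDOW_SECONDS:
            self.ref_counts.clear()
            self.window_start = time.monotonic()


# Active segments fill with blocks that will soon be overwritten and so die
# together, while inactive segments accumulate few holes; both leave little
# work for the cleaner.
```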