Curriculum Vitae


Byung S. Lee

CURRENT AFFILIATION
Professor, Department of Computer Science (primary) and Department of Electrical and Biomedical Engineering (secondary), College of Engineering and Mathematical Sciences, University of Vermont, Burlington, VT 05405, USA.
Phone: (802) 656-1919. Fax: (802) 656-0696. Email: [email protected]. Home page: http://www.cems.uvm.edu/~bslee

EDUCATION
Ph.D., Electrical Engineering/Computer Science (Database Systems), Stanford University, Palo Alto, CA, January 1991. Dissertation: Instantiating Objects from a Remote Relational Database through Views. (Advisor: Gio Wiederhold, Computer Science)
M.S., Electrical Engineering (Communication Systems), KAIST, Daejeon, South Korea, February 1982. Thesis: Implementation of a Multi-rate Speech Digitizer. (Advisor: Chong-Kwan Un, Electrical Engineering)
B.S., Electronics, Seoul National University, Seoul, South Korea, February 1980.

EMPLOYMENT
September 1999–present: Department of Computer Science, University of Vermont, Burlington, VT 05405, USA. (Assistant Professor; Associate Professor; Professor)
June 1998–August 1998: Computer Science Department, Dartmouth College, Hanover, NH 03755-3510, USA. (Visiting Assistant Professor)
September 1993–August 1999: Graduate Programs in Software Engineering, University of St. Thomas, 2115 Summit Avenue, St. Paul, MN 55105, USA. (Assistant Professor)
October 1992–August 1993: Datacom Global Communications, Inc., Princeton, NJ 08544, USA. (Supervisor)
December 1990–October 1992: Bell Communications Research (now Telcordia Technologies), 444 Hoes Lane, Piscataway, NJ 08854-4157, USA. (Member of Technical Staff)
January 1990–August 1990: Hewlett-Packard Research Laboratories, 1501 Page Mill Road, Palo Alto, CA 94305, USA. (Practical Trainee)
June 1986–December 1990: Computer Systems Laboratory, Stanford University, Palo Alto, CA 94305, USA. (Graduate Research Assistant)
March 1980–July 1985: Technology Research Center, Goldstar Electric (now LG Communications), Anyang, South Korea. (Engineer; Principal Engineer and Team Lead)

TEACHING
September 1999–present: (University of Vermont)
o CS64: Discrete Mathematics (undergraduate students; required).
o CS121: Computer Organization (undergraduate students; required).
o CS124: Data Structures (undergraduate students; required).
o CS204: Database Systems (upper-level undergraduate and first-year graduate students; elective).
o CS224: Algorithm Design and Analysis (upper-level undergraduate and first-year graduate students; elective for undergraduate students, required for graduate students).
o CS331/295: Advanced Database Systems (CS331 for graduate students, CS295 for undergraduate students; elective).
June 1998–August 1998: (Dartmouth College)
o CS37: Computer Architecture (undergraduate students; required).
September 1993–June 1999: (University of St. Thomas)
o CSIS530: Database Management System and Design (graduate students; required).
o CSIS531: Database Management Concepts and Applications (graduate students; required). Equivalent to CSIS530, designed for more application-oriented, less technology-oriented students.
o CSIS532: Distributed Database Management Systems (graduate students; elective).
o CSIS544: Object-Oriented Databases (graduate students; elective).

RESEARCH PROJECTS
September 1999–present: (University of Vermont)
o Trajectory analysis of signals from IoT: methods and applications – example applications include environmental watershed monitoring and smart-building occupant co-work pattern analysis.
o Anomaly detection over data streams: methods and applications – example applications include patient health monitoring, environmental health monitoring, and civil infrastructure health monitoring.
o Online social network data analytics: inter-user influence modeling, topical influence modeling, hashtag clustering, geo-social network mining.
o Event processing: event stream processing and complex event processing; causal modeling and query processing over event streams.
o Stream query processing: continuous aggregation join queries over data streams, distributed join query optimization, adaptive-size reservoir sampling over data streams, spatiotemporal join processing over continuous location data streams, temporal query processing over data streams.
o Approximate query evaluation using forecasting techniques: selectivity estimation in query processing; QoS-driven data aggregation in sensor networks; periodic pattern mining from streaming time series.
o Predictive modeling in databases: self-tuning cost modeling of user-defined functions; workload-aware multidimensional histograms.
o XML: XML element numbering; large-scale XML query processing using information retrieval techniques.
o Information retrieval: combining document rankings from multiple search engines; Boolean text search query optimization.
o Data mining: data clustering using a multidimensional index; mining partial periodic correlations from time series.
o Temporal aggregations using a multidimensional index.
o Object-relational databases: nested object selectivity; partial rollback schemes.
o Web caching: Time-to-Live (TTL) determination.
o Approximate ad-hoc query support for scientific simulation mesh data: data model and query language; system architecture.
September 1993–June 1999: (University of St. Thomas)
o Full-text indexing systems: a Standard Generalized Markup Language (SGML) benchmark test using the Oracle ConText and Open Text systems.
o Object-oriented databases: feasibility assessment as a repository for SGML documents; object class normalization in schema design; refined object schema mapping from Enhanced Entity-Relationship (EER) schemas.
November 1992–July 1993: (Datacom Global Communications)
o Electronic Data Interchange (EDI): development of EDI systems for the MS-DOS, Stratus/VOS, and UNIX operating systems; development of an EDI gateway prototype with a shared-memory architecture.
December 1990–October 1992: (Bell Communications Research; now Telcordia Technologies)
o Object-oriented database management systems (OODBMSs): development of a telephony benchmark suite for evaluating OODBMSs; assessment of the features of SIM (a semantic DBMS) and Itasca (an OODBMS).
o Heterogeneous distributed database integration: building a uniform access interface to remote databases in Oracle, Ingres, Sybase, and RDB.
August 1985–December 1990: (Stanford University)
o Remote data access: development of a Common Lisp language interface to the Iris object-oriented database system (for the Hewlett-Packard Palo Alto research center) and to the Sybase DataServer (for a Stanford knowledge systems project); development of an Interlisp interface to a remote SunUnify relational database server (for a Stanford University medical expert systems project).
o Expert systems: performance analysis of the blackboard control architecture (BB-1) – an opportunistic knowledge-based expert system – for the Stanford Knowledge Systems Laboratory.
March 1982–July 1985: (Goldstar Electric Company; now LG Communications)
o Army battery automation (in collaboration with the Agency for Defense Development): development of combat communication and control (C3) systems, including a 155mm howitzer battery firing data calculator, a digital message device, and a ground data unit; a tactical fire control computer system.
March 1980–February 1982: (Korea Advanced Institute of Science and Technology)
o Digital signal processing and voice coding: development of a multi-rate speech digitizer.

STUDENT/POSTDOC RESEARCH SUPERVISION
Postdoctoral research
o Sang-Pil Kim, Finding Twitter Users Compatible with a New Article. October 2015 – March 2017.
o Zhen He, Cost and Selectivity Modeling of User-Defined Functions. March 2003 – November 2004.
Doctoral dissertation
o Ali Javed, Spatiotemporal Trajectory Analysis from Hydrological Storm Event Data. February 2018 – present.
o Saurav Acharya, Incremental Causal Network Construction over Event Streams. September 2009 – October 2014 (graduated).
o Sasi Kunta, Cellular Automata Based Event Stream Processing. September 2008 – May 2010 (deceased).
o Mohammed Al-Kateb, Adaptive-Size Reservoir-Based Sampling and Temporal Coalescing over Data Streams. January 2005 – May 2011 (graduated).
o Tri Tran, Efficient Evaluation of Join Queries over Data Streams. July 2004 – October 2010 (graduated).
Master's thesis
o Ali Javed, A Hybrid Approach to Semantic Hashtag Clustering in Social Media. June 2015 – May 2016 (graduated).
o Qiang (AJ) Jing, Event Detection in Binary Sensor Networks. (Co-advised with Professor Sean Wang.) Spring 2005 – Spring 2007.
o Dennis Fuchs, A Quantized Histogram for Multidimensional Selectivity Estimation. September 2003 – May 2004 (graduated).
o Songtao Jiang, Modeling the Cost of Spatial Search Operators Using Nonparametric Regression. September 2002 – October 2003 (graduated).
o Jiangyan He, Combined Relevance Ranking of Documents. April 2002 – May 2005 (graduated).
o Li Chen, QoS Multicast Routing and Protection Planning in Optical Networks. (Co-advised with Professor Xue.) June 2001 – May 2002 (graduated).
o Vinod Kannoth, Regression-Based Cost Modeling of User-Defined Functions in Object-Relational Database Management Systems. June 2000 – May 2001 (graduated).
o Kwok Yu, Object-Oriented Databases for SGML Document Repository. January 1998 – May 1999 (graduated).
o Michael R. Olson, SGML Benchmark Application on the ObjectStore Object-Oriented DBMS. August 1995 – December 1995 (graduated).
Master's project
o Jack Houk, Mobile ECG Anomaly Detection Using Long Short-Term Recurrent Neural Network, June
Recommended publications
  • What Is Your Software Worth? (Gio Wiederhold, Stanford University, April 2007)
    What is Your Software Worth? © Gio Wiederhold, Stanford University, April 2007. A shorter version of this paper was published in the Comm. of the ACM, Vol. 49, No. 9, Sep. 2006 [Wiederhold:06].

    Abstract: This article presents a method for valuing software, based on the income that use of that software is expected to generate in the future. It applies well-known principles of intellectual property (IP) valuation, sales expectations, software maintenance, product growth, discounting to present value, and the like, always focusing on the specific issues that arise when the benefits of software are to be analyzed. An issue not dealt with in the literature on valuing intangibles is that software is continually upgraded. Applying depreciation schedules is the simple solution, but depreciation is taken by purchasers and does not represent the actual diminution of the inherent IP of software at the supplier. A novel approach, which considers ongoing maintenance and its effects, is presented here. All steps of the process are presented and then integrated via a simple quantitative example. Having a quantitative model on a spreadsheet allows exploration of alternatives. As an example, we evaluate a service business model alternative. Some conclusions are drawn that reflect on academic and business practice.

    1. Introduction. There exists a voluminous literature on estimation of the cost of producing software, but that literature largely ignores the benefits of using that software [Boehm:81, 00]. Even software engineering management approaches termed `Earned Value Management' deal only with expenses within a development schedule [Abba:97]. While we, as software creators, believe that what we produce is valuable, we are rarely called upon to quantify its benefits [GarmusH:01].
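    The discounting arithmetic the abstract refers to can be made concrete. Below is a minimal sketch in Python of valuing software as the present value of a projected income stream whose unmaintained value decays over time; the growth, decay, and discount rates are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of income-based software valuation: project yearly
# income, let value decay without fresh maintenance, and discount each
# year back to present value. All rates below are assumptions.

def present_value(incomes, discount_rate):
    """Discount a stream of yearly incomes back to today."""
    return sum(income / (1 + discount_rate) ** year
               for year, income in enumerate(incomes, start=1))

growth, decay, years = 0.10, 0.15, 10          # hypothetical parameters
projected = [1_000_000 * (1 + growth) ** t * (1 - decay) ** t
             for t in range(years)]

print(f"Estimated value: ${present_value(projected, 0.08):,.0f}")
```

    Re-running with different rates mimics the spreadsheet exploration of alternatives that the abstract describes.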
  • SIMPLIcity: Semantics-Sensitive Integrated Matching for Picture Libraries
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 23, NO. 9, SEPTEMBER 2001. SIMPLIcity: Semantics-Sensitive Integrated Matching for Picture LIbraries. James Z. Wang, Member, IEEE, Jia Li, Member, IEEE, and Gio Wiederhold, Fellow, IEEE.

    Abstract: The need for efficient content-based image retrieval has increased tremendously in many application areas such as biomedicine, military, commerce, education, and Web image classification and searching. We present here SIMPLIcity (Semantics-sensitive Integrated Matching for Picture LIbraries), an image retrieval system which uses semantics classification methods, a wavelet-based approach for feature extraction, and integrated region matching based upon image segmentation. As in other region-based retrieval systems, an image is represented by a set of regions, roughly corresponding to objects, which are characterized by color, texture, shape, and location. The system classifies images into semantic categories, such as textured-nontextured and graph-photograph. Potentially, the categorization enhances retrieval by permitting semantically-adaptive searching methods and narrowing down the searching range in a database. A measure for the overall similarity between images is developed using a region-matching scheme that integrates properties of all the regions in the images. Compared with retrieval based on individual regions, the overall similarity approach 1) reduces the adverse effect of inaccurate segmentation, 2) helps to clarify the semantics of a particular region, and 3) enables a simple querying interface for region-based image retrieval systems. The application of SIMPLIcity to several databases, including a database of about 200,000 general-purpose images, has demonstrated that our system performs significantly better and faster than existing ones. The system is fairly robust to image alterations.
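    As a rough illustration of the integrated region matching idea, the sketch below greedily matches region feature vectors between two images, weighting each match by region significance (e.g., area fraction). It is a simplification for intuition, not the paper's exact IRM definition.

```python
import numpy as np

def irm_distance(regions_a, sig_a, regions_b, sig_b):
    """Greedy significance-weighted matching between two region sets.

    regions_*: (n, d) arrays of region feature vectors.
    sig_*: region significances (e.g., area fractions) summing to 1.
    Returns an overall image-to-image distance.
    """
    sig_a, sig_b = sig_a.astype(float).copy(), sig_b.astype(float).copy()
    # Pairwise distances between every region of A and every region of B.
    d = np.linalg.norm(regions_a[:, None, :] - regions_b[None, :, :], axis=2)
    total = 0.0
    # Most-similar pairs get matched first ("highest priority").
    for flat in np.argsort(d, axis=None):
        i, j = np.unravel_index(flat, d.shape)
        w = min(sig_a[i], sig_b[j])     # match as much significance as remains
        if w <= 0:
            continue
        total += w * d[i, j]
        sig_a[i] -= w
        sig_b[j] -= w
        if sig_a.sum() <= 1e-12:        # all significance matched
            break
    return total

# Toy example: two images with two regions each.
a = np.array([[0.2, 0.5], [0.8, 0.1]]); sa = np.array([0.6, 0.4])
b = np.array([[0.25, 0.45], [0.7, 0.2]]); sb = np.array([0.5, 0.5])
print(irm_distance(a, sa, b, sb))
```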
  • Intellectual Capital: Software Innovation and Its Role in National Economies
    Intellectual Capital: Software Innovation and Its Role in National Economies. Gio Wiederhold, Stanford University. http://i.stanford.edu/~gio

    Abstract: Software has invaded all aspects of our world. It can no longer just be viewed as a fascinating technology. Software, and the products that depend on it, from watches to aircraft, social interactions, and sharing services, comprise a large fraction of modern commerce. The creators and the intellectual property they generate, exploit, and maintain comprise the intellectual capital of our high-technology industry, an asset that competes with the financial capital that traditional manufacturing industries rely on. I will present the flow of innovation into our national economies. Rights to profit from intellectual property are poorly documented and are easily transferred among countries. The importance of our intellectual capital is underestimated by economists and planners because the 'Big Data' they access is primarily from financial-oriented sources. As a result, governmental policies to improve economic activity and the welfare of its people are often naïve and sometimes wrong. In this world computing experts have roles beyond the base technology.

    Gio's Bio: Gio Wiederhold was born in Italy, educated in Germany and The Netherlands, moving to the US in 1958. He started with numerical computing at SADTC in Holland and adapted his efforts as computing technology progressed into more areas. Gio obtained a PhD in 1976 and became a professor at Stanford University. During a three-year assignment at DARPA he initiated the Digital Library program, funding research that led, among others, to Google.
  • External Letter Template for All Areas
    Institut für Angewandte Informatik und Formale Beschreibungsverfahren (Institute of Applied Informatics and Formal Description Methods). Colloquium in Applied Informatics. Intellectual Capital: Software Innovation and Its Role in National Economies. Gio Wiederhold, Professor Emeritus, Stanford University.

    Abstract: Software has invaded all aspects of our world. It can no longer just be viewed as a fascinating technology. Software, and the products that depend on it, from watches to aircraft, social interactions, and sharing services, comprise a large fraction of modern commerce. The creators and the intellectual property they generate, exploit, and maintain comprise the intellectual capital, an asset that competes with the financial capital that traditional manufacturing industries rely on. I will present the flow of innovation into our national economies. Rights to profit from intellectual property are poorly documented and are easily transferred among countries. The importance of our intellectual capital is underestimated by economists and planners because the 'Big Data' they access is primarily from financial-oriented sources. As a result, governmental policies to improve economic activity and the welfare of its people are often naïve and sometimes wrong. In this world computing experts have roles beyond the base technology.

    Biography: Gio Wiederhold was born in Italy, educated in Germany and the Netherlands, moving to the US in 1958. He obtained a PhD from the Univ. of California, San Francisco and became a professor at Stanford University in 1976. During a three-year assignment at DARPA (1991-1994) he initiated the Digital Library program, funding research that led, among others, to Google. Since his formal retirement in 2001 he has served as a government consultant on issues of software exports and their value.
  • A Scalable Integrated Region-Based Image Retrieval System
    A SCALABLE INTEGRATED REGION-BASED IMAGE RETRIEVAL SYSTEM. Yanping Du and James Z. Wang, The Pennsylvania State University, University Park, PA 16801.

    ABSTRACT: In this paper, we present a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images that incorporates properties of all the regions in the images by a region-matching scheme. The algorithm has been implemented as a part of our experimental SIMPLIcity image retrieval system and tested on large-scale image databases of both general-purpose images and pathology slides. Experiments have demonstrated that this technique maintains the accuracy of the original system while reducing the matching time significantly.

    [From the body:] ...image-to-image matching can be performed. The overall similarity approach reduces the adverse effect of inaccurate segmentation, helps to clarify the semantics of a particular region, and enables a simple querying interface for region-based image retrieval systems. Experiments have shown that IRM is comparatively more effective and more robust than many existing retrieval methods. Like other region-based systems, the SIMPLIcity system is a linear matching system. To perform a query, the system compares the query image with all images in the same semantic class. In this paper, we present an enhancement to the SIMPLIcity system for handling image libraries with millions of images. The targeted applications include Web image retrieval and biomedical image retrieval. Region features of images in the same semantic class are clustered automatically using a statistical clustering method.
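    The scaling step described here (cluster signatures offline, then compare a query only against images in nearby clusters instead of scanning the whole library) can be sketched as follows. The fixed-length signature vectors, cluster count, and probe count are hypothetical stand-ins for the paper's region features, not its actual parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

# Offline: cluster per-image signatures so a query probes only a few
# clusters. Random vectors stand in for real region-feature signatures.
rng = np.random.default_rng(0)
signatures = rng.normal(size=(10_000, 32))      # one vector per image

index = KMeans(n_clusters=64, random_state=0).fit(signatures)

def candidate_images(query, n_probe=4):
    """Indices of images in the n_probe clusters nearest to the query."""
    dists = np.linalg.norm(index.cluster_centers_ - query, axis=1)
    nearest = np.argsort(dists)[:n_probe]
    return np.flatnonzero(np.isin(index.labels_, nearest))

# Online: full matching now runs only on the candidates, a small
# fraction of the 10,000 images in this toy database.
print(len(candidate_images(rng.normal(size=32))))
```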
  • COPA Testimony, Gio Wiederhold, August 2000
    COPA Testimony, Gio Wiederhold, August 2000. Testimony prepared for the COPA (Child Online Protection Act) Commission. Gio Wiederhold, Professor, Computer Science Dept. and Dept. of Medicine, Stanford University. www-db.stanford.edu/people/gio.html. Technology demo by James Wang (PhD summer 2000; as of fall 2000 at Pennsylvania State University): http://jxw.stanford.edu/cgi-bin/zwang/wipe2_show.cgi

    August 2000. Thank you for the privilege of presenting some statements concerning the technology relevant to the protection of minors before this commission. I will not attempt to present the full range of technological choices, threats, and candidate solutions. Instead I will focus on two points: one being specific technology in image recognition (WIPE) that we have developed at Stanford, and, second, some systematic setting of the issues in a producer-transmitter-consumer framework, which will illustrate requirements and barriers to technological solutions. I will actually start with the second aspect, but close with some suggestions for managing the system issues.

    Dealing with the system chain: To deal effectively with any problem we have to consider complete systems and the feedback loops that enable their functioning. Applying corrections or constraints at isolated points will just cause systems that are in some sense stable to adapt. We have to consider here the entire chain of flow from Producers via Transmitters to Consumers. In each of these categories exist a variety of groups, with different means and objectives. We often hear complaints that some technology will not solve the whole problem, but such an observation, even though nearly always true, does not mean that it cannot be applied to affect the system in some focused and desirable way.
  • Semantic Web Methods for Knowledge Management
    Semantic Web Methods for Knowledge Management. Dissertation approved by the Faculty of Economic Sciences of the Universität Fridericiana zu Karlsruhe (TH) for the academic degree of Doktor der Wirtschaftswissenschaften (Dr. rer. pol.), by Diplom-Informatiker Stefan Decker. Date of the oral examination: 22 February 2002. Advisor: Prof. Dr. Rudi Studer. Co-referees: Prof. Dr. Peter Knauth and Prof. Dr. Gio Wiederhold. Karlsruhe, 2002.

    "What information consumes is rather obvious: It consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it." - Nobel Laureate economist Herbert A. Simon.

    Acknowledgement: The main results of this thesis were obtained during my time at the Institute AIFB at the University of Karlsruhe, Germany. I would like to express my gratitude to my advisor, Prof. Dr. Rudi Studer, for his support, patience, and encouragement throughout my graduate studies. It is not often that one finds an advisor and colleague who always finds the time to listen to the little problems and roadblocks that unavoidably crop up in the course of performing research. I am also grateful to Prof. Dr. Peter Knauth and Prof. Gio Wiederhold, who were willing to serve on my dissertation committee on very short notice, and to Prof. Dr. Wolfried Stucky and Prof. Dr. Andreas Geyer-Schulz, who served on the examination committee. My thanks also go to all the colleagues at the Institute AIFB - in particular to Dr. Michael Erdmann - together we worked on the nitty-gritty details of the Ontobroker architecture, and Michael helped me to shape my ideas about the future of the Web.
  • Surajit Chaudhuri Speaks Out
    Surajit Chaudhuri Speaks Out on How Data Mining Led Him to Self-tuning Databases, How He Does Tech Transfer, Life as a Research Manager, the Fragmentation of Database Research, and More. By Marianne Winslett. http://research.microsoft.com/~surajitc/

    Welcome to this installment of ACM SIGMOD Record's series of interviews with distinguished members of the database community. I'm Marianne Winslett, and today we are in Istanbul, site of the ICDE 2007 conference. I have here with me Surajit Chaudhuri, who is a research area manager at Microsoft Research. Surajit's current research interests lie in self-tuning databases, data cleaning, and text. Surajit is an ACM Fellow, and he received the SIGMOD Contributions Award in 2004. His PhD is from Stanford. So, Surajit, welcome!

    Surajit, your PhD work was in database theory, then you switched to query optimization, then to self-tuning database systems. What has led you to become more practical over the years?

    When I started at Stanford in database theory, I really liked everything I learned as a student of initially Gio Wiederhold, and then Jeff Ullman. It was a very educational experience, but I realized that I was not going to be as good as Jeff Ullman or Moshe Vardi as a database theoretician. So I started looking at more practical problems, and when I joined HP my job also demanded that. Slowly I migrated more towards systems work. I think that was good, because I don't think I am as smart as the database theoreticians!

    That's very flattering to the database theoreticians.
  • Interview with Gio Wiederhold
    Gio Wiederhold Speaks Out on Moving into Academia in Mid-Career, How to Be an Effective Consultant, Why You Should Be a Program Manager at a Funding Agency, the Need for Ontology Algebra and Simulations, and More. By Marianne Winslett. http://www-db.stanford.edu/people/gio.html

    Welcome to the second installment in the SIGMOD Record's series of interviews with pillars of the database community. In the last issue we heard from Jeff Ullman, and upcoming issues will include conversations with David DeWitt, Avi Silberschatz, and Hector Garcia-Molina. This issue's interview with Gio Wiederhold took place in June 2001, a few days before the festivities associated with Gio's retirement from Stanford University. Gio has been a member of the Stanford faculty for many years, active both in computer science and in medical informatics. In the late 1980s, Gio also spent several years as a program manager at DARPA, focusing on middleware and mediators. Gio is an ACM Fellow, an IEEE Fellow and Golden Core Member, a Fellow of the American College of Medical Informatics, a recipient of the SIGMOD Contributions Award, and a past editor-in-chief of ACM Transactions on Database Systems, among many other distinguished positions. Gio has had a varied and fascinating career, and it is tempting to slip into a series of anecdotes about adventures Gio has had on the job in nations around the world. But instead, I suggest that you ask him about his adventures yourself the next time you see him on the road. Gio loves a lively story and a fun time.
  • Mediators, Concepts and Practice (Gio Wiederhold)
    Mediators, Concepts and Practice. To appear in Studies in Information Reuse and Integration in Academia and Industry, Springer Verlag, Wien, 2012. Editors: Tansel Özyer, Keivan Kianmehr, Mehmet Tan, Jia Zeng. Gio Wiederhold, Prof. Emeritus, CS, EE, & Medicine, Stanford University. [email protected]

    0 Abstract: Mediators are intermediary modules in large-scale information systems that link multiple sources of information to applications. They provide a means for integrating the application of encoded knowledge into information systems. Mediated systems compose autonomous data and information services, permitting growth and enabling their survival in a semantically diverse and rapidly changing world. Constraints of scope are placed on mediators to assure effective and maintainable composed systems. Modularity in mediated architectures is not only a goal, but also enables the goal to be reached. Mediators focus on semantic matching, while middleware provides the essential syntactic and formatting interfaces.

    1 Overview: We first present the role of mediators and the architecture of mediated systems, as well as some definitions for terms used throughout this exposition. Section 3 deals with mediators at a conceptual level. Section 4 presents the basic functionalities, and Section 5 presents the primary objective of mediators, information integration, including the problems of heterogeneous semantics and the modeling of knowledge to drive integration. Section 6 points to related topics not covered as such in earlier chapters. A final summary reviews the state of the technology, indicating where research is needed so that the concepts will support composed information systems of ever greater scale.

    1.1 Architecture: Mediators interpose integration and abstraction services in large-scale information systems to support applications used by decision-makers, where the scale, diversity, and complexity of relevant data and information resources are such that the applications would be overwhelmed.
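    As a toy illustration of this architecture (not code from the chapter; the source names and record fields are invented), a mediator can hide semantically divergent autonomous sources behind one application-facing interface, doing the semantic reconciliation itself while adapter functions handle each source's syntax:

```python
# Sketch of a mediator: applications call one interface; the mediator
# reconciles the differing semantics (here, temperature units) of
# autonomous sources. All names and schemas are hypothetical.

class Mediator:
    def __init__(self, sources):
        self.sources = sources              # name -> fetch function

    def temperature_f(self, city):
        """Integrate readings that sources report in different units."""
        readings = []
        for fetch in self.sources.values():
            record = fetch(city)            # each source has its own schema
            if "temp_c" in record:          # semantic matching step
                readings.append(record["temp_c"] * 9 / 5 + 32)
            elif "temp_f" in record:
                readings.append(record["temp_f"])
        return sum(readings) / len(readings)

# Two autonomous, schema-divergent sources (stubs for illustration).
mediator = Mediator({
    "metric_feed": lambda city: {"temp_c": 20.0},
    "us_feed":     lambda city: {"temp_f": 70.0},
})
print(mediator.temperature_f("Burlington"))  # 69.0
```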
  • DOD Software Tech News, Volume 4, Number 4
    Agent Based Computing for Autonomous Intelligent Software. James Hendler, Defense Advanced Research Projects Agency (DARPA), and Laura Douglass, Schafer Corporation.

    1. Background

    1.1 Defense Advanced Research Projects Agency (DARPA) Agent Based Computing

    In the complex realm of modern military operations, commanders are dealing with increasingly diverse missions, including operations other than war, expeditionary missions, and controlling dangerous situations in dynamic and uncertain environments. All of these missions are further complicated by the requirement for joint and coalition coordination. Achieving decision superiority in these situations is becoming increasingly difficult and complex. The need to gain the right information at the right time for each type of decision maker in this complex scenario is leading the Department of Defense (DOD) toward distributed information systems that are managed and accessed in a network-centric manner.

    The Defense Advanced Research Projects Agency (DARPA) has a proven record of developing revolutionary new capabilities in the area of software agents, and continues to take a leadership role in this field. DARPA is currently focusing its research in this area on several initiatives in Agent-Based Computing (ABC). The ABC suite of programs will provide the building blocks for understanding coalition operations, improved intelligence gathering, and more timely command and control.

    2. Control of Agent Based Systems (CoABS)

    The CoABS program consists of three elements: the agent grid, agent interoperability standards, and the scaling of agent control strategies. CoABS has developed novel tools for run-time interoperability among heterogeneous systems and is developing new tools to ensure rapid, real-world system integration with other software agents and entities such as servers, databases, legacy systems, and sensors.
  • CS207 #1, 25 Sep 2009
    CS207 #10-last, 3 Dec 2010. Gio Wiederhold, Gates B12. Any make-up reports submitted by 25 Nov 2010 were marked on the sign-up sheets. All reports are due by Friday, 10 Dec 2010.

    Syllabus:
    1. Why should software be valued?
    2. Principles of valuation. Cost versus value.
    3. Market value of software companies.
    4. Intellectual capital and property (IP).
    5. Open source software. Scope. Theory and reality.
    6. Life and lag of software innovation.
    7. Sales expectations and discounting.
    8. Alternate business models. Licensing.
    9. The role of patents, copyrights, and trade secrets.
    10. Offshoring [Prof. Gupta].
    11. Separation of use rights from the property itself.
    12. Effects of using tax havens to house IP.
    13. Growth, organic and through acquisitions.
    14. Risks when outsourcing and offshoring development.

    Topics covered:
    Why should software be valued? Open source software, theory and reality. Scope. (http://infolab.stanford.edu/pub/gio/2010/CS207-1+background.pdf)
    Intellectual capital and property (IP). Principles of valuation. (http://infolab.stanford.edu/pub/gio/2010/CS207-2+valuation.pdf)
    Cost versus value. Market value of software companies. Sales expectations and discounting. (http://infolab.stanford.edu/pub/gio/2010/CS207-3+business.pdf)
    Alternate business models. (http://infolab.stanford.edu/pub/gio/2010/CS207-4+SalesModels.pdf)
    Life and lag of software innovation. (http://infolab.stanford.edu/pub/gio/2010/CS207-5+Allocate&Lag.pdf)
    Innovation (Tessler). (http://infolab.stanford.edu/pub/gio/CS99I/nber_w14548.pdf)
    Measuring software (Zeidman). (http://infolab.stanford.edu/pub/gio/CS99I/ZeidmanCLOC.pdf)
    The role of patents, copyrights, and trade secrets.