Rule Driven Job-Shop Scheduling Derived from Neural Networks through Extraction

Total Pages: 16

File Type: pdf, Size: 1020 KB

RULE DRIVEN JOB-SHOP SCHEDULING DERIVED FROM NEURAL NETWORKS THROUGH EXTRACTION

A thesis presented to the faculty of the Fritz J. and Dolores H. Russ College of Engineering and Technology of Ohio University, in partial fulfillment of the requirements for the degree Master of Science. Chandrasekhar V. Ganduri, August 2004.

This thesis has been approved for the Department of Industrial and Manufacturing Systems Engineering and the Russ College of Engineering and Technology by Gary R. Weckman, Associate Professor of Industrial & Manufacturing Systems Engineering, and R. Dennis Irwin, Dean, Fritz J. and Dolores H. Russ College of Engineering and Technology.

GANDURI, CHANDRASEKHAR V. M.S. August 2004. Industrial and Manufacturing Systems Engineering. Rule Driven Job-Shop Scheduling Derived from Neural Networks through Extraction (122 pp.). Director of Thesis: Gary Weckman.

Abstract: This thesis develops a rule-based scheduler built from production rules extracted from an artificial neural network that performs job-shop scheduling. The study constructs a hybrid intelligent model that uses genetic algorithms for optimization and neural networks as learning tools: genetic algorithms produce optimal schedules, and the neural network is trained on those schedules. Knowledge is then extracted from the trained network as production rules using two rule-extraction procedures, Validity Interval Analysis and decision tree induction. The performance of the extracted rule set is compared with that of the genetic algorithm, the attribute-oriented induction data mining method, the ID3 algorithm, and simple dispatching rules on a test set of 6x6 scheduling instances, and the capability of the rule-based scheduler to provide near-optimal solutions is discussed.

Approved: Gary Weckman, Associate Professor of Industrial and Manufacturing Systems Engineering

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
CHAPTER 1. INTRODUCTION
  1.1 Manufacturing Scheduling
  1.2 Job Shop Scheduling Problem
  1.3 Previous Research
  1.4 Current Research
  1.5 Thesis Structure
CHAPTER 2. SOFT COMPUTING METHODOLOGIES
  2.1 What is Soft Computing?
  2.2 Genetic Algorithms
    2.2.1 Methodology of Genetic Algorithms
    2.2.2 Components of a Genetic Algorithm
    2.2.3 Simple Genetic Algorithm Outline
  2.3 Machine Learning
    2.3.1 Decision Tree Induction
    2.3.2 Attribute-Oriented Induction
  2.4 Artificial Neural Networks
    2.4.1 Neural Computation
    2.4.2 The Multi-Layer Perceptron Classifier
    2.4.3 Neural-Network Training
    2.4.4 Generalization Considerations
  2.5 Rule Extraction in Neural Networks
    2.5.1 The Rule-Extraction Task
    2.5.2 Approaches to Rule Extraction
    2.5.3 Validity Interval Analysis
    2.5.4 Extraction of Decision Tree Representations
CHAPTER 3. APPROACHES TO THE JOB-SHOP SCHEDULING PROBLEM
  3.1 The Classical Job Shop Scheduling Problem (JSSP)
    3.1.1 Problem Formulation
    3.1.2 Types of Schedules
  3.2 Review of Approaches to Solve JSSP
    3.2.1 Heuristics-based Approaches
    3.2.2 Local Search Methods and Meta-Heuristics
    3.2.3 Artificial Intelligence Approaches
    3.2.4 Machine Learning Applications
CHAPTER 4. METHODOLOGY
  4.1 The Learning Task
    4.1.1 Genetic Algorithm (GA) Solutions
    4.1.2 Setting up the Classification Problem
    4.1.3 Development of a Neural Network Model
  4.2 Knowledge Extraction from the Neural Network Model
    4.2.1 Decision Tree Induction
    4.2.2 Propositional Rules by Validity Interval Analysis
CHAPTER 5. RESULTS AND DISCUSSION
  5.1 Performance of the 12-12-10-6 MLP Classifier
  5.2 Efficacy of the Rule Extraction Task
  5.3 Schedule Generation and Comparison
    5.3.1 Statistical Analysis
CHAPTER 6. CONCLUSIONS AND FUTURE RESEARCH
  6.1 Conclusions
  6.2 Future Research
REFERENCES
APPENDIX A EVALUATION OF NN CLASSIFIERS
APPENDIX B NETWORK PARAMETERS
APPENDIX C DECISION TREE INDUCTION DATASETS
APPENDIX D NN DECISION TREE EXTRACTION
APPENDIX E ID3 DECISION TREE INDUCTION
APPENDIX F TEST SCHEDULING SCENARIOS

LIST OF TABLES

Table 2.1 Binary representations of chromosomes
Table 2.2 Training set for the PlayTennis concept
Table 3.1 A 3 x 3 job-shop problem
Table 4.1 The ft06 instance devised by Fisher and Thomson [88]
Table 4.2 ProcessTime and RemainingTime feature classes
Table 4.3 MachineLoad feature classification
Table 4.4 Assignment of class labels to target feature
Table 4.5 Sample data for the classification task
Table 4.6 Training parameters for the 12-12-10-6 MLP classifier
Table 4.7 The rule set containing 48 rules (NN-Rule set)
Table 5.1 Confusion matrix of the 12-12-10-6 MLP classifier
Table 5.2 Comparison of classifiers
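The abstract benchmarks the extracted rule set against, among others, simple dispatching rules on 6x6 instances. For reference, here is a minimal sketch (an illustration, not the thesis's code) of how a Shortest Processing Time (SPT) priority dispatcher builds a feasible job-shop schedule; the instance format, a list of (machine, processing time) operations per job, is an assumption:

```python
# Minimal SPT priority-dispatch sketch for a job shop (illustrative only).
# Each job is an ordered list of (machine, processing_time) operations.

def spt_schedule(jobs):
    """Greedily dispatch the pending operation with the shortest
    processing time; returns the makespan of the resulting schedule."""
    n = len(jobs)
    next_op = [0] * n        # index of each job's next unscheduled operation
    job_ready = [0] * n      # time at which each job becomes free
    mach_ready = {}          # time at which each machine becomes free
    while any(next_op[j] < len(jobs[j]) for j in range(n)):
        pending = [j for j in range(n) if next_op[j] < len(jobs[j])]
        j = min(pending, key=lambda j: jobs[j][next_op[j]][1])  # SPT rule
        m, p = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(m, 0))
        job_ready[j] = mach_ready[m] = start + p
        next_op[j] += 1
    return max(job_ready)

# A toy 3 x 3 instance in the spirit of Table 3.1 (values invented here).
jobs = [[(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)],
        [(1, 4), (2, 3), (0, 1)]]
print(spt_schedule(jobs))  # makespan under SPT dispatching
```

A GA schedule or the extracted NN rule set would replace the priority decision in the `min(...)` line; the surrounding dispatch loop stays the same.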
Recommended publications
  • Neural Networks
    AI and the Net • Buyer's Guide. PC AI 19.1: Where Intelligent Technology Meets the Real World (www.pcai.com).
    In this issue: The Modern Robotic "Movement"; The State of AI Today; The Web: AI's New Playground; Robotics: Robots That Mimic Animals; The National Science Foundation: Encouraging the Researchers of Tomorrow. Topics covered: Agents, Business Applications, Business Intelligence, Computational Intelligence, Data Analysis & Mining, Intelligent Applications, Intelligent Tools, Intelligent Tutoring, Intelligent Web Searching, Neural Networks, Robotics, Speech Recognition, Web Based Expert Systems. Plus: AI and the Net, Training, Bookzone, Buyer's Guide, AI Conferences, Product Updates and more!
    Buy PC AI back issues (quantities limited; a great resource for AI research): $8.00/issue in the US; for Canadian and foreign postage contact PC AI or visit the PC AI web site; order online at www.pcai.com.
    1995: 9 #1 Intelligent Tools; 9 #2 Fuzzy Logic / Neural Networks; 9 #3 Object Oriented Development; 9 #4 Knowledge-Based Systems; 9 #5 AI Languages; 9 #6 Business Applications.
    1996: 10 #1 Intelligent Applications; 10 #2 Object Oriented Development; 10 #3 Neural Networks / Fuzzy Logic; 10 #4 Knowledge-Based Systems; 10 #5 Genetic Algorithm & Modeling; 10 #6 Business Applications.
    1999: 13 #1 Intelligent Tools & Languages (Knowledge Verification); 13 #2 Rule and Object Oriented Development (Data Mining); 13 #3 Neural Nets & Fuzzy Logic (Searching); 13 #4 Knowledge-Based Systems (Fuzzy Logic); 13 #5 Data Mining (Simulation and Modeling); 13 #6 Business Applications (Machine Learning).
  • Kriging Prediction with Isotropic Matérn Correlations: Robustness
    Journal of Machine Learning Research 21 (2020) 1-38. Submitted 12/19; Revised 7/20; Published 9/20. Kriging Prediction with Isotropic Matérn Correlations: Robustness and Experimental Designs. Rui Tuo∗† ([email protected]), Wm Michael Barnes '64 Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX 77843, USA. Wenjia Wang∗ ([email protected]), The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. Editor: Philipp Hennig.
    Abstract: This work investigates the prediction performance of the kriging predictors. We derive some error bounds for the prediction error in terms of non-asymptotic probability under the uniform metric and Lp metrics when the spectral densities of both the true and the imposed correlation functions decay algebraically. The Matérn family is a prominent class of correlation functions of this kind. Our analysis shows that, when the smoothness of the imposed correlation function exceeds that of the true correlation function, the prediction error becomes more sensitive to the space-filling property of the design points. In particular, the kriging predictor can still reach the optimal rate of convergence if the experimental design scheme is quasi-uniform. Lower bounds of the kriging prediction error are also derived under the uniform metric and Lp metrics. An accurate characterization of this error is obtained when an oversmoothed correlation function and a space-filling design are used. Keywords: Computer Experiments, Uncertainty Quantification, Scattered Data Approximation, Space-filling Designs, Bayesian Machine Learning.
    1. Introduction. In contemporary mathematical modeling and data analysis, we often face the challenge of reconstructing smooth functions from scattered observations.
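For background only (standard kriging definitions under assumed notation, not results from this paper): with observations Y = (f(x_1), ..., f(x_n))^T and correlation function Φ, the kriging predictor and the isotropic Matérn family it studies are

```latex
% Simple kriging predictor (standard background; notation assumed here):
% K is the n x n matrix with K_ij = Phi(x_i - x_j), and
% r(x) = (Phi(x - x_1), ..., Phi(x - x_n))^T.
\[ \hat{f}(x) = r(x)^{\top} K^{-1} Y \]
% Isotropic Matern correlation with smoothness nu and scale rho, where
% K_nu is the modified Bessel function of the second kind; its spectral
% density decays algebraically, at rate ||omega||^{-(2 nu + d)}.
\[ \Phi(h) = \frac{2^{1-\nu}}{\Gamma(\nu)}
   \left( \frac{\sqrt{2\nu}\,\lVert h \rVert}{\rho} \right)^{\nu}
   K_{\nu}\!\left( \frac{\sqrt{2\nu}\,\lVert h \rVert}{\rho} \right) \]
```

The abstract's smoothness comparison refers to the ν of the imposed correlation versus that of the truth; "oversmoothed" means the imposed ν is the larger one.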
  • Artificial Neural Networks
    ARTIFICIAL NEURAL NETWORKS: A REVIEW OF TRAINING TOOLS. Darío Baptista, Fernando Morgado-Dias. Madeira Interactive Technologies Institute and Centro de Competências de Ciências Exactas e da Engenharia, Universidade da Madeira, Campus da Penteada, 9000-039 Funchal, Madeira, Portugal. Tel: +351 291-705150/1, Fax: +351 291-705199.
    Abstract: Artificial Neural Networks have become a common solution for a wide variety of problems in many fields. The most frequent solution for their implementation consists of building and training the Artificial Neural Network within a computer. To implement a network efficiently, the user can access a large choice of software solutions, either commercial or prototypes. Choosing the most convenient solution for the application according to the network architecture, training algorithm, operating system and price can be a complex task. This paper helps the Artificial Neural Network user by providing a large list of available solutions and explaining their characteristics and terms of use. The paper is confined to reporting the software products that have been developed for Artificial Neural Networks. The features considered important for this kind of software to have in order to accommodate its users effectively are specified. The development of software that implements Artificial Neural Networks is a rapidly growing field driven by strong research interests as well as urgent practical, economical and social needs. Copyright CONTROLO2012. Keywords: Artificial Neural Networks, Training Tools, Training Algorithms, Software.
    1. INTRODUCTION. Nowadays, in different areas, it is important to analyse nonlinear data for prediction, classification or model building. ... commercialization of new ANN tools. With the purpose of informing which tools are available at present and supporting the choice of which tool to use, this paper contains the description of software that has been
  • On Prediction Properties of Kriging: Uniform Error Bounds and Robustness
    On Prediction Properties of Kriging: Uniform Error Bounds and Robustness. Wenjia Wang∗1, Rui Tuo†2 and C. F. Jeff Wu‡3. 1 The Statistical and Applied Mathematical Sciences Institute, Durham, NC 27709, USA. 2 Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX 77843, USA. 3 The H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA. arXiv:1710.06959v4 [math.ST] 19 Mar 2019.
    Abstract: Kriging based on Gaussian random fields is widely used in reconstructing unknown functions. The kriging method has pointwise predictive distributions which are computationally simple. However, in many applications one would like to predict for a range of untried points simultaneously. In this work we obtain some error bounds for the simple and universal kriging predictor under the uniform metric. It works for a scattered set of input points in an arbitrary dimension, and also covers the case where the covariance function of the Gaussian process is misspecified. These results lead to a better understanding of the rate of convergence of kriging under the Gaussian or the Matérn correlation functions, the relationship between space-filling designs and kriging models, and the robustness of the Matérn correlation functions. Keywords: Gaussian Process modeling; Uniform convergence; Space-filling designs; Radial basis functions; Spatial statistics.
    ∗Wenjia Wang is a postdoctoral fellow in the Statistical and Applied Mathematical Sciences Institute, Durham, NC 27709, USA (Email: [email protected]); †Rui Tuo is Assistant Professor in Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX 77843, USA (Email: [email protected]); Tuo's work is supported by NSF grant DMS 1564438 and NSFC grants 11501551, 11271355 and 11671386.
  • Rational Function Approximation
    Rational function approximation (Numerical Analysis I – Xiaojing Ye, Math & Stat, Georgia State University). A rational function of degree N = n + m is written as
    $$r(x) = \frac{p(x)}{q(x)} = \frac{p_0 + p_1 x + \cdots + p_n x^n}{q_0 + q_1 x + \cdots + q_m x^m}.$$
    Now we try to approximate a function f on an interval containing 0 using r(x). WLOG, we set q_0 = 1, and will need to determine the N + 1 unknowns p_0, ..., p_n, q_1, ..., q_m.
    Padé approximation. The idea of Padé approximation is to find r(x) such that
    $$f^{(k)}(0) = r^{(k)}(0), \quad k = 0, 1, \ldots, N.$$
    This is an extension of Taylor series but in the rational form. Denote the Maclaurin series expansion $f(x) = \sum_{i=0}^{\infty} a_i x^i$. Then
    $$f(x) - r(x) = \frac{\sum_{i=0}^{\infty} a_i x^i \sum_{i=0}^{m} q_i x^i - \sum_{i=0}^{n} p_i x^i}{q(x)}.$$
    If we want $f^{(k)}(0) - r^{(k)}(0) = 0$ for k = 0, ..., N, we need the numerator to have 0 as a root of multiplicity N + 1. This turns out to be equivalent to
    $$\sum_{i=0}^{k} a_i q_{k-i} = p_k, \quad k = 0, 1, \ldots, N,$$
    where for convenience we use the convention $p_{n+1} = \cdots = p_N = 0$ and $q_{m+1} = \cdots = q_N = 0$. From these N + 1 equations, we can determine the N + 1 unknowns: $p_0, p_1, \ldots, p_n, q_1, \ldots, q_m$.
    Example: Find the Padé approximation to $e^{-x}$ of degree 5 with n = 3 and m = 2.
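The truncated example can be completed directly from the coefficient equations above. A small sketch (my illustration, not from the slides) that first solves the equations for k = n+1, ..., N for the q_j, then reads off the p_k, using exact rational arithmetic:

```python
# Pade [n/m] coefficients via sum_{i=0..k} a_i q_{k-i} = p_k (k = 0..N),
# worked for f(x) = exp(-x) with n = 3, m = 2, so N = 5.
from fractions import Fraction
from math import factorial

n, m = 3, 2
N = n + m
a = [Fraction((-1) ** i, factorial(i)) for i in range(N + 1)]  # exp(-x)

# For k = n+1..N we have p_k = 0, giving m equations in q_1..q_m
# (with q_0 = 1):  sum_{j=1..m} a_{k-j} q_j = -a_k.
A = [[a[k - j] for j in range(1, m + 1)] for k in range(n + 1, N + 1)]
b = [-a[k] for k in range(n + 1, N + 1)]

# Tiny exact Gauss-Jordan elimination on the m x m system.
for col in range(m):
    piv = next(r for r in range(col, m) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(m):
        if r != col and A[r][col] != 0:
            f = A[r][col] / A[col][col]
            A[r] = [x - f * y for x, y in zip(A[r], A[col])]
            b[r] -= f * b[col]
q = [Fraction(1)] + [b[r] / A[r][r] for r in range(m)]

# Remaining equations (k = 0..n) give the numerator coefficients.
p = [sum(a[i] * q[k - i] for i in range(k + 1) if k - i <= m)
     for k in range(n + 1)]
print("p =", p, "q =", q)
```

This reproduces the known [3/2] Padé approximant of e^{-x}: r(x) = (1 - 3x/5 + 3x^2/20 - x^3/60) / (1 + 2x/5 + x^2/20).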
  • Piecewise Linear Approximation of Streaming Time Series Data with Max-Error Guarantees
    Piecewise Linear Approximation of Streaming Time Series Data with Max-error Guarantees. Ge Luo, Ke Yi, Siu-Wing Cheng (HKUST); Zhenguo Li, Wei Fan, Cheng He (Huawei Noah's Ark Lab); Yadong Mu (AT&T Labs).
    Abstract: Given a time series S = ((x1, y1), (x2, y2), ...) and a prescribed error bound ε, the piecewise linear approximation (PLA) problem with max-error guarantees is to construct a piecewise linear function f such that |f(xi) − yi| ≤ ε for all i. In addition, we would like to have an online algorithm that takes the time series as the records arrive in a streaming fashion, and outputs the pieces of f on-the-fly. This problem has applications wherever time series data is being continuously collected, but the data collection device has limited local buffer space and communication bandwidth, so that the data has to be compressed and sent back during the collection process. Prior work addressed two versions of the problem, where either f consists of disjoint segments, or f is required to be a continuous piecewise linear function.
    The max-error criterion is used because the ℓ1/ℓ2-error is ill-suited for online algorithms, as it is a sum of errors over the entire time series. When the algorithm has no knowledge about the future, in particular the length of the time series n, it is impossible to properly allocate the allowed error budget over time. Another advantage of the ℓ∞-error is that it gives us a guarantee on any data record in the time series, while the ℓ1/ℓ2-error only ensures that the "average" error is good, without a bound on any particular record. Admittedly, the ℓ∞-error is sensitive to outliers, but one could remove them before feeding the stream to the PLA algorithm, and there is abundant
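The ε-guarantee admits a very simple streaming construction. The sketch below (a simplified greedy, not the paper's algorithm) anchors each segment at its first point and maintains the interval of slopes that keeps every later point within ε; when the interval empties, it emits the segment and starts a new one:

```python
# Simplified online PLA sketch with a max-error guarantee (not the paper's
# algorithm): each segment is forced through its first point, which may
# cost extra segments but keeps the state to two floats per segment.

def pla_stream(points, eps):
    """points: iterable of (x, y) with strictly increasing x."""
    segments = []                              # (x0, y0, slope, x_end)
    it = iter(points)
    x0, y0 = next(it)
    lo, hi = float("-inf"), float("inf")       # feasible slope interval
    x_end = x0
    for x, y in it:
        s_lo = (y - eps - y0) / (x - x0)       # bounds from |error| <= eps
        s_hi = (y + eps - y0) / (x - x0)
        if max(lo, s_lo) <= min(hi, s_hi):     # point fits current segment
            lo, hi = max(lo, s_lo), min(hi, s_hi)
            x_end = x
        else:                                  # emit segment, start anew
            slope = 0.0 if hi == float("inf") else (lo + hi) / 2
            segments.append((x0, y0, slope, x_end))
            x0, y0, x_end = x, y, x
            lo, hi = float("-inf"), float("inf")
    slope = 0.0 if hi == float("inf") else (lo + hi) / 2
    segments.append((x0, y0, slope, x_end))
    return segments

pts = [(i, 0.5 * i + (0.1 if i % 2 else -0.1)) for i in range(10)]
print(pla_stream(pts, eps=0.2))                # one segment suffices here
```

Optimal online algorithms instead let each segment float freely, tracking the feasible set of line parameters via convex hulls of the points, but the ε-feasibility test has the same flavor.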
  • The Nnlib2 Library and Nnlib2rcpp R Package for Implementing Neural Networks
    The nnlib2 library and nnlib2Rcpp R package for implementing neural networks. Vasilis N Nikolaidis (University of Peloponnese). DOI: 10.21105/joss.02876. Editor: Kakia Chatsiou. Reviewers: @schnorr, @MohmedSoudy. Submitted: 22 October 2020; Published: 23 May 2021. License: authors of papers retain copyright and release the work under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
    Summary: Artificial Neural Networks (ANN or NN) are computing models used in various data-driven applications. Such systems typically consist of a large number of processing elements (or nodes), usually organized in layers, which exchange data via weighted connections. An ever-increasing number of different neural network models have been proposed and used. Among the several factors differentiating each model are the network topology, the processing and training methods in nodes and connections, and the sequences utilized for transferring data to, within and from the model, etc. The software presented here is a C++ library of classes and templates for implementing neural network components and models, and an R package that allows users to instantiate and use such components from the R programming language.
    Statement of need: A significant number of capable, flexible, high performance tools for NN are available today, including frameworks such as Tensorflow (Abadi et al., 2016) and Torch (Collobert et al., 2011), and related high level APIs including Keras (Chollet & others, 2015) and PyTorch (Paszke et al., 2019). Ready-to-use NN models are also provided by various machine learning platforms such as H2O (H2O.ai, 2020) or libraries, SNNS (Zell et al., 1994) and FANN (Nissen, 2003).
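As a generic illustration of the node/layer/weighted-connection organization described above (this mirrors the common pattern only; it is not nnlib2's actual C++ or R API):

```python
import numpy as np

# Generic layered-NN skeleton: a connection set moves data between layers
# through weights; a layer applies its nodes' processing function.

class DenseConnection:
    """Fully connected weights between two layers."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))

    def transfer(self, signal):
        return self.W @ signal      # weighted data exchange

class Layer:
    """A layer of nodes with a common processing function."""
    def process(self, signal):
        return np.tanh(signal)

rng = np.random.default_rng(1)
conn, layer = DenseConnection(3, 2, rng), Layer()
print(layer.process(conn.transfer(np.array([0.2, -0.4, 1.0]))))
```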
  • Neural Network FAQ, Part 1 of 7
    Neural Network FAQ, part 1 of 7: Introduction Archive-name: ai-faq/neural-nets/part1 Last-modified: 2002-05-17 URL: ftp://ftp.sas.com/pub/neural/FAQ.html Maintainer: [email protected] (Warren S. Sarle) Copyright 1997, 1998, 1999, 2000, 2001, 2002 by Warren S. Sarle, Cary, NC, USA. --------------------------------------------------------------- Additions, corrections, or improvements are always welcome. Anybody who is willing to contribute any information, please email me; if it is relevant, I will incorporate it. The monthly posting departs around the 28th of every month. --------------------------------------------------------------- This is the first of seven parts of a monthly posting to the Usenet newsgroup comp.ai.neural-nets (as well as comp.answers and news.answers, where it should be findable at any time). Its purpose is to provide basic information for individuals who are new to the field of neural networks or who are just beginning to read this group. It will help to avoid lengthy discussion of questions that often arise for beginners. SO, PLEASE, SEARCH THIS POSTING FIRST IF YOU HAVE A QUESTION and DON'T POST ANSWERS TO FAQs: POINT THE ASKER TO THIS POSTING The latest version of the FAQ is available as a hypertext document, readable by any WWW (World Wide Web) browser such as Netscape, under the URL: ftp://ftp.sas.com/pub/neural/FAQ.html. If you are reading the version of the FAQ posted in comp.ai.neural-nets, be sure to view it with a monospace font such as Courier. If you view it with a proportional font, tables and formulas will be mangled.
  • Rethinking Statistical Learning Theory: Learning Using Statistical Invariants
    Machine Learning (2019) 108:381–423, https://doi.org/10.1007/s10994-018-5742-0. Rethinking statistical learning theory: learning using statistical invariants. Vladimir Vapnik1,2 · Rauf Izmailov3. Received: 2 April 2018 / Accepted: 25 June 2018 / Published online: 18 July 2018. © The Author(s) 2018.
    Abstract: This paper introduces a new learning paradigm, called Learning Using Statistical Invariants (LUSI), which is different from the classical one. In the classical paradigm, the learning machine constructs a classification rule that minimizes the probability of expected error; it is a data-driven model of learning. In the LUSI paradigm, in order to construct the desired classification function, a learning machine computes statistical invariants that are specific for the problem, and then minimizes the expected error in a way that preserves these invariants; it is thus both data- and invariant-driven learning. From a mathematical point of view, methods of the classical paradigm employ mechanisms of strong convergence of approximations to the desired function, whereas methods of the new paradigm employ both strong and weak convergence mechanisms. This can significantly increase the rate of convergence. Keywords: Intelligent teacher · Privileged information · Support vector machine · Neural network · Classification · Learning theory · Regression · Conditional probability · Kernel function · Ill-posed problem · Reproducing Kernel Hilbert space · Weak convergence. Mathematics Subject Classification 68Q32 · 68T05 · 68T30 · 83C32.
    1. Introduction. It is known that Teacher–Student interactions play an important role in human learning. An old Japanese proverb says "Better than a thousand days of diligent study is one day with a great teacher." What is it exactly that great Teachers do? This question remains unanswered.
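To make "preserving statistical invariants" concrete, here is a sketch in simplified notation (consistent with the abstract; the predicates ψ_s, the estimate f, and the sample size ℓ are assumed names):

```latex
% LUSI-style invariant (sketch): the estimated conditional probability
% f(x) ~ P(y = 1 | x) is required to reproduce the empirical statistic of
% each chosen predicate psi_s over the sample (x_1, y_1), ..., (x_l, y_l):
\[
\frac{1}{\ell} \sum_{i=1}^{\ell} \psi_s(x_i)\, f(x_i)
\;\approx\;
\frac{1}{\ell} \sum_{i=1}^{\ell} \psi_s(x_i)\, y_i,
\qquad s = 1, \dots, k,
\]
% and f is then chosen to minimize expected error subject to these k
% constraints: learning that is both data- and invariant-driven.
```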
  • Function Approximation with Mlps, Radial Basis Functions, and Support Vector Machines
    Table of Contents, CHAPTER V: FUNCTION APPROXIMATION WITH MLPS, RADIAL BASIS FUNCTIONS, AND SUPPORT VECTOR MACHINES
    1. Introduction
    2. Function Approximation
    3. Choices for the Elementary Functions
    4. Probabilistic Interpretation of the Mappings: Nonlinear Regression
    5. Training Neural Networks for Function Approximation
    6. How to Select the Number of Bases
    7. Applications of Radial Basis Functions
    8. Support Vector Machines
    9. Project: Applications of Neural Networks as Function Approximators
    10. Conclusion
    Calculation of the Orthonormal Weights
  • Forecasting with Artificial Neural Networks
    Forecasting with Artificial Neural Networks. EVIC 2005 Tutorial, Santiago de Chile, 15 December 2005 (slides on www.neural-forecasting.com). Sven F. Crone, Centre for Forecasting, Department of Management Science, Lancaster University Management School. Email: [email protected]
    What you can expect from this session: a "how to" on neural network forecasting with limited maths; a CD start-up kit for neural net forecasting; 20+ software simulators; datasets; literature & FAQ; plus slides, data and additional info on www.neural-forecasting.com.
    The session covers the simple back propagation algorithm [Rumelhart et al. 1982]. With error $E_p = C(t_{pj}, o_{pj})$ and unit outputs $o_{pj} = f_j(net_{pj})$, weights are adjusted along the negative gradient, $\Delta_p w_{ji} \propto -\partial C(t_{pj}, o_{pj})/\partial w_{ji}$, where $\frac{\partial C}{\partial w_{ji}} = \frac{\partial C}{\partial net_{pj}} \frac{\partial net_{pj}}{\partial w_{ji}}$. Defining the local error $\delta_{pj} = -\partial C/\partial net_{pj}$ and using $\partial o_{pj}/\partial net_{pj} = f_j'(net_{pj})$ yields
    $$\delta_{pj} = \begin{cases} \dfrac{\partial C(t_{pj}, o_{pj})}{\partial o_{pj}}\, f_j'(net_{pj}) & \text{if unit } j \text{ is in the output layer,} \\ f_j'(net_{pj}) \sum_k \delta_{pk} w_{kj} & \text{if unit } j \text{ is in a hidden layer.} \end{cases}$$
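The same derivation in runnable form (a generic sketch, not the tutorial's code): one hidden layer, logistic units, squared-error cost, so f'(net) = o(1 - o) and the output delta is (t - o) o (1 - o):

```python
import numpy as np

# One backpropagation step for a 3-4-2 MLP with logistic units and
# squared-error cost C = 0.5 * ||t - o||^2 (generic illustration).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, t, W1, W2, lr=0.1):
    h = sigmoid(W1 @ x)                   # hidden activations o = f(net)
    o = sigmoid(W2 @ h)                   # output activations
    d_out = o * (1 - o) * (t - o)         # delta for output units
    d_hid = h * (1 - h) * (W2.T @ d_out)  # delta propagated to hidden units
    W2 += lr * np.outer(d_out, h)         # Delta w ~ -dC/dw = lr*delta*input
    W1 += lr * np.outer(d_hid, x)
    return W1, W2

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 3))
W2 = rng.normal(scale=0.5, size=(2, 4))
W1, W2 = backprop_step(np.array([0.5, -1.0, 0.2]), np.array([1.0, 0.0]), W1, W2)
print(W1.shape, W2.shape)
```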
  • Uhm Ms 3812 R.Pdf
    UNIVERSITY OF HAWAI'I LIBRARY. ADVANCED MARINE VEHICLE PRODUCTS DATABASE - A PRELIMINARY DESIGN TOOL. A thesis submitted to the Graduate Division of the University of Hawai'i in partial fulfillment of the requirements for the degree of Master of Science in Ocean Engineering, August 2003. By Kristen A.L.G. Woo. Thesis Committee: Kwok Fai Cheung, Chairperson; Hans-Jürgen Krock; John C. Wiltshire.
    ACKNOWLEDGEMENT: I would like to thank my advisor Prof. Kwok Fai Cheung and the other committee members Prof. Hans-Jürgen Krock and Dr. John Wiltshire for the time and effort they spent with me on this project. I would also like to thank the MHPCC staff, in particular Mr. Scott Splean, for their advice and comments on the advanced-marine-vehicle products database. Thanks are also due to Drs. Woei-Min Lin and Jun Li of SAIC for their advice on the neural network and preliminary ship design tools. I would also like to thank Mr. Yann Douyere for his help with MatLab. The work described in this thesis is a subset of the project "Environment for Design of Advanced Marine Vehicles and Operations Research" supported by the Office of Naval Research, Grant No. N00014-02-1-0903.
    ABSTRACT: The term advanced marine vehicle encompasses a broad category of ship designs, typically referring to multihull ships such as catamarans, trimarans and SWATH (small waterplane area twin hull) ships, but also including hovercraft, SES (surface effect ships), hydrofoils, and advanced monohulls. This study develops an early-stage design tool for advanced marine vehicles that provides principal particulars and additional parameters such as fuel capacity and propulsive power based on input ship requirements.