Energy Consumption Prediction Using Machine Learning: A Review
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Data Analysis in Science and Engineering: Russian Edition of Data-Driven Science and Engineering
Steven L. Brunton, J. Nathan Kutz. Data Analysis in Science and Engineering (Russian edition of Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control). Translated from the English by A. A. Slinkin. Moscow: DMK Press, 2021. 542 pp., illustrated. ISBN 978-5-97060-910-1. UDC 001.5, 004.6; BBK 20, 32.97.
Discoveries made through data analysis have revolutionized the modeling, prediction, and control of complex systems. This book brings together material from machine learning, engineering mathematics, and mathematical physics to show how the modeling and control of dynamical systems combine with modern methods of data science. It describes many advances in scientific computing that make it possible to apply data-driven methods to the study of diverse complex systems, such as turbulence, brain science, climate science, epidemiology, finance, robotics, and autonomous systems. The book is intended for interested senior undergraduate and first-year graduate students in engineering and the physical sciences; it covers a broad range of topics and methods, from introductory material to recent work.
Russian language edition copyright © 2021 by DMK Press. All rights reserved. No part of this book may be reproduced in any form or by any means without the written permission of the copyright holders. ISBN 9781108422093 (English) © Steven L.
-
Application of Reinforcement Learning Techniques to the Development of Flight Control Systems for Miniature UAV Rotorcraft
MACHINE LEARNING FOR INTELLIGENT CONTROL: APPLICATION OF REINFORCEMENT LEARNING TECHNIQUES TO THE DEVELOPMENT OF FLIGHT CONTROL SYSTEMS FOR MINIATURE UAV ROTORCRAFT
A thesis submitted in partial fulfilment of the requirements for the Degree of Master of Engineering in Mechanical Engineering in the University of Canterbury, by Edwin Hayes, University of Canterbury, 2013.
Abstract
This thesis investigates the possibility of using reinforcement learning (RL) techniques to create a flight controller for a quadrotor Micro Aerial Vehicle (MAV). A capable flight control system is a core requirement of any unmanned aerial vehicle. The challenging and diverse applications in which MAVs are destined to be used mean that considerable time and effort need to be put into designing and commissioning suitable flight controllers. It is proposed that reinforcement learning, a subset of machine learning, could be used to address some of the practical difficulties.
While much research has delved into RL in unmanned aerial vehicle applications, this work has tended to ignore low-level motion control, or been concerned only with off-line learning regimes. This thesis addresses an area in which accessible information is scarce: the performance of RL when used for on-policy motion control.
Trying out a candidate algorithm on a real MAV is a simple but expensive proposition. In place of such an approach, this research details the development of a suitable simulator environment in which a prototype controller might be evaluated. The inquiry then proposes a possible RL-based control system, utilising the Q-learning algorithm, with an adaptive RBF network providing function approximation (an illustrative sketch of this kind of learner appears after this list). The operation of this prototypical control system is then tested in detail, to determine both the absolute level of performance which can be expected, and the effect which tuning critical parameters of the algorithm has on the functioning of the controller.
-
Intelligent Flapping Wing Control: Reinforcement Learning for the DelFly (M.W. Goedhart)
Intelligent Flapping Wing Control: Reinforcement Learning for the DelFly
M.W. Goedhart, Technische Universiteit Delft
Master of Science Thesis, for obtaining the degree of Master of Science in Aerospace Engineering at Delft University of Technology, June 7, 2017. Faculty of Aerospace Engineering, Department of Control and Simulation.
Readers: Dr. ir. E. van Kampen, Dr. ir. C. C. de Visser, Dipl.-Ing. S. F. Armanini, Dr. O. A. Sharpanskykh, Dr. ir. Q. P. Chu
Preface
This report concludes the work conducted for my thesis. In the graduation phase at the Department of Control and Simulation at the Faculty of Aerospace Engineering of Delft University of Technology, a phase of academic research is formalized in the course AE5310 Thesis Control and Operations. This report is intended to show the knowledge gained during the thesis phase.
The main scientific contributions of this project are described in a scientific paper, which is included in this thesis and is intended as a conference paper. It is aimed at readers with a background in intelligent aerospace control, and is therefore written at a more difficult level. It should be readable as a standalone document.
For readers without previous knowledge of Reinforcement Learning control, a literature survey is summarized in this thesis. This should be sufficient to understand the paper. Such readers are advised to read Chapters 6 to 9 before continuing to the paper.
I would like to thank dr. Erik-Jan van Kampen, dr.
-
"Machine Learning in a Dynamic World" (Invited Panel Discussion), M. M. Kokar, Organizer
P. J. Antsaklis, "Learning in Control," in Panel Discussion on "Machine Learning in a Dynamic World" (Invited), M. M. Kokar, Organizer, Proc. of the 3rd IEEE Intern. Symposium on Intelligent Control, pp. 500-507, Arlington, VA, Aug. 24-26, 1988.
MACHINE LEARNING IN A DYNAMIC WORLD
Panel Discussion
Mieczyslaw M. Kokar, Panel Organizer and Editor, Northeastern University, Boston, Massachusetts
Panelists: P. J. Antsaklis, University of Notre Dame; K. A. DeJong, George Mason University; A. L. Meyrowitz, Office of Naval Research; A. Meystel, Drexel University; R. S. Michalski, George Mason University; R. S. Sutton, GTE Laboratories
Machine Learning was established as a research discipline in the 1970's and experienced a growth expansion in the 1980's. One of the roots of machine learning research was in Cybernetic Systems and Adaptive Control. Machine Learning has been significantly influenced by Artificial Intelligence, Cognitive Science, Computer Science, and other disciplines; Machine Learning has developed its own research paradigms, methodologies, and a set of research objectives different from those of control systems. In the meantime, a new field of Intelligent Control has emerged. [...]
[...]lems while ignoring common-sense knowledge. How much progress in controlling real-world systems can be made without taking into account all kinds of imprecise information?
In the following we present the views of the panelists. This presentation is based on the written material submitted to the panel organizers by the panelists. The presentation has been divided into four sections. The first section is devoted to the trends in the
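The Canterbury thesis listed above pairs the Q-learning algorithm with a radial-basis-function (RBF) network for function approximation. As a point of reference only, the sketch below shows what such a learner can look like in Python: a toy one-dimensional tracking plant, a fixed grid of Gaussian RBF features, and one linear Q-function per discrete action. The environment, action set, RBF centres, and all parameter values are assumptions made for illustration; this is not the thesis's quadrotor simulator or its adaptive RBF controller.

```python
import numpy as np

# Minimal sketch: Q-learning with fixed Gaussian RBF features on a toy
# 1-D setpoint-tracking plant. All names and numbers below are assumed
# for illustration, not taken from the thesis.

rng = np.random.default_rng(0)

ACTIONS = np.array([-1.0, 0.0, 1.0])          # assumed discrete control inputs

# Fixed RBF centres over the (error, error-rate) state space (assumed grid).
CENTERS = np.array([[e, de] for e in np.linspace(-2, 2, 7)
                            for de in np.linspace(-2, 2, 7)])
WIDTH = 0.5

def rbf_features(state):
    """Normalised Gaussian RBF activations for a 2-D state [error, error_rate]."""
    d2 = np.sum((CENTERS - state) ** 2, axis=1)
    phi = np.exp(-d2 / (2 * WIDTH ** 2))
    return phi / (phi.sum() + 1e-8)

def step(state, action, dt=0.05):
    """Toy double-integrator plant: the action accelerates the tracking error."""
    err, derr = state
    derr = derr + action * dt
    err = err + derr * dt
    reward = -(err ** 2 + 0.1 * derr ** 2)    # quadratic tracking cost
    return np.array([err, derr]), reward

# One linear weight vector per action: Q(s, a) = weights[a] . phi(s)
weights = np.zeros((len(ACTIONS), len(CENTERS)))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(200):
    state = rng.uniform(-1.5, 1.5, size=2)    # random initial error state
    for t in range(200):
        phi = rbf_features(state)
        q = weights @ phi
        # epsilon-greedy action selection
        a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(np.argmax(q))

        next_state, reward = step(state, ACTIONS[a])
        phi_next = rbf_features(next_state)
        td_target = reward + gamma * np.max(weights @ phi_next)
        weights[a] += alpha * (td_target - q[a]) * phi   # Q-learning update
        state = next_state

probe = np.array([1.0, 0.0])                  # positive error, zero error rate
print("Greedy action at error=1, error_rate=0:",
      ACTIONS[int(np.argmax(weights @ rbf_features(probe)))])
```

After a few hundred episodes the greedy action at a positive tracking error should tend toward the correcting input; the thesis itself evaluates a far richer, adaptive version of this idea, tuned and tested inside a purpose-built MAV simulator.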