Volume 8 Issue 3
INFOLINE

EDITORIAL BOARD

EXECUTIVE COMMITTEE
Chief Patron: Thiru A.K.Ilango, Correspondent
Patron: Dr. N.Raman, M.Com., M.B.A., M.Phil., Ph.D., Principal
Editor in Chief: Mr. S.Muruganantham, M.Sc., M.Phil., Head of the Department

STAFF ADVISOR
Ms. P.Kalarani, M.Sc., M.C.A., M.Phil., Assistant Professor, Department of Computer Technology and Information Technology

STAFF EDITOR
Ms. C.Indrani, M.C.A., M.Phil., Assistant Professor, Department of Computer Technology and Information Technology

STUDENT EDITORS
B.Mano Pretha, III B.Sc. (Computer Technology)
K.Nishanthan, III B.Sc. (Computer Technology)
P.Deepika Rani, III B.Sc. (Information Technology)
R.Pradeep Rajan, III B.Sc. (Information Technology)
D.Harini, II B.Sc. (Computer Technology)
V.A.Jayendiran, II B.Sc. (Computer Technology)
S.Karunya, II B.Sc. (Information Technology)
E.Mohanraj, II B.Sc. (Information Technology)
S.Aiswarya, I B.Sc. (Computer Technology)
A.Tamilhariharan, I B.Sc. (Computer Technology)
P.Vijaya shree, I B.Sc. (Information Technology)
S.Prakash, I B.Sc. (Information Technology)

CONTENTS
Nano Air Vehicle
Reinforcement Learning
Graphics Processing Unit (GPU)
The History of C Programming
Fog Computing
Big Data Analytics
Common Network Problems
Bridging the Human-Computer Interaction
Adaptive Security Architecture
Digital Platform
Mesh App and Service Architecture
Blockchain Technology
Digital Twin
Puzzles
Riddles

NANO AIR VEHICLE (NAV)

The AeroVironment Nano Hummingbird, or Nano Air Vehicle (NAV), is a tiny remote-controlled aircraft built to resemble and fly like a hummingbird, developed in the United States by AeroVironment, Inc. to specifications provided by the Defense Advanced Research Projects Agency (DARPA). The Hummingbird is equipped with a small video camera for surveillance and reconnaissance purposes and, for now, operates in the air for up to 11 minutes. It can fly outdoors, or enter a doorway to investigate indoor environments.

DARPA has contributed $4 million to AeroVironment since 2006 to create a prototype "hummingbird-like" aircraft for the Nano Air Vehicle (NAV) program. The result, called the Nano Hummingbird, can fly at 11 miles per hour (18 km/h) and move in three axes of motion. The aircraft can climb and descend vertically; fly sideways left and right, forward and backward; rotate clockwise and counter-clockwise; and hover in mid-air. The artificial hummingbird uses its flapping wings for propulsion and attitude control. It has a body shaped like a real hummingbird, a wingspan of 6.3 inches (160 mm), and a total weight of 0.67 ounces (19 g), less than an AA battery. This includes the systems required for flight: batteries, motors, and communications systems, as well as the video camera payload.

Technical goals

DARPA established flight test milestones for the Hummingbird to achieve, and the finished prototype met all of them, even exceeding some of these objectives:

- Demonstrate precision hover flight within a virtual two-meter diameter sphere for one minute.
- Demonstrate hover stability in a wind gust flight, which required the aircraft to hover and tolerate a two-meter per second (five miles per hour) wind gust from the side without drifting downwind more than one meter.
- Demonstrate a continuous hover endurance of eight minutes with no external power source.
- Fly and demonstrate controlled transition flight from hover to 11 miles per hour fast forward flight and back to hover flight.
- Demonstrate flying from outdoors to indoors and back outdoors through a normal-size doorway.
- Demonstrate flying indoors "heads-down", where the pilot operates the aircraft looking only at the live video image stream from the aircraft, without looking at or hearing the aircraft directly.
- Fly the aircraft in hover and fast forward flight with a bird-shaped body and bird-shaped wings.

The device is bigger and heavier than a typical real hummingbird, but smaller and lighter than the largest hummingbird varieties. It could be deployed to perform reconnaissance and surveillance in urban environments or on battlefields, and might perch on windowsills or power lines, or enter buildings to observe its surroundings, relaying camera views back to its operator. According to DARPA, the Nano Air Vehicle's configuration will provide the warfighter with unprecedented capability for urban mission operations.

KALIKKUMAR V
III B.Sc. (Computer Technology)

REINFORCEMENT LEARNING

Definition

Reinforcement Learning is a type of Machine Learning, and thereby also a branch of Artificial Intelligence. It allows machines and software agents to automatically determine the ideal behaviour within a specific context, in order to maximize their performance. Simple reward feedback is required for the agent to learn its behaviour; this is known as the reinforcement signal.

There are many different algorithms that tackle this issue. As a matter of fact, Reinforcement Learning is defined by a specific type of problem, and all its solutions are classed as Reinforcement Learning algorithms. In the problem, an agent is supposed to decide the best action to select based on its current state. When this step is repeated, the problem is known as a Markov Decision Process.

Why Reinforcement Learning?

Motivation

Reinforcement Learning allows the machine or software agent to learn its behaviour based on feedback from the environment. This behaviour can be learnt once and for all, or keep on adapting as time goes by. If the problem is modelled with care, some Reinforcement Learning algorithms can converge to the global optimum; this is the ideal behaviour that maximises the reward.

This automated learning scheme implies that there is little need for a human expert who knows about the domain of application. Much less time will be spent designing a solution, since there is no need for hand-crafting complex sets of rules as with Expert Systems; all that is required is someone familiar with Reinforcement Learning.
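The agent-state-action-reward cycle described above can be sketched as a simple loop. This is only a hypothetical illustration: the chain environment, its states, and its rewards below are invented for the example and do not come from the article.

```python
import random

# A toy Markov Decision Process: the agent walks on positions 0..4
# and receives a reward of +1 only when it reaches position 4.
def step(state, action):
    """Environment: apply an action (-1 or +1), return (next_state, reward)."""
    next_state = min(max(state + action, 0), 4)
    reward = 1 if next_state == 4 else 0
    return next_state, reward

state = 0
total_reward = 0
for _ in range(20):                      # one episode of 20 decisions
    action = random.choice([-1, +1])     # a (so far) random behaviour
    state, reward = step(state, action)  # feedback from the environment
    total_reward += reward               # the reinforcement signal accumulates

print(total_reward)
```

Learning, in this framing, means replacing the random choice with a policy that maximises the accumulated reward.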
How does Reinforcement Learning work?

Technology

As mentioned, there are many different solutions to the problem. The most popular, however, allow the software agent to select an action that will maximise the reward in the long term (and not only in the immediate future). Such algorithms are known to have an infinite horizon.

In practice, this is done by learning to estimate the value of a particular state. This estimate is adjusted over time by propagating part of the next state's reward. If all the states and all the actions are tried a sufficient number of times, this allows an optimal policy to be defined: the action which maximises the value of the next state is picked.

When does Reinforcement Learning fail?

Limitations

There are many challenges in current Reinforcement Learning research. Firstly, it is often too memory-expensive to store values of each state, since the problems can be pretty complex. Solving this involves looking into value approximation techniques, such as Decision Trees or Neural Networks. There are many consequences of introducing these imperfect value estimations, and research tries to minimise their impact on the quality of the solution.

Moreover, problems are also generally very modular; similar behaviours reappear often, and modularity can be introduced to avoid learning everything all over again. Hierarchical approaches are commonplace for this, but doing this automatically is proving a challenge. Finally, due to limited perception, it is often impossible to fully determine the current state. This also affects the performance of the algorithm, and much work has been done to compensate for this Perceptual Aliasing.
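The value-estimation scheme described above, where part of the next state's reward is propagated back into the current estimate, can be sketched with a tabular Q-learning update. The toy chain environment, the learning rate, and the discount factor below are assumptions made for the sketch, not details from the article.

```python
import random

random.seed(0)                   # fixed seed so the sketch is reproducible
ACTIONS = (-1, +1)

# Toy chain: states 0..4; reward +1 only for reaching the goal state 4.
def step(state, action):
    nxt = min(max(state + action, 0), 4)
    return nxt, (1 if nxt == 4 else 0)

# Q[state][action]: the learnt estimate of long-term value.
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(5)}
alpha, gamma = 0.5, 0.9          # learning rate and discount factor (assumed)

for episode in range(500):
    s = 0
    while s != 4:
        a = random.choice(ACTIONS)   # try all actions a sufficient number of times
        nxt, r = step(s, a)
        # Propagate part of the next state's value back into this estimate.
        Q[s][a] += alpha * (r + gamma * max(Q[nxt].values()) - Q[s][a])
        s = nxt

# The policy picks, in each state, the action with the highest estimated value.
policy = {s: max(Q[s], key=Q[s].get) for s in range(4)}
print(policy)
```

After enough episodes the estimates settle and the extracted policy always moves toward the goal, which is the optimal behaviour for this toy problem.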
Who uses Reinforcement Learning?

Applications

The possible applications of Reinforcement Learning are abundant, due to the genericness of the problem specification. As a matter of fact, a very large number of problems in Artificial Intelligence can fundamentally be mapped to a decision process. This is a distinct advantage, since the same theory can be applied to many different domain-specific problems with little effort.

In practice, this ranges from controlling robotic arms to find the most efficient motor combination, to robot navigation, where collision-avoidance behaviour can be learnt by negative feedback from bumping into obstacles. Logic games are also well suited to Reinforcement Learning, as they are traditionally defined as a sequence of decisions: games such as poker, backgammon, othello and chess have been tackled more or less successfully.

S.AISWARYA
I B.Sc. (Computer Technology)

GRAPHICS PROCESSING UNIT (GPU)

A Graphics Processing Unit (GPU) is a single-chip processor primarily used to manage and boost the performance of video and graphics. GPU features include:

- 2-D or 3-D graphics
- Digital output to flat panel display monitors
- Texture mapping
- Application support for high-intensity graphics software such as AutoCAD
- Rendering polygons
- Support for YUV color space
- Hardware overlays
- MPEG decoding

These features are designed to lessen the work of the CPU and produce faster video and graphics. A GPU is not only used in a PC on a video card or motherboard; it is also used in mobile phones, display adapters, workstations and game consoles.

The first GPU was developed by NVidia in 1999 and called the GeForce 256. This GPU model could process 10 million polygons per second and had more than 22 million transistors. The GeForce 256 was a single-chip processor with integrated transform, drawing and BitBLT support, lighting effects, triangle setup/clipping, and rendering engines.

GPUs became more popular as the demand for graphic applications increased. Eventually, they became not just an enhancement but a necessity for optimum performance of a PC. Specialized logic chips now allow fast graphic and video implementations. Generally the GPU is connected to the CPU and is completely separate from the motherboard. The random access memory (RAM) is connected through

THE HISTORY OF THE C PROGRAMMING LANGUAGE

C is one of the most important programming languages in the history of computing.