Media Gateway Transfer Analysis

Total pages: 16

File type: PDF, size: 1020 KB

Media Gateway Transfer Analysis
Degree project in technology, first cycle, 15 credits. Stockholm, Sweden 2020.
Aron Hansen Berggren
KTH School of Electrical Engineering and Computer Science

Abstract

There are many protocols available to transfer files, and some are faster than others. With ever increasing network speeds and amounts of media needing to be transferred, the need to minimize the amount of data sent is growing. The number of available protocols which make this possible is also increasing, making it harder and harder for teams to decide which one to implement in their particular case. This thesis aims to give an overview of the most suitable protocols for direct media transfer over short to long distance WAN connections. In order to reach any conclusions, a substantial amount of theoretical work on transfer solutions was carried out so that the comparison only describes usable solutions, with reasoning as to why they make the cut for this type of application. A handful of suitable solutions were then tested on geographically distributed virtual machines in order to give realistic network conditions. The result is that the choice depends on the needs of the organisation, as the commercially available solutions are in general superior to the free ones not only in speed, but also in support and documentation. However, the open source solutions perform very well for being free to use. In order to say which solution is the ultimate one, far more resources would be needed to complete WAN transfers at speeds in excess of 10 Gb/s over many more network conditions. The results quickly showed that the underlying protocol might not be the determining factor for speed. Instead, the use of efficient multi-stream transfers is what actually makes a difference (a minimal multi-stream sketch follows the summaries below).

Sammanfattning (Swedish summary)

There are many protocols available for transferring files, and some are faster than others. With continuously increasing network speeds and larger amounts of media that need to be transferred, the need to minimize the amount of data sent grows as well. The number of available transfer protocols that make this possible also keeps growing, which makes it harder and harder for teams to decide which protocol to implement for their needs. This thesis aims to give an overview of suitable protocols for direct media transfer over both long and short geographical distances over WAN connections. In order to reach conclusions in this report, a large amount of theoretical work on transfer protocols has been carried out to reduce the set of candidates to those that are suitable for this type of transfer. The remaining protocols were then tested between geographically separated virtual machines to reflect a realistic picture of how they perform. The result is mixed; it depends largely on what is required in each case. Commercial solutions perform best, in performance as well as documentation and support. That said, the open source solutions also perform very well, above all with the advantage of costing nothing to use. However, we do not arrive at one ultimate solution, as a much larger amount of resources would be needed to carry out longer experiments at speeds even higher than 10 Gb/s over even more network conditions. The conclusion of the experiments is that the underlying protocol does not matter much for the final speed; what matters instead is choosing a protocol with multi-stream capabilities, which determines how fast you can go.
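The conclusion above, that efficient multi-stream transfer matters more than the underlying protocol, can be illustrated with a short sketch. This is a minimal illustration only, not code from the thesis: it splits a file into byte ranges and uploads them over several parallel HTTPS connections. The host, upload path, chunk size, and Content-Range convention are assumptions; a real gateway or object store defines its own multipart upload API.

```python
# Minimal sketch: parallel ("multi-stream") upload of one file in byte ranges.
# Host, path, chunk size, and header convention are illustrative assumptions.
import os
from concurrent.futures import ThreadPoolExecutor
from http.client import HTTPSConnection

HOST = "media-gateway.example.com"   # hypothetical destination gateway
PATH = "/upload/project.mov"         # hypothetical upload path
CHUNK = 8 * 1024 * 1024              # 8 MiB per request
STREAMS = 4                          # number of parallel TCP connections

def send_range(filename: str, offset: int, length: int) -> int:
    """Upload one byte range on its own connection; returns the HTTP status."""
    with open(filename, "rb") as f:
        f.seek(offset)
        data = f.read(length)
    conn = HTTPSConnection(HOST)
    headers = {"Content-Range": f"bytes {offset}-{offset + len(data) - 1}/*"}
    conn.request("PUT", PATH, body=data, headers=headers)
    status = conn.getresponse().status
    conn.close()
    return status

def multi_stream_upload(filename: str) -> list[int]:
    size = os.path.getsize(filename)
    with ThreadPoolExecutor(max_workers=STREAMS) as pool:
        futures = [pool.submit(send_range, filename, off, CHUNK)
                   for off in range(0, size, CHUNK)]
        return [f.result() for f in futures]

if __name__ == "__main__":
    print(multi_stream_upload("project.mov"))
```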
Authors
Aron Hansen Berggren, [email protected]
Information and Communication Technology, KTH Royal Institute of Technology

Examiner
Peter Sjödin, KTH Royal Institute of Technology, Kista

Supervisor
Markus Hidell, KTH Royal Institute of Technology, Kista

Contents

1 Introduction
  1.1 Background
  1.2 Problem
  1.3 Purpose
  1.4 Commissioned Work
  1.5 Target Audience
  1.6 Ethics and Sustainability
  1.7 Methods
    1.7.1 Literature Studies
    1.7.2 Testing and Benchmarking
  1.8 Delimitations
  1.9 Disposition
2 Background
  2.1 Concepts
    2.1.1 Protocol
    2.1.2 Buckets
    2.1.3 Hybrid Cloud Solution
    2.1.4 Open Systems Interconnection (OSI) Model
    2.1.5 REST and Application Interfaces
    2.1.6 Network Congestion
    2.1.7 User and Kernel space
    2.1.8 TCP
    2.1.9 UDP
    2.1.10 Multistream
    2.1.11 Firewall
    2.1.12 Encryption
    2.1.13 Metadata
  2.2 Iconik Storage Gateways (ISG) current setup
  2.3 Candidates
    2.3.1 HTTPS
    2.3.2 FTPS
    2.3.3 SFTP
    2.3.4 SCP
    2.3.5 WDT
    2.3.6 QUIC
    2.3.7 PA-UDP
    2.3.8 UDT
    2.3.9 UFTP
    2.3.10 FileCatalyst Direct
    2.3.11 TIXstream
    2.3.12 FASP
3 Approach
  3.1 Literature Studies
  3.2 Test Runs
  3.3 Final Candidates
    3.3.1 Experimental
    3.3.2 Speed Matters
    3.3.3 Ports Matters
    3.3.4 Support Matters
    3.3.5 Disqualified Protocols
    3.3.6 Final Protocols Investigated
4 Test Environment
  4.1 Test Suite
    4.1.1 HTTPS
    4.1.2 SCP
    4.1.3 SFTP
    4.1.4 UFTP
    4.1.5 WDT
    4.1.6 TIXStream MFT
    4.1.7 Notes
5 Performance Evaluation
  5.1 Single Stream Results
  5.2 Multistream Results
  5.3 SCP
  5.4 SFTP
  5.5 UFTP
  5.6 WDT
  5.7 TIXStream
6 Discussions
  6.1 Experiment Discussion
  6.2 Data Discussion
7 Conclusions and Further Work
  7.1 Conclusion
  7.2 Future Work
References

Chapter 1 Introduction

1.1 Background

There are many ways to transfer files, and some are faster than others. The Iconik Storage Gateway (ISG) allows users to have local access points to their media and to reach it from any location, even if the remote gateway is on another continent. The gateways can also transcode media to different qualities and resolutions, analyze a set of storages for media, index the media based on their attributes, and create cloud assets corresponding to the media. It takes these orders from the Iconik service running in the cloud [1]. If a user's available ISG does not possess the media the user wants to work on, the user can request that media. This will cause the ISG which does have access to that media to upload it to the cloud, and then trigger the ISG available to the user to download it. This is very inefficient, as it introduces delay and unnecessary workloads on servers and networks. Both customers and product owners at Iconik.io are asking for a better solution, but the resources to perform a field study on the topic, to find which transfer solution(s) are suitable for such a task, have not been available. Having this solution in place would reduce the amount of traffic generated by the gateways over WAN, especially on local networks where the gateways would otherwise still have to go through the cloud instead of staying on the local network. The time difference between sending media directly and using an intermediate cloud is significant, as the entire project needs to be transferred to the intermediate cloud before the destination gateway can download it from there (a simplified timing model is sketched below). The purpose of the ISGs is to index files, synchronize this information online, and make the files available once requested by a remote site, to benefit from the hybrid cloud approach. The requirement of the intermediate cloud as storage medium
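The timing argument above can be made concrete with a back-of-the-envelope model. This is an illustrative assumption, not a calculation from the thesis: the cloud relay is modelled as two sequential phases (full upload, then full download), while the direct transfer is a single path limited by the slower of the two links; real transfers with pipelining or multi-stream overlap would behave differently.

```python
# Simplified timing model (an assumption for illustration, not a measured result):
# relaying via an intermediate cloud uploads the whole project first, then the
# destination downloads it, so the two phases add up; a direct transfer is
# bounded by the slower of the two links.
def relay_time(size_gb: float, up_gbps: float, down_gbps: float) -> float:
    size_gbit = size_gb * 8
    return size_gbit / up_gbps + size_gbit / down_gbps   # sequential phases

def direct_time(size_gb: float, up_gbps: float, down_gbps: float) -> float:
    size_gbit = size_gb * 8
    return size_gbit / min(up_gbps, down_gbps)           # one end-to-end path

if __name__ == "__main__":
    # 100 GB project, 1 Gbit/s at both sites (illustrative numbers only)
    print(f"via cloud: {relay_time(100, 1, 1):.0f} s, direct: {direct_time(100, 1, 1):.0f} s")
```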
Recommended publications
  • Synchronize Documents Between Computers
    Synchronize Documents Between Computers. Cubby will do exactly what you want: sync folders between systems on the internet. It has cloud options as well, but they can be ignored if you prefer. Sync Files Among Multiple Computers (Recoverit). Cloud Storage Showdown: Dropbox vs Google Drive (Zapier). This means keeping files safe and syncing them across all of your devices. How to Sync Between Mac and Windows Documents Folder. FreeFileSync determines the differences between a source and a target. How to synchronize a Teams folder to a local computer. Binfer is a cloudless file transfer service that allows you to sync files between devices without the files being stored or replicated on any 3rd party systems. File sync software: synchronize files between multiple computers.
  • Comparison of FTP v/s FTPS
    www.ijcrt.org © 2018 IJCRT | Volume 6, Issue 1, March 2018 | ISSN: 2320-2882. Comparison of FTP v/s FTPS. Subhasish Das (VIT), Kusumakar Kashyap (VIT). Abstract: The File Transfer Protocol (FTP) is a standard network protocol used to transfer computer files from one host to another host over a TCP-based network, such as the Internet. FTP is built on a client-server architecture and uses separate control and data connections between the client and the server. FTP users may authenticate themselves using a clear-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it. For secure transmission that protects the username and password, and encrypts the content, FTP is often secured with SSL/TLS (FTPS). FTPS helps to encrypt and transfer private information within the constraints of regulatory requirements. Many industries rely on the timely and effective transfer of files to provide services to consumers. For example, the healthcare industry requires exchanging sensitive information between healthcare providers, insurance providers, and eligibility services, to name a few. Regulatory requirements such as the Health Insurance Portability and Accountability Act (HIPAA) provide requirements for the use and disclosure of patients' private healthcare information (PHI). FTP services exchange information between caregivers and insurance companies, but the FTP protocol lacks the level of protection needed to meet regulatory requirements for the safeguarding of PHI. However, encrypting private information over the wire using FTPS helps meet this requirement. Introduction, Working of FTP: The FTP control connection is created after the TCP connection is established. Internal FTP commands are passed over this logical connection based on formatting rules established by the Telnet protocol. Each command sent by the client receives a reply from the server to indicate whether it succeeded or failed.
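As a companion to the excerpt above, here is a minimal FTPS upload sketch using Python's standard-library ftplib.FTP_TLS. The host, credentials, and file names are placeholders, not values from the paper; after login the control connection is TLS-protected, and prot_p() switches the data connection to TLS as well.

```python
# Minimal FTPS upload using the standard library (ftplib.FTP_TLS).
# Host, credentials, and file names below are placeholders.
from ftplib import FTP_TLS

def upload_over_ftps(host: str, user: str, password: str,
                     local_path: str, remote_name: str) -> None:
    ftps = FTP_TLS(host)
    ftps.login(user, password)   # control connection secured via AUTH TLS
    ftps.prot_p()                # protect the data connection with TLS as well
    with open(local_path, "rb") as f:
        ftps.storbinary(f"STOR {remote_name}", f)
    ftps.quit()

if __name__ == "__main__":
    upload_over_ftps("ftps.example.com", "user", "secret", "report.pdf", "report.pdf")
```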
  • Fast and Secure Protocol Seminar Report
    Fast And Secure Protocol Seminar Report. Database encryption: an overview of contemporary challenges and design considerations. The implementation of such security measures between vehicles and Fog nodes will prevent primitive attacks before they reach and exploit the cloud system, and would help in improving overall road safety. Ongoing campaigns should be visible throughout the year. Division Multiplexing, Data Compression. In order to support emerging online activities within the digital information infrastructure, such as commerce, healthcare, entertainment and scientific collaboration, it is increasingly important to verify and protect the digital identity of the individuals involved.
  • Study on Security Levels in Cloud Computing
    International Journal of Advanced Computational Engineering and Networking, ISSN(p): 2320-2106, ISSN(e): 2321-2063, Volume-6, Issue-9, Sep. 2018, http://iraj.in. Study on Security Levels in Cloud Computing. K. Swathi (Research Scholar, Dept. of Computer Science & Engineering, University College of Engineering, OU, Hyderabad), Baddam Indira (Associate Professor, Dept. of Computer Science, Kasturba Degree & PG College, Hyderabad). E-mail: [email protected], [email protected]. Abstract: Organizations' adoption of cloud computing is increasing rapidly, as it offers many potential benefits to small and medium scale firms such as fast deployment, pay-for-use, low costs, scalability, rapid provisioning, rapid elasticity, pervasive network access, greater flexibility, and on-demand security controls. Besides its advantages, cloud computing has its own major disadvantages, which are obstructing the cloud's move into vogue. The major concern in cloud computing is data and its security. Security attacks occur at various levels in cloud computing, and they are becoming very difficult to handle. The levels of cloud computing security include the network level, host level, and application level. This paper demonstrates various possible attacks at each level of cloud computing security. It also helps in understanding the necessary measures required to be taken in order to get rid of the attacks. Keywords: Cloud Computing; Security Levels; Phishing Attack; Malware Injection; FASP; Hypervisor; DNSSEC; Virtual Server; VMware. I. Introduction: The current hot topic in information technology discussions is cloud computing, and the core part of it is its security. Each and every type of service requires different levels of security in order to protect the cloud. The common and main goals of security requirements
  • Ultra High-Speed Transport Technology WHITE PAPER
    Ultra High-Speed Transport Technology: A theoretical study and its engineering practice of Aspera FASP™ over 10 Gbps WANs with leading storage systems. White paper: The Future of Wide Area Data Movement. Table of contents: 1. Introduction; 2. Performance limitations of TCP; 3. New rate-based data transmission technology in FASP (3.1 Rate-based congestion control; 3.2 Decoupled reliability and congestion control; 3.3 Advanced bandwidth sharing and management; 3.4 Performance measurement); 4. Beyond the WAN - the "last foot" to the storage appliance (4.1 Disk/storage rate adaptation); 5. Multi-Gbps WAN transfers performance testing (5.1 Experimental setup; 5.2 Experimental results; 5.3 Best practices learned for maximum performance); 6. Conclusions; Reference. Highlights, FASP™ overview: Aspera's FASP™ transfer technology is an innovative software that eliminates the fundamental bottlenecks of conventional file transfer technologies such as FTP, HTTP, and Windows CIFS, and dramatically speeds transfers over public and private IP networks. The approach achieves perfect throughput efficiency, independent of the latency of the path, and is robust to packet losses. In addition, users have extraordinary control over individual transfer rates and bandwidth sharing, and full visibility into bandwidth utilization. Use cases: enterprise-wide file movement; high-volume content ingest; high-performance content distribution; FTP/SFTP replacement for high-performance transfers. Benefits: maximum speed and predictable delivery times for digital assets of any size, over any distance or network conditions; complete security is built in, including secure end-point authentication, on-the-fly data encryption, and integrity verification.
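The core idea described in this white paper is rate-based (rather than window-based) transmission over UDP, with reliability decoupled from congestion control. The sketch below is not Aspera's FASP implementation; it is only a generic illustration of rate pacing, in which datagrams are sent at a fixed target bit rate regardless of round-trip time, and retransmission of lost blocks (which FASP handles separately) is omitted. Host, port, rate, and payload size are arbitrary example values.

```python
# Illustrative rate-paced UDP sender (not Aspera's FASP): datagrams go out at a
# fixed target bit rate independent of RTT; loss recovery is intentionally omitted.
import socket
import time

def send_at_rate(host: str, port: int, data: bytes,
                 rate_mbps: float, mtu_payload: int = 1400) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = (mtu_payload * 8) / (rate_mbps * 1_000_000)  # seconds per datagram
    next_send = time.monotonic()
    for offset in range(0, len(data), mtu_payload):
        sock.sendto(data[offset:offset + mtu_payload], (host, port))
        next_send += interval
        delay = next_send - time.monotonic()
        if delay > 0:
            time.sleep(delay)            # pace transmissions to the target rate
    sock.close()

if __name__ == "__main__":
    send_at_rate("127.0.0.1", 9000, b"\0" * 10_000_000, rate_mbps=100)
```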
  • Video in the Cloud: TCP Congestion Control Optimization for Cloud Computing
    San Jose State University, SJSU ScholarWorks, Master's Projects, Master's Theses and Graduate Research, Fall 2012. Video in the Cloud: TCP Congestion Control Optimization for Cloud Computing. Rafael Alvarez-Horine, San Jose State University. Follow this and additional works at: https://scholarworks.sjsu.edu/etd_projects (part of the Computer Sciences Commons). Recommended citation: Alvarez-Horine, Rafael, "Video in the Cloud: TCP Congestion Control Optimization for Cloud Computing" (2012). Master's Projects. 284. DOI: https://doi.org/10.31979/etd.mwak-8awt, https://scholarworks.sjsu.edu/etd_projects/284. This Master's Project is brought to you for free and open access by the Master's Theses and Graduate Research at SJSU ScholarWorks. It has been accepted for inclusion in Master's Projects by an authorized administrator of SJSU ScholarWorks. For more information, please contact [email protected]. A Writing Project Presented to the Faculty of the Department of Computer Science, San José State University, in Partial Fulfillment of the Requirements for the Degree Master of Science, by Rafael Alvarez-Horine, November 2012. © 2012 Rafael Alvarez-Horine, all rights reserved. Approved for the Department of Computer Science, San José State University, November 2012: Dr. Melody Moh, Dr. Sami Khuri, Dr. Chris Pollett (Department of Computer Science). Abstract: With the popularity of video streaming, a new type of media player has been created called the adaptive video player that adjusts video quality based on available network bandwidth.
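Projects like this typically experiment with different TCP congestion control algorithms. On Linux the algorithm can be selected per socket through the TCP_CONGESTION socket option; the sketch below is a generic illustration, not code from the SJSU project, and it assumes a Linux kernel where the named algorithm (here "cubic") is available.

```python
# Minimal sketch: choosing a TCP congestion control algorithm per socket.
# Linux-specific (socket.TCP_CONGESTION); "cubic" is only an example name and
# must be supported by the running kernel.
import socket

def connect_with_cc(host: str, port: int, algorithm: str = "cubic") -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algorithm.encode())
    s.connect((host, port))
    return s

if __name__ == "__main__":
    conn = connect_with_cc("example.com", 80)
    # Read back the algorithm actually in use for this connection.
    print(conn.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
    conn.close()
```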
  • Open Online Meeting
    Open online meeting. Project report, 2021.
    Contents: Objectives and background (background, current situation and future needs; purpose and aim of the project; implementation: preliminary study; functionalities); Results of the study (Group 1: web-conferencing and messaging solutions; Group 2: online file storage, management and collaboration platforms; Group 3: visual online collaboration and project management solutions; Group 4: online voting solutions); Solution example based on the study results (selection criteria; description of the example solution); Next steps.
    Background, current situation and future needs: Municipalities in Finland have voiced a need to map out open source based alternatives for well-known proprietary online conferencing systems provided by e.g. Google and Microsoft for the following purposes:
    ➢ Online meeting (preferably web-based, no installation),
    ➢ Secure file-sharing and collaborative use of documents,
    ➢ Chat and messaging,
    ➢ Solution that enables online collaboration (easy to facilitate),
    ➢ Cloud services,
    ➢ Online voting (preferably integrated into the online meeting tool with a strong identification method that would enable secret ballot voting).
    There are several open source based solutions and tools available for each category, but a coherent whole is still missing.
    Purpose and aim of the project: The purpose of the first phase of the project was to conduct a preliminary study on how individual open source based solutions and tools could be combined into a comprehensive joint solution, and to research the technical compatibility between the different OS solutions. The project aims to create a comprehensive example solution that is based on open source components.
  • File Share Options High Level Overview
    File Share Options: high level overview. mdlug 2020. Pat Baker: Information Assurance (CyberSecurity), Intelligence Analyst, Philosophy. OtakuSystems LLC, otakusystems.com, twitter: @otakusystems, [email protected]. Technologist, futurist, philosopher, geek; seeker of wisdom and knowledge. Disclaimer: Not responsible for any damage done to you, your friends, your accounts, your pet goldfish, etc. All information is for educational or general knowledge purposes. Information held within may or may not be legal in your country, state or business. If it's not legal, then should you do it? Issues: With people moving from place to place and machine to machine (including phone, tablet, etc.), getting to your files or keeping them up to date across devices can be difficult. You also want to make sure the files are stored centrally and securely, keeping others who do not need access from having it, such as governments, businesses or other people. Some file share options covered. From companies: Dropbox, Google Drive, Microsoft, SpiderOak, FTP/SFTP. Open source: SpiderOak, Syncthing, Nextcloud, btsync, Samba, SFTP/FTP. Hardware: USB thumb drives, HDD. Some issues: Getting to the data from multiple machines and locations. Keeping the data secure in transport and in storage. Who owns the data on the servers; if the company goes belly-up, can you get it? Do you really know what the company does? Ease of use on multiple devices, and the number of devices that can be used. Having securely and centrally located data that is easy to replicate to other machines if needed. Source of truth (which data is the most current). Others? The big players (owned by big companies): Dropbox, iCloud, Google Drive, Microsoft Drive, Amazon Cloud Storage. Dropbox: central location of file and folder storage; from that location data is transferred to external devices.
  • Evaluation of Communication Protocols Between Vehicle and Server: Evaluation of Data Transmission Overhead by Communication Protocols
    Degree project in information technology, second cycle, Stockholm, Sweden 2016. Evaluation of communication protocols between vehicle and server: evaluation of data transmission overhead by communication protocols. Tomas Wickman, 2016-06-29. Master's thesis. Examiner: Gerald Q. Maguire Jr. Academic adviser: Anders Västberg. KTH Royal Institute of Technology, School of Information and Communication Technology (ICT), Department of Communication Systems, SE-100 44 Stockholm, Sweden. Abstract: This thesis project has studied a number of protocols that could be used to communicate between a vehicle and a remote server in the context of Scania's connected services. While there are many factors that are of interest to Scania (such as response time, transmission speed, and amount of data overhead for each message), this thesis will evaluate each protocol in terms of how much data overhead is introduced and how packet loss affects this overhead. The thesis begins by giving an overview of how a number of alternative protocols work and what they offer with regards to Scania's needs. Next these protocols are compared based on previous studies and each protocol's specifications to determine which protocol would be the best choice for realizing Scania's connected services. Finally, a test framework was set up using a virtual environment to simulate different networking conditions. Each of the candidate protocols was deployed in this environment and set up to send sample data. The behaviour of each protocol during these tests served as the basis for the analysis of all of these protocols.
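The comparison described in this excerpt comes down to an overhead metric: how much of what goes on the wire is protocol framing rather than payload. A tiny sketch of such a metric follows; the sample figures are placeholders, not measurements from the thesis.

```python
# Hypothetical overhead metric of the kind such a comparison might use:
# the share of transmitted bytes that is protocol framing rather than payload.
# The sample figures below are placeholders, not results from the thesis.
def overhead_fraction(payload_bytes: int, wire_bytes: int) -> float:
    return (wire_bytes - payload_bytes) / wire_bytes

if __name__ == "__main__":
    samples = {
        "protocol A": (1_000, 1_180),   # (payload bytes, bytes on wire) - examples
        "protocol B": (1_000, 1_060),
    }
    for name, (payload, wire) in samples.items():
        print(f"{name}: {overhead_fraction(payload, wire):.1%} overhead")
```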
  • FAQ Release V1
    FAQ, Release v1, The Syncthing Authors, Jul 28, 2020. Contents:
    1. What is Syncthing?
    2. Is it "syncthing", "Syncthing" or "SyncThing"?
    3. How does Syncthing differ from BitTorrent/Resilio Sync?
    4. What things are synced?
    5. Is synchronization fast?
    6. Why is the sync so slow?
    7. Why does it use so much CPU?
    8. Should I keep my device IDs secret?
    9. What if there is a conflict?
    10. How do I serve a folder from a read only filesystem?
    11. I really hate the .stfolder directory, can I remove it?
    12. Am I able to nest shared folders in Syncthing?
    13. How do I rename/move a synced folder?
    14. How do I configure multiple users on a single machine?
    15. Does Syncthing support syncing between folders on the same system?
    16. When I do have two distinct Syncthing-managed folders on two hosts, how does Syncthing handle moving files between them?
    17. Is Syncthing my ideal backup application?
    18. Why is there no iOS client?
    19. How can I exclude files with brackets ([]) in the name?
    20. Why is the setup more complicated than BitTorrent/Resilio Sync?
    21. How do I access the web GUI from another computer?
    22. Why do I get "Host check error" in the GUI/API?
    23. My Syncthing database is corrupt
    24. I don't like the GUI or the theme. Can it be changed?
    25. Why do I see Syncthing twice in task manager?
    26. Where do Syncthing logs go to?
    27. How can I view the history of changes?
    28. Does the audit log contain every change?
    29. How do I upgrade Syncthing?
    30. Where do I find the latest release?
    31. How do I run Syncthing as a daemon process on Linux?
    32. How do I increase the inotify limit to get my filesystem watcher to work?
    33. How do I reset the GUI password?
    Chapter One: What is Syncthing? Syncthing is an application that lets you synchronize your files across multiple devices.
  • Syncthing User Testing
    Syncthing user testing. Vladyslav Chaban, chabavla(at)fel.cvut.cz. Table of contents: 1. Abstract; 2. Goal; 3. Target group; 4. Pre-screener; 5. Pre-test questionnaire; 6. Testing method (6.1 Platform; 6.2 Testing setup; 6.3 Tasks; 6.4 Post-test questionnaire); 7. Data
  • FASP-Fast and Secure Protocol.pdf
    FASP - Fast And Secure Protocol. A seminar report submitted by Atul M in partial fulfillment for the award of the degree of B-Tech in Computer Science & Engineering, School of Engineering, Cochin University of Science & Technology, Kochi-682022, July 2010. Division of Computer Engineering, School of Engineering, Cochin University of Science & Technology, Kochi-682022. Certificate: Certified that this is a bonafide record of the seminar work titled "FASP - Fast And Secure Protocol", done by Atul M of VII semester Computer Science & Engineering in the year 2010, in partial fulfillment of the requirements for the award of the Degree of Bachelor of Technology in Computer Science & Engineering of Cochin University of Science & Technology. Dr. David Peter S, Head of the Division; Dr. Sheena Mathew, Seminar Guide. Acknowledgement: I thank GOD almighty for guiding me throughout the seminar. I would like to thank all those who have contributed to the completion of the seminar and helped me with valuable suggestions for improvement. I am extremely grateful to Dr. David Peter, HOD, Division of Computer Science, for providing me with the best facilities and atmosphere for the creative work, guidance and encouragement. I would like to thank my coordinator, Mr. Sudheep Elayidom, Sr. Lecturer, Division of Computer Science, and my guide Dr. Sheena Mathew, Reader, Division of Computer Science, SOE, for all help and support extended to me. I thank all the staff members of my college and my friends for extending their cooperation during my seminar. Above all I would like to thank my parents, without whose blessings I would not have been able to accomplish my goal.