September 2014 THE COMPLETE MAGAZINE ON OPEN SOURCE

OPEN SOURCE FOR YOU

VOLUME: 02  ISSUE: 12

Email: [email protected]

Contents

Developers
26  Improve Python Code by Using a Profiler
30  Understanding the Document Object Model (DOM) in Mozilla
35  Experimenting with More Functions in Haskell
40  Introducing AngularJS
45  Use Bugzilla to Manage Defects in Software
48  An Introduction to Device Drivers in the Kernel
52  Creating Dynamic Web Portals Using Joomla and WordPress
56  Compile a GPIO Control Application and Test It On the Raspberry Pi

Admin
59  Use Pound on RHEL to Balance the Load on Web Servers
63  Why We Need to Handle Bounced Emails
67  Boost the Performance of CloudStack with Varnish
74  Use Wireshark to Detect ARP Spoofing
77  Make Your Own PBX with Asterisk

Open Gurus
80  How to Make Your USB Boot with Multiple ISOs
86  Contiki OS: Connecting Microcontrollers to the Internet of Things

Regular Features
08  You Said It...
09  Offers of the Month
10  New Products
13  FOSSBytes
25  Editorial Calendar
100 Tips & Tricks
105 FOSS Jobs

4 | September 2014 | OPEN SOURCE For You | www.OpenSourceForU.com

YOU SAID IT

Online access to old issues
I want all the issues of OSFY from 2011, right up to the current issue. How can I get these online, and what would be the cost?
—C Kiran Kumar; [email protected]

ED: It feels great to know that we have such valuable readers. Thank you, Kiran, for bringing this request to us. You can avail all the back issues of Open Source For You in e-zine format from www.ezines.efyindia.com

Annual subscription
I've bought the July 2014 issue of OSFY and I loved it. I want the latest version of Ubuntu 14.04 LTS and the programming tools (JDK and other tools for C, C++, Java and Python). Also, how can I subscribe to your magazine for one year, and can I get it at my village (address enclosed)?
—Parveen Kumar; [email protected]

ED: Thank you for the compliments. We're glad to know that you enjoy reading our magazine. We will definitely look into your request. Also, I am forwarding your query regarding subscribing to the magazine to the concerned team. Please feel free to get back to us in case of any other suggestions or questions. We're always happy to help.

Request for a sample issue
I am with a company called Relia-Tech, which is a brick-and-mortar computer service company. We are interested in subscribing to your magazine. Would you be willing to send us a magazine to check out before we commit to anything?
—Lindsay Steele; [email protected]

ED: Thanks for your mail. You can visit our website www.ezine.lfymag.com and access our sample issue.

Availability of OSFY in your city
I want to purchase Open Source For You for the library in my organisation, but I am unable to find copies in the city I live in (Jabalpur in Madhya Pradesh). I cannot go in for the subscription as well. Please give me the name of the distributor or dealer in my city through whom I can purchase the magazine.
—Gaurav Singh; [email protected]

ED: We have a website where you can locate the nearest store in your city that supplies Open Source For You. Do log on to http://ezine.lfymag.com/listwholeseller.asp. You will find there are two dealers of the magazine in your city: Sahu News Agency (Sanjay Sahu, Ph: 09301201157) and Janta News Agency (Harish, Ph: 09039675118). They can ensure regular supply of the magazine to your organisation.

A ‘thank-you’ and a request for more help
I began reading your magazine in my college library and thought of offering some feedback. I was facing a problem with Oracle VirtualBox, but after reading an article on the topic in OSFY, the task became so easy. Thanks for the wonderful help. I am also trying to set up my local (LAN-based) Git server. I have no idea how to set it up. I have worked a little with GitHub. I do wish your magazine would feature content on this topic in upcoming editions.
—Abhinav Ambure; [email protected]

ED: Thank you so much for your valuable feedback. We really value our readers and are glad that our content proves helpful to them. We will surely look into your request and try to include the topic you have asked for in upcoming issues. Keep reading OSFY and continue sending us your feedback!

Share Your Feedback
Please send your comments or suggestions to: The Editor, Open Source For You, D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020; Phone: 011-26810601/02/03; Fax: 011-26817563; Email: [email protected]


FOSSBYTES Powered by www.efytimes.com

Ubuntu 14.04.1 LTS is out
Ubuntu 14.04 LTS has been around for quite some time now, and most people must have upgraded to it. Another, smaller update is now ready: 14.04.1. Canonical has announced that this Ubuntu update fixes many bugs and includes security updates. There is also a list of bugs and other updates in Ubuntu 14.04.1 that you might want to have a look at, in order to see the scope of this update. If you haven't upgraded to 14.04.1 yet, do so as soon as possible. It is a worthy upgrade if you use an older version of Ubuntu.

VLC 2.1.5 has been released
VideoLAN has announced the release of the final update in the 2.1.x series of its popular open source, cross-platform media player and streaming media server: the VLC media player. VLC 2.1.5 is now available for download and installation on the Windows, Mac and Linux operating systems. Notably, the next big release for the VLC media player will be from the 2.2.x branch. A careful look at the change log reveals that although the VLC 2.1.5 update has been released across multiple platforms, the most noticeable improvements are for OS X users; others could consider it a minor update. For OS X users, VLC 2.1.5 brings additional stability to the Qtsound capture module as well as improved support for Retina displays. Other notable changes include compilation fixes for the OS/2 operating system. Also, MP3 file conversions will no longer be renamed '.raw' under the Qt interface following the update. A few decoder fixes will benefit DxVA2 sample decoding, resilience of the MAD decoder against broken MP3 streams, and PGS alignment in MKV files. In terms of security, the new release comes with fixes for GnuTLS and libpng as well. One should remember that VLC is a portable, free and open source, cross-platform media player and streaming media server written by the VideoLAN project. It supports many audio and video compression methods and file formats, and comes with a large number of free decoding and encoding libraries, thereby eliminating the need to find or calibrate proprietary plugins.

Android Device Manager makes it easier to search for lost phones!
Google has released an update to Android Device Manager that helps users secure their devices better. The latest version is 1.3.8. It lets users add a phone number to the remote locking screen, and the 'lock screen' password can also be changed. An optional message can be set up as well. If a phone number is added, a big green button will appear on the lock screen saying 'Call owner', so if the lost phone is found by someone, the owner can be easily contacted. Earlier, only a message could be added by users. The call-back number can be set up through the Android Device Manager app as well as the Web interface, if another Android device is not at hand. Both the message and call-back features are optional, though. But it is highly recommended that these features are used so that a lost phone can be easily found.

Ubuntu's Amazon shopping feature complies with UK Data Protection Act
The independent body investigating the implementation of Ubuntu's Unity Shopping Lens feature and its compliance with the UK Data Protection Act (DPA) of 1998 has found no instances of Canonical being in breach of the act. Ubuntu's controversial 'Amazon shopping' feature has been found to be compliant with relevant data protection and privacy laws in the UK, something that was checked in response to a complaint filed by blogger Luis de Sousa last year. Notably, the feature sends queries made in the Dash to an intermediary Canonical server, which forwards them to Amazon. The e-commerce giant then returns product suggestions matching the query back to the Dash. The feature also sends out non-identifiable location data in the process.


Here’s what’s new in Linux 3.16 The founder of Linux, Linus Torvalds, Calendar of forthcoming events announced the release of the stable build of Name, Date and Venue Description Contact Details and Website Linux 3.16 recently. This version is known 4th Annual Datacenter The event aims to assist the community in Praveen Nair; Email: Praveen.nair@ as ‘Shuffling Zombie Juror’ for developers. Dynamics Converged. the data centre domain by exchanging ideas, datacenterdynamics.com; Ph: +91 There are a host of improvements and new September 18, 2014; accessing market knowledge and launching 9820003158; Website: Bengaluru new initiatives. http://www.datacenterdynamics.com/ features in this new stable build of Linux. CIOs and senior IT executives from across the These include new and improved drivers, Gartner Symposium IT Xpo, world will gather at this event, which offers Website: October 14-17, 2014; Grand and some complex integral improvements talks and workshops on new ideas and strate- http://www.gartner.com Hyatt, Goa like a unified control hierarchy. This new gies in the IT industry. Linux 3.16 stable version will be ideal for Open Source India, Asia’s premier open source conference that Omar Farooq; Email: omar.farooq@ the Ubuntu Linux Kernel 14.10. LTS version November 7-8, 2014; aims to nurture and promote the open source efy.in; Ph: 09958881862 users will get this update once the 14.10 NIMHANS Center, Bengaluru ecosystem across the sub-continent. http://www.osidays.com kernel is released. CeBit This is one of the world’s leading business IT Website: November 12-14, 2014; events, and offers a combination of services http://www.cebit-india.com/ Shutter 0.92 for Linux released BIEC, Bengaluru and benefits that will strengthen the Indian IT and fixes a number of bugs and ITES markets. 
Users have had some trouble using the 5th Annual Datacenter The event aims to assist the community in Praveen Nair; Email: Praveen.nair@ popular Shutter screenshot tool for Linux Dynamics Converged; the datacentre domain by exchanging ideas, datacenterdynamics.com; Ph: +91 December 9, 2014; Riyadh accessing market knowledge and launching 9820003158; Website: owing to the many irritating bugs and new initiatives. http://www.datacenterdynamics.com/ stability issues that came along. But they are Hostingconindia This event will be attended by Web hosting Website: in for a pleasant surprise as developers have December 12-13, 2014; companies, Web design companies, domain http://www.hostingcon.com/ now released a new bug fix for the tool that NCPA, Jamshedji Bhabha and hosting resellers, ISPs and SMBs from contact-us/ aims to address some of its more prominent Theatre, Mumbai across the world. issues. The new bug fix—Shutter 0.92—is now available for download for the Linux According to Sousa, the Shopping Lens implementation “…contravened a platform and a number of stability issues 1995 EU Directive on the protection of users’ personal data.” Sousa had provided have been dealt with for good. a number of instances to put forward his point. Initially, Sousa began by reaching out to Canonical for clarification but to no avail. He was finally forced to file a Open source community irked complaint with the Information Commissioner’s Office regarding his security by broken Linux kernel patches concerns. Finally, the ICO responded to Sousa’s need for clarification by clearly One of the many fine threads that bind the stating that the Shopping Lens feature complies with the DPA (Data Protection Act) open source community is avid participation very well and in no way breaches users’ privacy. and cooperation between developers across the globe, with the common goal of improving Oracle launches Solaris 11.2 with OpenStack support the Linux kernel. 
However, not everyone is Oracle Corp recently launched the latest actually trying to help out there, as recent version of its Solaris enterprise UNIX happenings suggest. Trolls exist even in the platform: Solaris 11.2. Notably, this new Linux community, and one that has managed version was in beta since April. The to make a big impression is Nick Krause. latest release comes with several key Krause’s recent antics have led to significant enhancements—the support for OpenStack bouts of frustration among Linux kernel as well as software-defined networking maintainers. Krause continuously tries to get (SDN). Additionally, there are various broken patches past the maintainers—only security, performance and compliance his goals are not very clear at the moment. enhancements introduced in Oracle’s Many developers believe that Krause aims to new release. Solaris 11.2 comes with OpenStack integration, which is perhaps its damage the Linux kernel. While that might most crucial enhancement. The latest version runs the most recent version of the be a distant dream for him (at least for now), popular toolbox for building clouds: OpenStack Havana. Meanwhile, the inclusion he has managed to irk quite a lot of people, of software-defined networking (SDN) support is seen as Oracle’s ongoing effort to slowing down the whole development process transform its Exalogic Elastic Cloud into one-stop data centres. Until now, Exalogic because of the need to keep fixing broken boxes were being increasingly used in the form of massive servers or for transaction patches introduced by him. processing. They were therefore not fulfilling their real purpose, which is to work



as cloud-hosting systems. However, with SDN support added, Oracle is aiming to change all this. Oracle plans to take on network equipment makers like Cisco, Hewlett-Packard and Brocade directly with the introduction of Solaris 11.2. Enterprises using Solaris can now simply purchase a handful of Solaris boxes and run their mission-critical clouds. In addition, they can also use bits of OpenStack without acquiring additional hardware.

Canonical launches Ubuntu 12.04.5 LTS
Marking its fifth point release, Canonical has announced that Ubuntu 12.04.5 LTS is available for download and installation. Ubuntu 12.04 LTS was first released back in April 2012, and Canonical will continue supporting the LTS until 2017 with regular updates from time to time. Also, this is the first major release for Canonical since the debut of Ubuntu 14.04 LTS earlier this year. The most notable improvement in the new release is the inclusion of an updated kernel (3.13) and X.org stack; both of these have been carried over from Ubuntu 14.04 LTS. The new release is out now for desktop, server, cloud and core products, as well as other flavours of Ubuntu with long-term support. In addition, the new release also comes with 'security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 12.04 LTS.' Meanwhile, Kubuntu 12.04.5 LTS, Edubuntu 12.04.5 LTS and Ubuntu Studio 12.04.5 LTS are also available for download and installation.

Android-x86 4.4 R1 Linux distro available for download and testing
The team behind Android-x86 recently launched version 4.4 R1 of the port of the Android OS designed specifically for the x86 platform. Android-x86 4.4 KitKat is now available for download and testing on your PC. Android is actually based on a modified Linux kernel, with many believing it to be a standalone Linux distribution in its own right. With that said, developers have managed to tweak Android to port it to the PC for x86 platforms; that's what Android-x86 is really all about.

Linux Mint Debian edition to switch from snapshot cycle to Debian stable package base
The team behind Linux Mint has decided to let go of the current snapshot cycle in the Debian edition and instead switch over to a Debian stable package base. The current Linux Mint editions are based on Ubuntu, and the team is most likely to stick to that for at least a couple of years. The team recently launched the latest iteration of Linux Mint, a.k.a. 'Qiana'. Both the Cinnamon and MATE versions are now available for download, with the KDE and Xfce versions expected to come out soon. Meanwhile, it has been announced that the next three Linux Mint releases would also, in all probability, be based on Ubuntu 14.04 LTS.

Storm Energy's SunSniffer charmed by the Raspberry Pi!
The humble Raspberry Pi single board computer is indeed going places, receiving critical acclaim for, well, being downright awesome. The latest to be smitten by it is the German company Storm Energy, which builds products like SunSniffer, a solar plant monitoring system. The SunSniffer system is designed to monitor photovoltaic (PV) solar power installations of varied sizes. The company has now upgraded the system to a Linux-based platform running on a Raspberry Pi. In addition to this, the latest SunSniffer version also comes with a custom expansion board and a customised Linux OS. The SunSniffer is IP65-rated, and the new Connection Box's custom Raspberry Pi expansion board comes with five RS-485 ports and eight analogue/digital I/O interfaces to help simultaneously monitor a wide variety of solar inverters (Refusol, Huawei and Kostal, among others). In short, the new system can remotely control solar inverters via a radio ripple control receiver, as against earlier versions where users could only monitor their data. The Raspberry Pi-based SunSniffer also offers SSL encryption and optional integrated anti-theft protection.

Italian city of Turin switching to open source technology
In a recent development, the Italian city of Turin is considering ditching all Microsoft products in favour of open source alternatives. The move is directly aimed at cutting government costs, while not compromising on functionality. If Turin does get rid of all proprietary software, it will go on to become one of the first Italian 'open source cities' and save itself at least a whopping six million euros. A report suggests that as many as 8,300 computers of the local administration in Turin will soon have Ubuntu under the hood and will be shipped with the Mozilla Firefox


Web browser and OpenOffice—the two joys of the open source world. The local government has argued that a large amount of money is spent on buying licences for proprietary software, wasting a lot of the local tax payers' money. Therefore, a decision to drop Microsoft in favour of cost-effective open source alternatives seems to be a viable option.

Khronos releases OpenGL NG
The Khronos Group recently announced the release of the latest iteration of OpenGL (the oldest high-level 3D graphics API still in popular use). Although OpenGL 4.5 is a noteworthy release in its own right, it is the second major release in the Group's next generation OpenGL initiative that is garnering widespread appreciation. While OpenGL 4.5 is what some might call a fairly standard annual OpenGL update, OpenGL NG is a complete rebuild of the OpenGL API, designed with the idea of building an entirely new version of OpenGL. This new version will have significantly reduced overhead owing to the removal of a lot of abstraction. Also, it will do away with the major inefficiencies of older versions when working at a low level with the bare metal GPU hardware. Being a very high-level API, earlier versions of OpenGL made it hard to efficiently run code on the GPU directly. While this didn't matter so much earlier, things have now changed. Fuelled by more mature GPUs, developers today tend to ask for graphics APIs that allow them to get much closer to the bare metal. The next generation OpenGL initiative is directed at developers who are looking to improve performance and reduce overhead.

LibreOffice coming to Android
LibreOffice needs no introduction. The Document Foundation's popular open source office suite is widely used by millions of people across the globe. Therefore, news that the suite could soon be launched on Android is something to watch out for. You heard that right! A new report by Tech Republic suggests that the Document Foundation is currently on a rigorous workout to make this happen. However, as things stand, there is still some time before that happens for real. Even as the Document Foundation came out with the first Release Candidate (RC) version of the upcoming LibreOffice 4.2.5 recently (it has been quite consistent in updating its stable version on a timely basis), work is on to make LibreOffice available for Google's much loved Android platform as well, the report says. The buzz is that developers are currently talking about (and working on) getting the file size right, that is, something well below the Google limit. Until they are able to do that, LibreOffice for Android is a distant dream, sadly. However, as and when this happens, LibreOffice would be in direct competition with Google Docs. Since there is a genuine need for Open Document Format (ODF) support in Android, the release might just be what the doctor ordered for many users. This is more of a rumour at the moment, and things will get clearer in time. There is no official word from either Google or the Document Foundation about this, but we will keep you posted on developments. The recent release—the LibreOffice 4.2.5 RC1—meanwhile tries to curb many key bugs that plagued the last 4.2.4 final release. This, in turn, has improved its usability and stability to a significant extent.

Dropbox's updated Android app offers improved features
A major update to its official Android app has been announced by Dropbox, and it is now available for download. This new update carries version number 2.4.3 and comes with a lot of improved features. As the Google Play listing suggests, this new Dropbox version supports in-app previews of Word, PowerPoint and PDF files. A better search experience is also on offer in this new version, which tracks recent queries and displays suggestions. One can also search in specific folders from now on.

RHEL 6.6 beta is released; draws major inspiration from RHEL 7
Just so RHEL 6.x users (who wish to continue with this branch of the distribution for a bit longer) don't feel left out, Red Hat has launched a beta release of its Red Hat Enterprise Linux 6.6 (RHEL 6.6) platform. Taking much of its inspiration from the recently released RHEL 7, the move is directed towards RHEL 6.x users so that they benefit from new platform features. At the same time, it comes with some really cool features that are quite independent of RHEL 7 and which make the 6.6 beta stand out on its own merits. Red Hat offers Application Binary Interface (ABI) compatibility for RHEL for a period of ten years, so technically speaking, it cannot drastically change major elements of an in-production release. Quite simply put, it can't and won't change an in-production release in a way that could alter stability or existing compatibility. This would eventually mean that the new release on offer cannot go much against the tide with respect to RHEL 6. Although the feature list for RHEL 6.6 beta ties in closely with the feature list of the major release (6.0), it doesn't mean RHEL 6.6 beta is simply old wine served in a new bottle. It does manage to introduce some key improvements for RHEL 6.x users. To begin with, RHEL 6.6 beta includes some features that were first introduced with RHEL 7, the most notable being Performance Co-Pilot (PCP). The new beta release will also offer RHEL 6.x users more integrated Remote Direct Memory Access (RDMA) capabilities.

Buyers' Guide

Motherboards: The Lifeline of Your Desktop
If you are a gamer, or like to customise your PC and build it from scratch, the motherboard is what you need to link all the key components together. Let's find out how to select the best desktop motherboard.

The central processing unit (CPU) can be considered to be the brain of a system or a PC, in layman's language, but it still needs a 'nervous system' to connect it with all the other components in your PC. A motherboard plays this role, as all the components are attached to it and to each other with the help of this board. It can be defined as a PCB (printed circuit board) that has the capability of expanding. As the name suggests, a motherboard is believed to be the 'mother' of all the components attached to it, including network cards, sound cards, hard drives, TV tuner cards, slots, etc. It holds the most significant sub-systems—the processor along with other important components. A motherboard is found in all electronic devices like TVs, washing machines and other embedded systems. Since it provides the electrical connections through which other components are connected and linked with each other, it needs the most attention. Unlike a backplane, it hosts other devices and subsystems and also contains the central processing unit.

There are quite a lot of companies that deal in motherboards, and Simmtronics is one among the leading players. According to Dr Inderjeet Sabbrawal, chairman, Simmtronics, "Simmtronics has been one of the exclusive manufacturers of motherboards in the hardware industry over the last 20 years. We strongly believe in creativity, innovation and R&D. Currently, we are fulfilling our commitment to provide the latest mainstream motherboards. At Simmtronics, the quality of the motherboards is strictly controlled. At present, the market is not growing.… India still has a varied market for older generation models as well as the latest models of motherboards."

Factors to consider while buying a motherboard
In a desktop, several essential units and components are attached directly to the motherboard, such as the microprocessor, main memory, etc. Other components, such as external storage, controllers for sound and video display, and various peripheral devices, are attached to it through slots, plug-in cards or cables. There are a number of factors to keep in mind while buying a motherboard, and these depend on your specific requirements. Linux is slowly taking over the PC world and, hence, people now look for Linux-supported motherboards; as a result, almost every motherboard now supports Linux. The main factors to keep in mind when buying a Linux-supported motherboard are discussed below.

CPU socket
The central processing unit is the key component of a motherboard, and the board's performance is primarily determined by the kind of processor it is designed to hold. The CPU socket can be defined as an electrical component that attaches to the motherboard and is designed to house a microprocessor. So, when you're buying a motherboard, you should look for a CPU socket that is compatible with the CPU you have planned to use. Most of the time, motherboards use one of the following five sockets: LGA1155, LGA2011, AM3, AM3+ and FM1. Some of the sockets are backward compatible and some of the chips are interchangeable. Once you opt for a motherboard, you will be limited to using processors that fit its socket.

Form factor
A motherboard's capabilities are broadly determined by its shape, its size and how much it can be expanded—these aspects are known as the form factor. Although there is no fixed design or form for motherboards, and they are available in many variations, two form factors have always been the favourites: ATX and microATX. An ATX motherboard measures around 30.5cm x 23cm (12 inches x 9 inches) and offers the highest number of expansion slots, RAM bays and data connectors. MicroATX motherboards measure 24.38cm x 24.38cm (9.6 inches x 9.6 inches) and have fewer expansion slots, RAM bays and other components. The form factor of a motherboard can be decided according to the purpose the motherboard is expected to serve.

RAM bays
Random access memory (RAM) is considered the most important workspace on a motherboard, where data is held for processing after being read from the hard disk drive or solid state drive. The efficiency of your PC directly depends on the speed and size of your RAM: the more RAM you have, the more efficient your computing will be. But it's no use having RAM that is faster than your motherboard can support, as the extra potential will simply be wasted. Neither should the RAM be slower than what the motherboard expects, as then the PC will not work well due to the bottlenecks caused by mismatched capabilities. Choosing a motherboard which supports just the right RAM is vital.

Apart from these factors, there are many others to consider before selecting a motherboard. These include the audio system, display, LAN support, expansion capabilities and peripheral interfaces.

18 | September 2014 | OPEN SOURCE For You | www.OpenSourceForU.com Buyers’ Guide A few desktop motherboards with the latest chipsets Intel: DZ87KLT-75K motherboard ƒƒ Supported CPU: Fourth generation Intel Core i7 processor, Intel Core i5 processor and other Intel processors in the LGA1150 package ƒƒ Memory supported: 32GB of system memory, dual channel DDR3 2400+ MHz, DDR3 1600/1333 MHz ƒƒ Form factor: ATX form factor

Asus: Z87-K motherboard
- Supported CPU: Fourth generation Intel Core i7 processor, Intel Core i5 processor and other Intel processors
- Memory supported: Dual channel memory architecture; supports Intel XMP
- Form factor: ATX

Simmtronics: SIMM-INT H61 (V3) motherboard
- CPU supported: Intel 2nd and 3rd generation Core i7/i5/i3/Pentium/Celeron
- Main memory supported: Dual channel DDR3 1333/1066
- BIOS: 1 x 32MB Flash ROM
- Connectors: 1 x 4-pin ATX 12V power connector
- Chipset: Intel H61 (B3 version)

Gigabyte Technology: GA-Z87X-OC motherboard
- CPU supported: Fourth generation Intel Core i7 processor, Intel Core i5 processor and other Intel processors
- Memory supported: Supports DDR3 3000
- Form factor: MicroATX

By: Manvi Saxena
The author is a part of the editorial team at EFY.

CODE SPORT
Sandya Mannarswamy

In this month's column, we continue our discussion on natural language processing.

For the past few months, we have been discussing information retrieval and natural language processing, as well as the algorithms associated with them. This month, we continue our discussion on natural language processing (NLP) and look at how NLP can be applied in the field of software engineering. Given one or many text documents, NLP techniques can be applied to extract information from them. The software engineering (SE) lifecycle gives rise to a number of textual documents, to which NLP can be applied.

So what are the software artifacts that arise in SE? During the requirements phase, the requirements document is an important textual artifact. This specifies the expected behaviour of the software product being designed, in terms of its functionality, user interface, performance, etc. It is important that the requirements being specified are clear and unambiguous, since during product delivery, customers would like to confirm that the delivered product meets all their specified requirements. Having vague, ambiguous requirements can hamper requirement verification, so text analysis techniques can be applied to the requirements document to determine whether there are any ambiguous or vague statements. For instance, consider a statement like, “Servicing of user requests should be fast, and request waiting time should be low.” This statement is ambiguous, since it is not clear what exactly the customer's expectations of ‘fast service’ or ‘low waiting time’ may be. NLP tools can detect such ambiguous requirements.

It is also important that there are no logical inconsistencies in the requirements. For instance, a requirement that “Login names should allow a maximum of 16 characters,” and another that “The login database will have a field for login names which is 8 characters wide,” conflict with each other. While the user interface allows up to a maximum of 16 characters, the backend login database will support fewer characters, which is inconsistent with the earlier requirement. Though currently such inconsistent requirements are flagged by human inspection, it is possible to design text analysis tools to detect them.

The software design phase also produces a number of SE artifacts, such as the design document and design models in the form of UML documents, which also can be mined for information. Design documents can be analysed to generate automatic test cases in order to test the final product. During the development and maintenance phases, a number of textual artifacts are generated. The software itself can be considered a textual document. Apart from source code, source code control system logs such as SVN/Git logs, Bugzilla defect reports, developers' mailing lists, field reports, crash reports, etc, are the various SE artifacts to which text mining can be applied.

Various types of text analysis techniques can be applied to SE artifacts. One popular method is duplicate or similar document detection. This technique can be applied to find duplicate bug reports in bug tracking systems. A variation of this technique can be applied to find code clones and copy-and-paste snippets.

Automatic summarisation is another popular technique in NLP. These techniques try to generate a summary of a given document by looking for the key points contained in it. There are two approaches to automatic summarisation. One is known as ‘extractive summarisation’, in which key phrases and sentences in the given document are extracted and put back together to provide a summary of the document. The other is the ‘abstractive summarisation’ technique, which builds an internal semantic representation of the given document, from which key concepts are extracted and a summary is generated using natural language understanding. The abstractive summarisation technique is close to how humans would summarise a given document. Typically, we would proceed by building a knowledge representation of the document in our minds and then using our own words to provide a summary of the key concepts. Abstractive summarisation is obviously more complex than extractive summarisation, but yields better summaries.

Coming to SE artifacts, automatic summarisation techniques can be applied to generate summaries of large bug reports. They can also be applied to generate high-level comments for the methods contained in source code. In this case, each method can be treated as an independent document, and the high-level comment associated with that method or function is nothing but a short summary of the method.

Another popular text analysis technique involves the use of language models, which enable predicting what the next word in a particular sentence would be. This technique is typically used on optical character recognition (OCR) generated documents where, due to OCR errors, the next word is not visible or gets lost, and hence the tool needs to make a best-case estimate of the word that may appear there. A similar need also arises in the case of speech recognition systems. In case of poor speech quality, when a sentence is being transcribed by the speech recognition tool, a particular word may not be clear or could get lost in transmission. In such a case, the tool needs to predict what the missing word is and add it automatically. Language modelling techniques can also be applied in intelligent development environments (IDEs) to provide ‘auto-completion’ suggestions to developers. Note that in this case, the source code itself is being treated as text and is analysed.

Classifying a set of documents into specific categories is another well-known text analysis technique. Consider a large number of news articles that need to be categorised based on topics or their genre, such as politics, business, sports, etc. A number of well-known text analysis techniques are available for document classification. Document classification techniques can also be applied to defect reports in SE to classify the category to which a defect belongs. For instance, security related bug reports need to be prioritised. While people currently inspect bug reports, or search for specific key words in a bug category field in Bugzilla reports in order to classify bug reports, more robust and automated techniques are needed to classify defect reports in large scale open source projects. Text analysis techniques for document classification can be employed in such cases.

Another important need in the SE lifecycle is to trace source code to its origin in the requirements document. If a feature ‘X’ is present in the source code, what is the requirement ‘Y’ in the requirements document which necessitated the development of this feature? This is known as traceability of source code to requirements. As source code evolves over time, maintaining traceability links automatically through tools is essential to scale out large software projects. Text analysis techniques can be employed to connect a particular requirement from the requirements document to a feature in the source code and hence automatically generate the traceability links.

We have now covered automatic summarisation techniques for generating summaries of bug reports and generating header level comments for methods. Another possible use for such techniques in SE artifacts is to enable the automatic generation of the user documentation associated with a software project. A number of text mining techniques have been employed to mine ‘stack overflow’ and mailing lists to generate automatic user documentation or FAQ documents for different software projects.

Regarding the identification of inconsistencies in the software requirements document, inconsistency detection techniques can be applied to source code comments also. It is a general expectation that source code comments express the programmer's intent. Hence, the code written by the developer and the comment associated with that piece of code should be consistent with each other. Consider the simple code sample shown below:

/* linux/drivers/scsi/in2000.c: */
/* caller must hold instance lock */
static int reset_hardware(…)
{
	….
}

static int in2000_bus_reset(…)
{
	…..
	reset_hardware();
	…
}

In the above code snippet, the developer has expressed, as a code comment, the intention that ‘instance_lock’ must be held before the function ‘reset_hardware’ is called. However, in the actual source code, the lock is not acquired before the call to ‘reset_hardware’ is made. This is a logical inconsistency, which can arise either due to: (a) comments being outdated with respect to the source code; or (b) incorrect code. Hence, flagging such errors is useful to the developer, who can fix either the comment or the code, depending on which is incorrect.

My ‘must-read book’ for this month
This month's book suggestion comes from one of our readers, Sharada, and her recommendation is very appropriate to the current column. She recommends an excellent resource for natural language processing, a book called ‘Speech and Language Processing: An Introduction to Natural Language Processing’ by Jurafsky and Martin. The book describes different algorithms for NLP techniques and can be used as an introduction to the subject. Thank you, Sharada, for your valuable recommendation.

If you have a favourite programming book or article that you think is a must-read for every programmer, please do send me a note with the book's name, and a short write-up on why you think it is useful, so I can mention it in the column. This would help many readers who want to improve their software skills. If you have any favourite programming questions/software topics that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_yahoo_DOT_com. Till we meet again next month, happy programming!

By: Sandya Mannarswamy
The author is an expert in systems software and is currently working with Hewlett Packard India Ltd. Her interests include compilers, multi-core and storage systems. If you are preparing for systems interviews, you may find it useful to visit Sandya's LinkedIn group ‘Computer Science Interview Training India’ at http://www.linkedin.com/groups?home=&gid=2339182
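As a small illustration of the duplicate bug-report detection the column describes, here is a minimal sketch using Python's standard-library difflib. The sample reports and the 0.6 similarity threshold are illustrative assumptions, not from the column; production systems typically use TF-IDF or other vector-space similarity instead.

```python
# Sketch: near-duplicate bug-report detection via string similarity.
# The reports and the 0.6 threshold below are illustrative only.
from difflib import SequenceMatcher

def similarity(a, b):
    """Return a similarity ratio in [0, 1] between two report summaries."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_duplicates(reports, threshold=0.6):
    """Return pairs of report indices whose summaries look like duplicates."""
    pairs = []
    for i in range(len(reports)):
        for j in range(i + 1, len(reports)):
            if similarity(reports[i], reports[j]) >= threshold:
                pairs.append((i, j))
    return pairs

reports = [
    "Crash when opening large PDF file",
    "Application crashes opening a large PDF file",
    "Login button unresponsive on settings page",
]
print(find_duplicates(reports))   # the two PDF-crash reports pair up
```

The pairwise loop is quadratic in the number of reports; a real bug-tracking deployment would index summaries first, but the core idea of thresholded similarity is the same.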

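The extractive summarisation approach discussed in the column above can likewise be sketched in a few lines. The frequency-based sentence scorer, the `keep` parameter and the toy sentences are purely illustrative assumptions, not the column's algorithm.

```python
# Sketch: a tiny frequency-based extractive summariser -- score each
# sentence by the average corpus frequency of its words and keep the top ones.
from collections import Counter

def summarise(sentences, keep=1):
    # Corpus-wide word frequencies (lowercased, whitespace tokenisation).
    words = Counter(w.lower() for s in sentences for w in s.split())

    def score(sentence):
        toks = sentence.split()
        return sum(words[w.lower()] for w in toks) / len(toks)

    # Extract the highest-scoring sentences as the "summary".
    return sorted(sentences, key=score, reverse=True)[:keep]

doc = [
    "The parser crashes on empty input",
    "Crashes of the parser happen on empty input files",
    "Unrelated note about documentation",
]
print(summarise(doc))
```

Real extractive summarisers add stop-word removal, TF-IDF weighting and redundancy control, but the extract-and-rank structure is the one described in the column.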

Exploring Big Data on a Desktop
Anil Seth

Getting Started with Hadoop
Hadoop is a large-scale, open source storage and processing framework for data sets. In this article, the author sets up Hadoop on a single node, takes the reader through testing it, and later tests it on multiple nodes.

Fedora 20 makes it easy to install Hadoop. Version 2.2 is packaged and available in the standard repositories. It will place the configuration files in /etc/hadoop, with reasonable defaults so that you can get started easily. As you may expect, managing the various Hadoop services is integrated with systemd.

Setting up a single node
First, start an instance, with the name h-mstr, in OpenStack using a Fedora Cloud image (http://fedoraproject.org/get-fedora#clouds). You may get an IP like 192.168.32.2. You will need to choose at least the m1.small flavour, i.e., 2GB RAM and 20GB disk. Add an entry in /etc/hosts for convenience:

192.168.32.2 h-mstr

Now, install and test the Hadoop packages on the virtual machine by following the article at http://fedoraproject.org/wiki/Changes/Hadoop:

$ ssh fedora@h-mstr
$ sudo yum install hadoop-common hadoop-common-native hadoop-hdfs \
  hadoop-mapreduce hadoop-mapreduce-examples hadoop-yarn

It will download over 200MB of packages and take about 500MB of disk space.
Create an entry in the /etc/hosts file for h-mstr using the name in /etc/hostname, e.g.:

192.168.32.2 h-mstr h-mstr.novalocal

Now, you can test the installation. First, run a script to create the needed hdfs directories:

$ sudo hdfs-create-dirs

Then, start the Hadoop services using systemctl:

$ sudo systemctl start hadoop-namenode hadoop-datanode \
  hadoop-nodemanager hadoop-resourcemanager

You can find out the hdfs directories created as follows. The command may look complex, but you are running the ‘hadoop fs’ command in a shell as Hadoop's internal user, hdfs:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -ls /"
Found 3 items
drwxrwxrwt - hdfs supergroup 0 2014-07-15 13:21 /tmp
drwxr-xr-x - hdfs supergroup 0 2014-07-15 14:18 /user
drwxr-xr-x - hdfs supergroup 0 2014-07-15 13:22 /var

Testing the single node
Create a directory with the right permissions for the user, fedora, to be able to run the test scripts:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -mkdir /user/fedora"
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -chown fedora /user/fedora"

Disable the firewall and iptables, and run a mapreduce example. You can monitor the progress at http://h-mstr:8088/. Figure 1 shows an example running on three nodes.
The first test is to calculate pi using 10 maps and 1,000,000 samples. It took about 90 seconds to estimate the value of pi to be 3.1415844.

$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar pi 10 1000000

In the next test, you create 10 million records of 100 bytes each, that is, 1GB of data (~1 min). Then, sort it (~8 min) and, finally, verify it (~1 min). You may want to clean up the directories created in the process:

$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar teragen 10000000 gendata
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar terasort gendata sortdata
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar teravalidate sortdata reportdata
$ hadoop fs -rm -r gendata sortdata reportdata

Stop the Hadoop services before creating and working with multiple data nodes, and clean up the data directories:

$ sudo systemctl stop hadoop-namenode hadoop-datanode \
  hadoop-nodemanager hadoop-resourcemanager
$ sudo rm -rf /var/cache/hadoop-hdfs/hdfs/dfs/*

Figure 1: OpenStack-Hadoop

Testing with multiple nodes
The following steps simplify creation of multiple instances:
- Generate ssh keys for password-less log in from any node to any other node.
- In /etc/ssh/ssh_config, add the following to ensure that ssh does not prompt for authenticating a new host the first time you try to log in:

StrictHostKeyChecking no

- In /etc/hosts, add entries for slave nodes yet to be created:

192.168.32.2 h-mstr h-mstr.novalocal
192.168.32.3 h-slv1 h-slv1.novalocal
192.168.32.4 h-slv2 h-slv2.novalocal

Now, modify the configuration files located in /etc/hadoop.
- Edit core-site.xml and modify the value of fs.default.name by replacing localhost with h-mstr:

<property>
  <name>fs.default.name</name>
  <value>hdfs://h-mstr:8020</value>
</property>

- Edit mapred-site.xml and modify the value of mapred.job.tracker by replacing localhost with h-mstr:

<property>
  <name>mapred.job.tracker</name>
  <value>h-mstr:8021</value>
</property>

- Delete the following lines from hdfs-site.xml:

<property>
  <name>dfs.safemode.extension</name>
  <value>0</value>
</property>
<property>
  <name>dfs.safemode.min.datanodes</name>
  <value>1</value>
</property>

- Edit or create, if needed, slaves with the host names of the data nodes:

[fedora@h-mstr hadoop]$ cat slaves
h-slv1
h-slv2

- Add the following lines to yarn-site.xml so that multiple node managers can be run:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>h-mstr</value>
</property>

Now, create a snapshot, Hadoop-Base. Its creation will take time. It may not give you an indication of an error if it runs out of disk space!

Launch instances h-slv1 and h-slv2 serially using Hadoop-Base as the instance boot source. Launching of the first instance from a snapshot is pretty slow. In case the IP addresses are not the same as your guess in /etc/hosts, edit /etc/hosts on each of the three nodes to the correct value. For your convenience, you may want to make entries for h-slv1 and h-slv2 in the desktop's /etc/hosts file as well.
The following commands should be run from Fedora on h-mstr. Reformat the namenode to make sure that the single node tests are not causing any unexpected issues:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop namenode -format"

Start the hadoop services on h-mstr:

$ sudo systemctl start hadoop-namenode hadoop-datanode hadoop-nodemanager hadoop-resourcemanager

Start the datanode and yarn services on the slave nodes:

$ ssh -t fedora@h-slv1 sudo systemctl start hadoop-datanode hadoop-nodemanager
$ ssh -t fedora@h-slv2 sudo systemctl start hadoop-datanode hadoop-nodemanager

Create the hdfs directories and a directory for user fedora as on a single node:

$ sudo hdfs-create-dirs
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -mkdir /user/fedora"
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -chown fedora /user/fedora"

You can run the same tests again. Although you are using three nodes, the improvement in performance compared to the single node is not expected to be noticeable, as the nodes are running on a single desktop.
The pi example took about one minute on the three nodes, compared to the 90 seconds taken earlier. Terasort took 7 minutes instead of 8.

Note: I used an AMD Phenom II X4 965 with 16GB RAM to arrive at the timings. All virtual machines and their data were on a single physical disk.

Both OpenStack and Mapreduce are a collection of interrelated services working together. Diagnosing problems, especially in the beginning, is tough, as each service has its own log files. It takes a while to get used to realising where to look. However, once these are working, it is incredible how easy they make distributed processing!

By: Dr Anil Seth
The author has earned the right to do what interests him. You can find him online at http://sethanil.com, http://sethanil.blogspot.com, and reach him via email at [email protected]
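The pi, teragen and terasort examples above all exercise Hadoop's map-shuffle-reduce model. As a framework-free illustration of that flow (the names map_fn, shuffle and reduce_fn are mine, not Hadoop APIs), word counting can be modelled in a few lines of plain Python:

```python
# Sketch: the map-shuffle-reduce flow that the Hadoop examples exercise,
# modelled in plain Python. Illustrative only -- real Hadoop distributes
# each phase across the data nodes configured above.
from collections import defaultdict

def map_fn(line):
    """Map phase: emit (word, 1) pairs for one input line."""
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle phase: group values by key, as Hadoop does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_fn(key, values):
    """Reduce phase: sum the counts for one word."""
    return key, sum(values)

lines = ["hadoop makes distributed processing easy",
         "distributed processing with hadoop"]
pairs = [pair for line in lines for pair in map_fn(line)]
counts = dict(reduce_fn(k, v) for k, v in shuffle(pairs).items())
print(counts["hadoop"], counts["distributed"])   # -> 2 2
```

The same three-phase shape is what the hadoop-mapreduce-examples jar runs across the h-mstr, h-slv1 and h-slv2 nodes, with HDFS supplying the input splits.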

OSFY Magazine Attractions During 2014-15

Month | Theme | Featured List | Buyers' Guide

March 2014 Network monitoring Security ------

April 2014 Android Special Anti Virus Wifi Hotspot Devices

May 2014 Backup and Data Storage Certification External Storage

June 2014 Open Source on Windows Mobile Apps UTMs for SMEs

July 2014 Firewall and Network security Web Hosting Solutions Providers MFD Printers for SMEs

August 2014 Kernel Development Big Data solution Providers SSDs for Servers

September 2014 Open Source for Start-ups Cloud Android Devices

October 2014 Mobile App Development Training on Programming Languages Projectors

November 2014 Cloud Special Virtualisation Solutions Providers Network Switches and Routers

December 2014 Web Development Leading Ecommerce Sites AV Conferencing

January 2015 Programming Languages IT Consultancy Service Providers Laser Printers for SMEs

February 2015 Top 10 of Everything on Open Source Storage Solutions Providers Wireless Routers

Developers Insight

Improve Python Code by Using a Profiler

The line_profiler gives a line-by-line analysis of the Python code and can thus identify bottlenecks that slow down the execution of a program. By making modifications to the code based on the results of this profiler, developers can improve the code and refine the program.

Have you ever wondered which module is slowing down your Python program and how to optimise it? Well, there are ‘profilers’ that can come to your rescue. Profiling, in simple terms, is the analysis of a program to measure the memory used by a certain module, the frequency and duration of function calls, and the time complexity of the same. Such profiling tools are termed profilers. This article will discuss the line_profiler for Python.

Installation
Installing pre-requisites: Before installing line_profiler, make sure you install these pre-requisites:
a) For Ubuntu/Debian-based systems (recent versions):

sudo apt-get install mercurial python python3 python-pip python3-pip Cython Cython3

b) For Fedora systems:

sudo yum install -y mercurial python python3 python-

Note: 1. I have used the ‘-y’ argument to automatically install the packages tracked by the yum installer.
2. Mac users can use Homebrew to install these packages.

Cython is a pre-requisite because the source releases require a C compiler. If the Cython package is not found or is too old in your current Linux distribution version, install it by running the following command in a terminal:

sudo pip install Cython

Note: Mac OS X users can install Cython using pip.



Cloning line_profiler: Let us begin by cloning the line_profiler source code from bitbucket. To do so, run the following command in a terminal:

hg clone https://bitbucket.org/robertkern/line_profiler

The above repository is the official line_profiler repository, with support for python 2.4 - 2.7.x. For python 3.x support, we will need to clone a fork of the official source code that provides python 3.x compatibility for line_profiler and kernprof:

hg clone https://bitbucket.org/kmike/line_profiler

Installing line_profiler: Navigate to the cloned repository by running the following command in a terminal:

cd line_profiler

To build and install line_profiler in your system, run the following command:
a) For the official source (supported by python 2.4 - 2.7.x):

sudo python setup.py install

b) For the forked source (supported by python 3.x):

sudo python3 setup.py install

Using line_profiler
Adding the profiler to your code: Since line_profiler has been designed to be used as a decorator, we need to decorate the specified function using the ‘@profile’ decorator. We can do so by adding an extra line before a function, as follows:

@profile
def foo(bar):
    .....

Running line_profiler: Once the ‘slow’ module is profiled, the next step is to run the line_profiler, which will give a line-by-line computation of the code within the profiled function.
Open a terminal, navigate to the folder where the ‘.py’ file is located and type the following command:

kernprof.py -l example.py; python3 -m line_profiler example.py.lprof

Note: I have combined both the commands in a single line, separated by a semicolon ‘;’, to immediately show the profiled results. You can run the two commands separately, or run kernprof.py with the ‘-v’ argument to view the formatted result in the terminal.

kernprof.py -l profiles the decorated function in example.py line by line; the -l argument stores the result in a binary file with a .lprof extension (here, example.py.lprof). We then run line_profiler on this binary file by using the ‘-m line_profiler’ argument. Here ‘-m’ is followed by the module name, i.e., line_profiler.

Case study: We will use the Gnome-Music source code for our case study. There is a module named _connect_view in the view.py file, which handles the different views (artists, albums, playlists, etc) within the music player. This module is reportedly running slow because a variable is initialised each time the view is changed.
By profiling the source code, we get the following result:

Wrote profile results to gnome-music.lprof
Timer unit: 1e-06 s

File: ./gnomemusic/view.py
Function: _connect_view at line 211
Total time: 0.000627 s

Line #  Hits  Time  Per Hit  % Time  Line Contents
==================================================
211                                  @profile
212                                  def _connect_view(self):
213     4     205   51.2     32.7        vadjustment = self.view.get_vadjustment()
214     4     98    24.5     15.6        self._adjustmentValueId = vadjustment.connect(
215     4     79    19.8     12.6            'value-changed',
216     4     245   61.2     39.1            self._on_scrolled_win_change)

Figure 1: line_profiler output

In the above code, line no 213, vadjustment = self.view.get_vadjustment(), is called too many times, which makes the process slower than expected. After caching (initialising) it in the init function, we get the following result, tested under the same conditions. You can see that there is a significant improvement in the results (Figure 2).

Wrote profile results to gnome-music.lprof
Timer unit: 1e-06 s

File: ./gnomemusic/view.py
Function: _connect_view at line 211
Total time: 0.000466 s

Line #  Hits  Time  Per Hit  % Time  Line Contents
==================================================
211                                  @profile
212                                  def _connect_view(self):
213     4     86    21.5     18.5        self._adjustmentValueId = vadjustment.connect(
214     4     161   40.2     34.5            'value-changed',
215     4     219   54.8     47.0            self._on_scrolled_win_change)

Figure 2: Optimised code line_profiler output

Note: If you make changes in the source code, you need to run kernprof and line_profiler again in order to profile the updated code and get the latest results.

Understanding the output
Here is an analysis of the output shown in the above snippet.
- Function: Displays the name of the function that is profiled and its line number.
- Line #: The line number of the code in the respective file.
- Hits: The number of times the code in the corresponding line was executed.
- Time: Total amount of time spent in executing the line, in ‘Timer unit’ (i.e., 1e-06s here). This may vary from system to system.
- Per hit: The average amount of time spent in executing the line once, in ‘Timer unit’.
- % time: The percentage of time spent on a line with respect to the total amount of recorded time spent in the function.
- Line contents: Displays the actual source code.

Advantages
line_profiler helps us to profile our code line by line, giving the number of hits, the time taken for each hit and the % time. This helps us to understand which part of our code is running slow. It also helps in testing large projects and measuring the time spent by modules in executing a particular function. Using this data, we can commit changes and improve our code to build faster and better programs.

References
[1] http://pythonhosted.org/line_profiler/
[2] http://jacksonisaac.wordpress.com/2013/09/08/using-line_profiler-with-python/
[3] https://pypi.python.org/pypi/line_profiler
[4] https://bitbucket.org/robertkern/line_profiler
[5] https://bitbucket.org/kmike/line_profiler

By: Jackson Isaac
The author is an active open source contributor to projects like gnome-music, Mozilla Firefox and Mozillians. Follow him on jacksonisaac.wordpress.com or email him at [email protected]
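To try the workflow above end to end, a minimal, hypothetical example.py could look like the following. The try/except shim is my own addition so that the file also runs standalone; when executed under kernprof -l, the @profile decorator is injected as a builtin and the shim is skipped.

```python
# example.py -- a toy script to profile with line_profiler.
# Under "kernprof.py -l example.py" the @profile decorator is provided
# by kernprof; the fallback below makes the script runnable on its own too.
try:
    profile
except NameError:
    def profile(func):          # no-op stand-in outside kernprof
        return func

@profile
def slow_sum(n):
    total = 0
    for i in range(n):
        total += i * i          # executed n times; dominates per-line timings
    return total

if __name__ == '__main__':
    print(slow_sum(1000))       # prints 332833500
```

Running kernprof.py -l example.py followed by python3 -m line_profiler example.py.lprof should then show the loop body accounting for nearly all of the hits and time, just as get_vadjustment() did in the case study.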

Understanding the Document Object Model (DOM) in Mozilla

This article is an introduction to the DOM programming interface and the DOM inspector, which is a tool that can be used to inspect and edit the live DOM of any Web document or XUL application.

The Document Object Model (DOM) is a programming interface for HTML and XML documents. It provides a structured representation of a document and defines the way that structure can be accessed from programs, so that they can change the document's structure, style and content. The DOM provides a representation of the document as a structured group of nodes and objects that have properties and methods. Essentially, it connects Web pages to scripts or programming languages.

A Web page is a document that can either be displayed in the browser window or as the HTML source that is in the same document. The DOM provides another way to represent, store and manipulate that same document. In simple terms, we can say that the DOM is a fully object-oriented representation of a Web page, which can be modified by any scripting language.

The W3C DOM standard forms the basis of the DOM implementation in most modern browsers. Many browsers offer extensions beyond the W3C standard.

All the properties, methods and events available for manipulating and creating Web pages are organised into objects. For example, there is the document object that represents the document itself, the table object that implements the special HTMLTableElement DOM interface to access HTML tables, and so forth.

Why is DOM important?
‘Dynamic HTML’ (DHTML) is a term used by some vendors to describe the combination of HTML, style sheets and scripts that allows documents to be animated. The W3C DOM working group is aiming to make sure interoperable and language-neutral solutions are agreed upon.

As Mozilla claims the title of ‘Web Application Platform’, support for the DOM is one of the most requested features; in fact, it is a necessity if Mozilla wants to be a viable alternative to the other browsers. The user interface of Mozilla (also Firefox and Thunderbird) is built using XUL and uses the DOM to manipulate its own user interface.

How do I access the DOM?
You don't have to do anything special to begin using the


Figure 1: DOM inspector
Figure 2: Inspecting content documents

DOM. Different browsers have different implementations of it, which exhibit varying degrees of conformity to the actual DOM standard, but every browser uses some DOM to make Web pages accessible to the script.

When you create a script, whether it's inline in a script element or included in the Web page by means of a script loading instruction, you can immediately begin using the API for the document or window elements. This is to manipulate the document itself or to get at the children of that document, which are the various elements in the Web page. Your DOM programming may be something as simple as the following, which displays an alert message by using the alert( ) function from a window object, or it may use more sophisticated DOM methods to actually create new content, as in the longer examples that follow.

Aside from the script element in which the JavaScript is defined, this JavaScript sets a function to run when the document is loaded. This function creates a new H1 element, adds text to that element, and then adds the H1 to the tree for this document.

DOM interfaces
These interfaces just give you an idea about the actual things that you can use to manipulate the DOM hierarchy. For example, the object representing the HTMLFormElement gets its name property from the HTMLFormElement interface but its className property from the HTMLElement interface. In both cases, the property you want is simply in the form object.

Interfaces and objects
Many objects borrow from several different interfaces. The table object, for example, implements a specialised HTML table element interface, which includes such methods as createCaption and insertRow. Since an HTML element is also, as far as the DOM is concerned, a node in the tree of nodes that makes up the object model for a Web page or an XML page, the table element also implements the more basic node interface, from which the element derives. When you get a reference to a table object, as in the following example, you routinely use all three of these interfaces interchangeably on the object, perhaps unknowingly:

var table = document.getElementById ("table");
var tableAttrs = table.attributes; // Node/Element interface
for (var i = 0; i < tableAttrs.length; i++) {
    // HTMLTableElement interface: border attribute
    if (tableAttrs[i].nodeName.toLowerCase() == "border")
        table.border = "1";
}

The document and window objects are the objects generally used most often in DOM programming. In simple terms, the window object represents something like the browser, and the document object is the root of the document itself. The element inherits from the generic node interface and, together, these two interfaces provide many of the methods and properties you use on individual elements. These elements may also have specific interfaces for dealing with the kind of data those elements hold, as in the table object example.
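The listing the article refers to, where a function builds an H1 on page load, did not survive extraction. A minimal reconstruction (a sketch based on the standard Mozilla example; the heading text here is illustrative) might look like this:

```html
<html>
  <head>
    <script>
      // Run this function once the document has finished loading.
      window.onload = function () {
        // Create an H1 element and a text node for it...
        var heading = document.createElement("h1");
        var headingText = document.createTextNode("Big Head!");
        // ...then attach both to this document's tree.
        heading.appendChild(headingText);
        document.body.appendChild(heading);
      };
    </script>
  </head>
  <body>
  </body>
</html>
```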


Figure 3: Inspecting Chrome documents

Figure 4: Inspecting arbitrary URLs

easily accessed from scripts.

The following are a few common APIs in XML and Web page scripting that show the use of DOM:
- document.getElementById (id)
- element.getElementsByTagName (name)
- document.createElement (name)
- parentNode.appendChild (node)
- element.innerHTML
- element.style.left
- element.setAttribute
- element.getAttribute
- element.addEventListener
- window.content
- window.onload
- window.dump
- window.scrollTo

An introduction to the DOM inspector
The DOM inspector is a Mozilla extension that you can access from the Tools -> Web Development menu in SeaMonkey, by selecting the DOM inspector menu item from the Tools menu in Firefox and Thunderbird, or by using Ctrl/Cmd+Shift+I in either application. The DOM inspector is a 'standalone' extension; it supports all toolkit applications, and it's possible to embed it in your own XULRunner app. The DOM inspector can serve as a sanity check to verify the state of the DOM, or it can be used to manipulate the DOM manually, if desired.

When you first start the DOM inspector, you are presented with a two-pane application window that looks a little like the main Mozilla browser. Like the browser, the DOM inspector includes an address bar and some of the same menus. In SeaMonkey, additional global menus are available.

Using the DOM inspector
Once you've opened the document for the page you are interested in, you'll see that it loads the DOM nodes viewer in the document pane and the DOM node viewer in the object pane. In the DOM nodes viewer, there should be a structured, hierarchical view of the DOM.

By clicking around in the document pane, you'll see that the viewers are linked; whenever you select a new node from the DOM nodes viewer, the DOM node viewer is automatically updated to reflect the information for that node.

Figure 5: Inspecting a Web page

Testing the DOM API
Here, you will be provided samples for every interface that you can use in Web development. In some cases, the samples are complete HTML pages, with the DOM access in the onclick event of the JavaScript.

To quote from the AngularJS documentation, "Angular services are substitutable objects that are wired together using dependency injection". A module is created with a list of its dependencies. As in:

var testapp = angular.module( 'testapp', [ ] );

testapp.controller ( 'testcont', function( $window ) {
    //body of controller
});

One term to be explained here is '$scope'. To quote from the developer guide: "Scope is an object that refers to the application model." With the help of scope, the model variables can be initialised and accessed. In the above example, when the button is clicked the disp( ) comes into play, i.e., the scope is assigned with a behaviour. Inside disp( ), the model variable name is accessed using scope.

To define a custom service, write the following:

testapp.factory ('serviceName', function( ) {
    var obj;
    // the returned object will be injected into the component that has called the service
    return obj;
});

Views and routes: In any usual application, we navigate to different pages. In an SPA, instead of pages, we have views. So, you can use views to load different parts of your application. Switching to different views is done through routing. For routing, we make use of the ngRoute and ngView directives:

var miniApp = angular.module( 'miniApp', ['ngRoute'] );

miniApp.config(function( $routeProvider ){
    $routeProvider.when( '/home', { templateUrl: 'partials/home.html' } );
    $routeProvider.when( '/animal', { templateUrl: 'partials/animals.html' } );
    $routeProvider.otherwise( { redirectTo: '/home' } );
});

ngRoute enables routing in applications and $routeProvider is used to configure the routes; here, any unmatched URL redirects to /home.

Testing
Testing is done to correct your code on-the-go and avoid ending up with a pile of errors on completing your app's development. Testing can get complicated when your app grows in size and APIs start to get tangled up, but Angular has got its own defined testing schemes. Usually, two kinds of testing are employed: unit and end-to-end (E2E) testing. Unit testing is used to test individual API components, while in E2E testing, the working of a set of components is tested.

The usual components of unit testing are describe( ), beforeEach( ) and it( ). You have to load the angular module before testing, and beforeEach( ) does this. Also, this function

makes use of the injector method to inject dependencies. The test to be conducted is given in it( ). The test suite is describe( ), and both beforeEach( ) and it( ) come inside it. E2E testing makes use of all the above functions. One other function used is expect( ). This creates 'expectations', which verify whether a particular piece of the application's state (the value of a variable or a URL) is the same as the expected value.

Recommended frameworks for unit testing are Jasmine and Karma, and for E2E testing, Protractor is the one to go with.

Competing technologies

Features          Ember.js   AngularJS   Backbone.js
Routing           Yes        Yes         Yes
Views             Yes        Yes         Yes
Two-way binding   Yes        Yes         No

The chart above covers only the core features of the three frameworks. Angular is the oldest of the lot and has the biggest community.

Who uses AngularJS?
Some of the following corporate giants use AngularJS:
- Google
- Sony (YouTube on PS3)
- Virgin America
- Nike
- msnbc (msnbc.com)

You can find a lot of interesting and innovative apps on the 'Built with AngularJS' page.

References
[1] http://singlepageappbook.com/goal.html
[2] https://github.com/angular/angular.js
[3] https://docs.angularjs.org/guide/
[4] http://karma-runner.github.io/0.12/index.html
[5] http://viralpatel.net/blogs/angularjs-introduction-hello-world-tutorial/
[6] https://builtwith.angularjs.org/

By: Tina Johnson
The author is a FOSS enthusiast who has contributed to Mediawiki and Mozilla's Bugzilla. She is also working on a project to build a browser (using AngularJS) for autistic children.
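The unit-testing vocabulary above (describe( ), beforeEach( ), it( ) and expect( )) comes from Jasmine. The snippet below is a rough, framework-free sketch of how those pieces fit together; every name in it is a stand-in, not the real Jasmine or AngularJS implementation:

```javascript
// Minimal stand-ins for Jasmine's describe/beforeEach/it/expect,
// only to show how an Angular-style unit test is structured.
const results = [];
let setup = null;

function describe(name, fn) { fn(); }   // a suite simply runs its body
function beforeEach(fn) { setup = fn; } // remember the per-spec setup
function it(name, fn) {
  if (setup) setup();                   // setup runs before every spec
  fn();
  results.push(name + ": passed");
}
function expect(actual) {
  return {
    toBe(expected) {
      if (actual !== expected) throw new Error(actual + " !== " + expected);
    }
  };
}

// A trivial "service" under test, standing in for an injected Angular service.
let service;
describe("greeting service", function () {
  beforeEach(function () {
    service = { greet: (name) => "Hello, " + name };
  });
  it("greets by name", function () {
    expect(service.greet("OSFY")).toBe("Hello, OSFY");
  });
});

console.log(results.join("\n")); // → greets by name: passed
```

In real Angular code, beforeEach would load the module and use the injector to supply dependencies, as the article describes.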

Continued from page 37

There are some statements, like condition checking, where 'f b1' can be computed even without requiring the subsequent arguments, and hence the foldr function can work with infinite lists. There is also a strict version of foldl (foldl') that forces the computation before proceeding with the recursion.

If you want a reference to a matched pattern, you can use the as pattern syntax. The tail function accepts an input list and returns everything except the head of the list. You can write a tailString function that accepts a string as input and returns the string with the first character removed:

tailString :: String -> String
tailString "" = ""
tailString input@(x:xs) = "Tail of " ++ input ++ " is " ++ xs

The entire matched pattern is represented by input in the above code snippet.

Functions can be chained to create other functions. This is called 'composing' functions. The mathematical definition is as under:

(f o g)(x) = f(g(x))

This dot (.) operator has a high precedence and is right-associative. If you want to force an evaluation, you can use the function application operator ($), which has the lowest precedence and is also right-associative. For example:

*Main> (reverse ((++) "yrruC " (unwords ["skoorB", "lleksaH"])))
"Haskell Brooks Curry"

You can rewrite the above using the function application operator that is right-associative:

Prelude> reverse $ (++) "yrruC " $ unwords ["skoorB", "lleksaH"]
"Haskell Brooks Curry"

You can also use the dot notation to make it even more readable, but the final argument needs to be evaluated first; hence, you need to use the function application operator for it:

*Main> reverse . (++) "yrruC " . unwords $ ["skoorB", "lleksaH"]
"Haskell Brooks Curry"

By: Shakthi Kannan
The author is a free software enthusiast and blogs at shakthimaan.com.
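As a small, self-contained illustration of the composition definition above (the function names here are illustrative, not from the article):

```haskell
-- (f . g) x is f (g x): apply g first, then f.
double :: Int -> Int
double x = x * 2

increment :: Int -> Int
increment x = x + 1

doubleThenIncrement :: Int -> Int
doubleThenIncrement = increment . double

-- doubleThenIncrement 5 == increment (double 5) == 11
```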


Use Bugzilla to Manage Defects in Software

In the quest for excellence in software products, developers have to go through the process of defect management. The tool of choice for defect containment is Mozilla's Bugzilla. Learn how to install, configure and use it to file a bug report and act on it.

In any project, defect management and various types of testing play key roles in ensuring quality. Defects need to be logged, tracked and closed to ensure the project meets quality expectations. Generating defect trends also helps project managers to take informed decisions and make the appropriate course corrections while the project is being executed. Bugzilla is one of the most popular open source defect management tools and helps project managers to track the complete lifecycle of a defect.

Installation and configuration of Bugzilla

Step 1: Getting the source code
Bugzilla is part of the Mozilla foundation. Its latest releases are available from the official website. This article covers the installation of Bugzilla version 4.4.2. The steps mentioned here should apply to later releases as well. However, for version-specific releases, check the appropriate release notes. Here is the URL for downloading Bugzilla version 4.4.2 on a Linux system: http://www.bugzilla.org/releases/4.4.2/

Pre-requisites for Bugzilla include a CGI-enabled Web server (an Apache HTTP server), a database engine (MySQL, PostgreSQL, etc) and the latest Perl modules. Ensure all of them are on your Linux system before proceeding with the installation. This specific installation covers MySQL as the backend database.

Step 2: User and database creation
Before proceeding with the installation, the user and database need to be created by following the steps mentioned below. The names used here for the database and the users are specific to this installation, and can change between installations.
- Start the MySQL service by issuing the following command:

$ /etc/rc.d/init.d/mysql start

- Trigger MySQL by issuing the following command (you will be asked for the root password, so ensure you keep it handy):

$ mysql -u root -p

- Use the following statements, as shown at the MySQL prompt, to create a user in the database for Bugzilla:

mysql > CREATE USER 'bugzilla'@'localhost' IDENTIFIED BY


Figure 1: Configuring Bugzilla by changing the localconfig file

'password';
mysql > GRANT ALL PRIVILEGES ON *.* TO 'bugzilla'@'localhost';
mysql > FLUSH PRIVILEGES;
mysql > CREATE DATABASE bugzilla_db CHARACTER SET utf8;
mysql > GRANT SELECT,INSERT,UPDATE,DELETE,INDEX,ALTER,CREATE,DROP,REFERENCES ON bugzilla_db.* TO 'bugzilla'@'localhost' IDENTIFIED BY 'cspasswd';
mysql > FLUSH PRIVILEGES;
mysql > QUIT

Figure 2: Bugzilla main page
Figure 3: Defect lifecycle

- Use the following command to connect the user with the database:

$ mysql -u bugzilla -p bugzilla_db
mysql > use bugzilla_db
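Before moving on, you can optionally double-check that the account and grants created above took effect. The following statements (a sketch, reusing the same names as above) can be run from the MySQL prompt:

```sql
-- Confirm the Bugzilla user exists and inspect its privileges
SELECT user, host FROM mysql.user WHERE user = 'bugzilla';
SHOW GRANTS FOR 'bugzilla'@'localhost';
```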

Step 3: Bugzilla installation and configuration
After downloading the Bugzilla archive from the URL mentioned above, untar the package into the /var/www directory. All the configuration-related information can be modified via the localconfig file. To start with, set the variable $webservergroup as 'www' and set other items as mentioned in Figure 1.

Following the configuration, the installation can be completed by executing the following Perl script. Ensure this script is executed with root privileges:

$ ./checksetup.pl

Step 4: Integrating Bugzilla with Apache
Insert the following lines in the Apache server configuration file (httpd.conf) to integrate Bugzilla into it. We placed the directory bugzilla inside /var/www in our build folder:

<Directory /var/www/bugzilla>
AddHandler cgi-script .cgi
Options +ExecCGI
DirectoryIndex index.cgi index.html
AllowOverride Limit FileInfo Indexes Options
</Directory>

Our set up is now ready. Let's hit the address in the browser to see the home page of our freshly deployed Web application (http://localhost/bugzilla).

Figure 4: New account creation

Defect lifecycle management
The main purpose of Bugzilla is to manage the defect's lifecycle. Defects are created and logged in various phases of the project (e.g., functional testing), where they are created by the test engineer and assigned to development engineers for resolution. Along with that, managers or team members need to be aware of the change in the state of the defect to ensure that there is a good amount of traceability of the defects. When the defect is created, it is given a 'new' state, after which it is assigned to a development engineer for resolution. Subsequently, it will get 'resolved' and eventually be moved to the 'closed' state.

Step 1: User account creation
To start using Bugzilla, various user accounts have to be created. In this example, Bugzilla is deployed on a server named 'hydrogen'. On the home page, click the 'New Account' link available in the header/footer of the pages (refer to Figure 4). You will be asked for your email address; enter it and click the 'Send' button. After registration is accepted, you should receive an email at the address you provided confirming your registration. Now all you need to do is to


Figure 6: Defect resolution

click the 'Log in' link in the header/footer at the bottom of the page in your browser, enter your email address and the password you just chose into the login form, and click on the 'Log in' button. You will be redirected to the Bugzilla home page for defect interfacing.

Figure 5: New defect creation
Figure 7: Simple search

Step 2: Reporting the new bug
1. Click the 'New' link available in the header/footer of the pages, or the 'File a bug' option displayed on the home page of the Bugzilla installation, as shown in Figure 5.
2. Select the product in which you found a bug. Please note that the administrator will be able to create an appropriate product and corresponding versions from his account, which is not demonstrated here.
3. You now see a form on which you can specify the component, the version of the program you were using, the operating system and platform your program is running on, and the severity of the bug, as shown in Figure 5.
4. If there is any attachment, like a screenshot of the bug, attach it using the option 'Add an attachment' shown at the bottom of the page; else click on 'Submit Bug'.

Step 3: Defect resolution and closure
Once the bug is filed, the assignees (typically, developers) get an email. When the bug gets fixed, the developers add details like a bug-fixing summary, mark the status as 'resolved' using the status button, and can then route the defect back to the tester or to the development team leader for further review. This can be easily done by changing the 'assignee' field of a defect and filling it with an appropriate email ID. When the developers complete fixing the defect, it can be marked as shown in Figure 6. When the test engineers receive the resolved defect report, they can verify it and mark the status as 'closed'. At every step, notes from each individual are to be captured and logged along with the time-stamp. This helps in backtracking the defect in case any clarifications are required.

Figure 8: Simple dashboard of defects

Step 4: Reports and dashboards
Typically, in large-scale projects, there could be thousands of defects logged and fixed by hundreds of development and test engineers. To monitor the project at various phases, generation of reports and dashboards becomes very important. Bugzilla offers very simple but very powerful search and reporting features with which all the necessary information can be obtained immediately. By exploring the 'Search' and 'Reports' options, one can easily figure out ways to generate reports. A couple of simple examples are provided in Figure 7 (search) and Figure 8 (reports). Outputs can be exported to formats like CSV for further analysis.

Bugzilla is a very simple but powerful open source tool that helps in complete defect management in projects. Along with the information provided above, Bugzilla also exposes its source code, which can be explored for further scripting and programming. This helps to make Bugzilla a super-customised, defect-tracking tool for effectively managing defects.

By: Satyanarayana Sampangi
Satyanarayana Sampangi is a Member - Embedded Software at Emertxe Information Technologies (http://www.emertxe.com). His area of interest lies in embedded C programming combined with data structures and micro-controllers. He likes to experiment with C programming and open source tools in his spare time to explore new horizons. He can be reached at [email protected]

An Introduction to Device Drivers in the Linux Kernel

In the article 'An Introduction to the Linux Kernel' in the August 2014 issue of OSFY, we wrote and compiled a kernel module. In the second article in this series, we move on to device drivers.

Have you ever wondered how a computer plays audio or shows video? The answer is: by using device drivers. A few years ago we would always install audio or video drivers after installing MS Windows XP. Only then were we able to listen to the audio. Let us explore device drivers in this column.

A device driver (often referred to as a 'driver') is a piece of software that controls a particular type of device which is connected to the computer system. It provides a software interface to the hardware device, and enables access to the operating system and other applications. There are various types of drivers present in GNU/Linux, such as character, block, network and USB drivers. In this column, we will explore only character drivers.

Character drivers are the most common drivers. They provide unbuffered, direct access to hardware devices. One can think of character drivers as a long sequence of bytes -- the same as regular files, but which can be accessed only in sequential order. Character drivers support at least the open(), close(), read() and write() operations. The text console, i.e., /dev/console, serial consoles /dev/stty*, and audio/video drivers fall under this category.

To make a device usable there must be a driver present for it. So let us understand how an application accesses data from a device with the help of a driver. We will discuss the following four major entities.
- User-space application: This can be any simple utility like echo, or any complex application.
- Device file: This is a special file that provides an interface for the driver. It is present in the file system as an ordinary file. The application can perform all supported operations on it, just like for an ordinary file. It can move, copy, delete, rename, read and write these device files.
- Device driver: This is the software interface for the device and resides in the kernel space.
- Device: This can be the actual device present at the hardware level, or a pseudo device.

Let us take an example where a user-space application sends data to a character device. Instead of using an actual device we are going to use a pseudo device. As the name suggests, this device is not a physical device. In GNU/Linux, /dev/null is the most commonly used pseudo device. This device accepts any kind of data (i.e., input) and simply discards it. And it doesn't produce any output.

Let us send some data to the /dev/null pseudo device:

[mickey]$ echo -n 'a' > /dev/null

In the above example, echo is a user-space application and null is a special file present in the /dev directory. There is a null driver present in the kernel to control the pseudo device.

To send or receive data to and from the device, the application uses the corresponding device file, which is connected to the driver through the Virtual File System (VFS) layer. Whenever an application wants to perform any operation on the actual device, it performs this on the device file. The VFS layer redirects those operations to the appropriate functions that are implemented inside the driver. This means that whenever an application performs the open() operation on a device file, in reality the open() function from the driver is invoked, and the same concept applies to the other functions. The implementation of these operations is device-specific.

Major and minor numbers
We have seen that the echo command directly sends data to the device file. Hence, it is clear that to send or receive data to and from the device, the application uses special device files. But how does communication between the device file and the driver take place? It happens via a pair of numbers referred to as 'major' and 'minor' numbers.

The command below lists the major and minor numbers associated with a character device file:


[bash]$ ls -l /dev/null
crw-rw-rw- 1 root root 1, 3 Jul 11 20:47 /dev/null

In the above output there are two numbers separated by a comma (1 and 3). Here, '1' is the major and '3' is the minor number. The major number identifies the driver associated with the device, i.e., which driver is to be used. The minor number is used by the kernel to determine exactly which device is being referred to. For instance, a hard disk may have three partitions. Each partition will have a separate minor number but only one major number, because the same storage driver is used for all the partitions.

Older kernels used to have a separate major number for each driver. But modern Linux kernels allow multiple drivers to share the same major number. For instance, /dev/full, /dev/null, /dev/random and /dev/zero use the same major number but different minor numbers. The output below illustrates this:

[bash]$ ls -l /dev/full /dev/null /dev/random /dev/zero
crw-rw-rw- 1 root root 1, 7 Jul 11 20:47 /dev/full
crw-rw-rw- 1 root root 1, 3 Jul 11 20:47 /dev/null
crw-rw-rw- 1 root root 1, 8 Jul 11 20:47 /dev/random
crw-rw-rw- 1 root root 1, 5 Jul 11 20:47 /dev/zero

The kernel uses the dev_t type to store major and minor numbers. The dev_t type is defined in the <linux/types.h> header file. Given below is the representation of the dev_t type from the header file:

#ifndef _LINUX_TYPES_H
#define _LINUX_TYPES_H

#define __EXPORTED_HEADERS__
#include

typedef __u32 __kernel_dev_t;

typedef __kernel_dev_t dev_t;

dev_t is an unsigned 32-bit integer, where 12 bits are used to store the major number and the remaining 20 bits are used to store the minor number. But don't try to extract the major and minor numbers directly. Instead, the kernel provides the MAJOR and MINOR macros that can be used to extract them. The definition of the MAJOR and MINOR macros from the <linux/kdev_t.h> header file is given below:

#ifndef _LINUX_KDEV_T_H
#define _LINUX_KDEV_T_H

#include

#define MINORBITS 20
#define MINORMASK ((1U << MINORBITS) - 1)

#define MAJOR(dev) ((unsigned int) ((dev) >> MINORBITS))
#define MINOR(dev) ((unsigned int) ((dev) & MINORMASK))

If you have major and minor numbers and you want to convert them to the dev_t type, the MKDEV macro will do the needful. The definition of the MKDEV macro from the header file is given below:

#define MKDEV(ma,mi) (((ma) << MINORBITS) | (mi))

We now know what major and minor numbers are and the role they play. Let us see how we can allocate major numbers. Here is the prototype of register_chrdev():

int register_chrdev(unsigned int major, const char *name, struct file_operations *fops);

This function registers a major number for character devices. The arguments of this function are self-explanatory. The major argument implies the major number of interest, name is the name of the driver and appears in the /proc/devices area and, finally, fops is the pointer to the file_operations structure.

Certain major numbers are reserved for special drivers; hence, one should exclude those and use dynamically allocated major numbers. To allocate a major number dynamically, provide the value zero to the first argument, i.e., major == 0. This function will dynamically allocate and return a major number. To deallocate an allocated major number use the unregister_chrdev() function. The prototype is given below and the parameters of the function are self-explanatory:

void unregister_chrdev(unsigned int major, const char *name)

The values of the major and name parameters must be the same as those passed to the register_chrdev() function; otherwise, the call will fail.

File operations
So we know how to allocate/deallocate the major number, but we haven't yet connected any of our driver's operations to the major number. To set up a connection, we are going to use the file_operations structure. This structure is defined in the <linux/fs.h> header file. Each field in the structure must point to the function in the driver that implements a specific operation, or be left NULL for unsupported operations. The example given below illustrates that.

Without discussing lengthy theory, let us write our first 'null' driver, which mimics the functionality of the /dev/null pseudo device. Given below is the complete working code for the 'null' driver. Open a file using your favourite text editor and save the code given below as null_driver.c:


#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/fs.h>

static int major;
static char *name = "null_driver";

static int null_open(struct inode *i, struct file *f)
{
    printk(KERN_INFO "Calling: %s\n", __func__);
    return 0;
}

static int null_release(struct inode *i, struct file *f)
{
    printk(KERN_INFO "Calling: %s\n", __func__);
    return 0;
}

static ssize_t null_read(struct file *f, char __user *buf, size_t len, loff_t *off)
{
    printk(KERN_INFO "Calling: %s\n", __func__);
    return 0;
}

static ssize_t null_write(struct file *f, const char __user *buf, size_t len, loff_t *off)
{
    printk(KERN_INFO "Calling: %s\n", __func__);
    return len;
}

static struct file_operations null_ops =
{
    .owner = THIS_MODULE,
    .open = null_open,
    .release = null_release,
    .read = null_read,
    .write = null_write
};

static int __init null_init(void)
{
    major = register_chrdev(0, name, &null_ops);
    if (major < 0) {
        printk(KERN_INFO "Failed to register driver.");
        return -1;
    }

    printk(KERN_INFO "Device registered successfully.\n");
    return 0;
}

static void __exit null_exit(void)
{
    unregister_chrdev(major, name);
    printk(KERN_INFO "Device unregistered successfully.\n");
}

module_init(null_init);
module_exit(null_exit);

MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Null driver");

Our driver code is ready. Let us compile and insert the module. In last month's article, we learnt how to write a Makefile for kernel modules:

[mickey]$ make
[root]# insmod ./null_driver.ko

We are now going to create a device file for our driver. But for this we need a major number, and we know that our driver's register_chrdev() function will allocate the major number dynamically. Let us find out this dynamically allocated major number from /proc/devices, which shows the currently loaded kernel modules:

[root]# grep "null_driver" /proc/devices
248 null_driver

From the above output, we are going to use '248' as the major number for our driver. We are only interested in the major number, and the minor number can be anything within a valid range. I'll use '0' as the minor number. To create the character device file, use the mknod utility. Please note that to create the device file you must have superuser privileges:

[root]# mknod /dev/null_driver c 248 0

Now it's time for the action. Let us send some data to the pseudo device using the echo command and check the output of the dmesg command:

[root]# echo "Hello" > /dev/null_driver

[root]# dmesg
Device registered successfully.
Calling: null_open
Calling: null_write
Calling: null_release

Yes! We got the expected output. When open, write, close


operations are performed on a device file, the appropriate functions from our driver's code get called. Let us perform the read operation and check the output of the dmesg command:

[root]# cat /dev/null_driver

[root]# dmesg
Calling: null_open
Calling: null_read
Calling: null_release

To make things simple I have used printk() statements in every function. If we remove these statements, then /dev/null_driver will behave exactly the same as the /dev/null pseudo device. Our code is working as expected. Let us understand the details of our character driver.

First, take a look at the driver's functions. Given below are the prototypes of a few functions from the file_operations structure:

int (*open)(struct inode *i, struct file *f);
int (*release)(struct inode *i, struct file *f);
ssize_t (*read)(struct file *f, char __user *buf, size_t len, loff_t *off);
ssize_t (*write)(struct file *f, const char __user *buf, size_t len, loff_t *off);

The prototype of the open() and release() functions is exactly the same. These functions accept two parameters—the first is the pointer to the inode structure. All file-related information such as size, owner, access permissions of the file, file creation timestamps, number of hard-links, etc, is represented by the inode structure. And each open file is represented internally by the file structure. The open() function is responsible for opening the device and allocation of required resources. The release() function does exactly the reverse job, which closes the device and deallocates the resources.

As the name suggests, the read() function reads data from the device and sends it to the application. The first parameter of this function is the pointer to the file structure. The second parameter is the user-space buffer. The third parameter is the size, which implies the number of bytes to be transferred to the user-space buffer. And, finally, the fourth parameter is the file offset, which updates the current file position. Whenever the read() operation is performed on a device file, the driver should copy len bytes of data from the device to the user-space buffer buf and update the file offset off accordingly. This function returns the number of bytes read successfully. Our null driver doesn't read anything; that is why the return value is always zero, i.e., EOF.

The driver's write() function accepts the data from the user-space application. The first parameter of this function is the pointer to the file structure. The second parameter is the user-space buffer, which holds the data received from the application. The third parameter is len, which is the size of the data. The fourth parameter is the file offset. Whenever the write() operation is performed on a device file, the driver should transfer len bytes of data to the device and update the file offset off accordingly. Our null driver accepts input of any length; hence, the return value is always len, i.e., all bytes are written successfully.

In the next step we have initialised the file_operations structure with the appropriate driver's functions. In the initialisation function we have done the registration-related job, and we are deregistering the character device in the cleanup function.

Implementation of the full pseudo driver
Let us implement one more pseudo device, namely, full. Any write operation on this device fails and gives the 'ENOSPC' error. This can be used to test how a program handles disk-full errors. Given below is the complete working code of the full driver:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/fs.h>

static int major;
static char *name = "full_driver";

static int full_open(struct inode *i, struct file *f)
{
    return 0;
}

static int full_release(struct inode *i, struct file *f)
{
    return 0;
}

static ssize_t full_read(struct file *f, char __user *buf, size_t len, loff_t *off)
{
    return 0;
}

static ssize_t full_write(struct file *f, const char __user *buf, size_t len, loff_t *off)
{
    return -ENOSPC;
}

static struct file_operations full_ops =
{
    .owner = THIS_MODULE,
    .open = full_open,
    .release = full_release,
    .read = full_read,
    .write = full_write
};

Continued on page 55

50 | September 2014 | OPEN SOURCE For You | www.OpenSourceForU.com

Creating Dynamic Web Portals Using Joomla and WordPress

Joomla and WordPress are popular Web content management systems, which provide authoring, collaboration and administration tools designed to allow amateurs to create and manage websites with ease.

Nowadays, every organisation wishes to have an online presence for maximum visibility as well as reach. Industries from across different sectors have their own websites with detailed portfolios, so that marketing as well as broadcasting can be integrated very effectively. Web 2.0 applications are quite popular in the global market. With Web 2.0, the applications developed are fully dynamic, so that the website can provide customised results or output to the client. Traditionally, long-term core coding, using different programming or scripting languages like CGI, Perl, Python, Java, PHP, ASP and many others, has been in vogue. But today excellent applications can be developed within very little time. The major factor behind the implementation of RAD frameworks is re-usability. By making changes to the existing code or by merely reusing the applications, development has now become very fast and easy.
Digital repositories and CMSs have a lot of feature overlap, but both systems are unique in terms of their underlying purposes and the functions they fulfill.

Software frameworks
Software frameworks and content management systems (CMSs) are entirely different concepts. In the case of CMSs, the reusable modules, plugins and related components are provided with the source code, and all that is required is to plug them in or out. Frameworks need to be installed and imported on the host machine and then their functions are called. This means that the framework, with its different classes and functions, needs to be called by the programmer depending upon the module and feature required in the application. As far as user-friendliness is concerned, CMSs are very easy to use. CMS products can be used and deployed even by those who do not have very good programming skills.
A framework can be considered as a model, a structure or simply a programming template that provides classes, events and methods to develop an application. Generally, a software framework is a real or conceptual structure of software intended to serve as a support or guide to build something that expands the structure into something useful. A software framework can be seen as a layered structure, indicating which kinds of programs can or should be built and the way they interrelate.

Content Management Systems (CMSs)
A CMS for developing Web applications is an integrated application that is used to create, deploy, manage and store content on Web pages. The Web content includes plain or formatted text, embedded graphics in multiple formats, photos, video and audio, as well as code, which can be third-party APIs for interaction with the user.


PHP-based open source frameworks

• Laravel
• Prado
• Phalcon
• Seagull
• Symfony
• Yii
• CodeIgniter
• CakePHP

Digital repositories
An institutional repository refers to the online archive or library for collecting, preserving and disseminating digital copies of the intellectual output of the institution, particularly in the field of research. For any academic institution like a university, it also includes digital content such as academic journal articles. It covers both pre-prints and post-prints, articles undergoing peer review, as well as digital versions of theses and dissertations. It even includes some other digital assets generated in an institution, such as administrative documents, course notes or learning objectives. Depositing material in an institutional repository is sometimes mandated by some institutions.

PHP-based open source CMSs
• Joomla
• Drupal
• WordPress
• Typo3
• Mambo

Joomla CMS
Joomla is an award-winning open source CMS written in PHP. It enables the building of websites and powerful online applications. Many aspects, including its user-friendliness and extensible nature, make Joomla the most popular Web-based software development CMS. Joomla is built on the model–view–controller (MVC) Web application framework, which can be used independent of the CMS.
Joomla CMS can store data in a MySQL, MS SQL or PostgreSQL database, and includes features like page caching, RSS feeds, printable versions of pages, news flashes, blogs, polls, search and support for language internationalisation. According to reports by Market Wire, New York, as of February 2014, Joomla has been downloaded over 50 million times. Over 7,700 free and commercial extensions are available from the official Joomla Extension Directory and more are available from other sources. It is supposedly the second most used CMS on the Internet after WordPress. Many websites provide information on installing and maintaining Joomla sites.
Joomla is used across the globe to power websites of all types and sizes:
• Corporate websites or portals
• Corporate intranets and extranets
• Online magazines, newspapers and publications
• E-commerce and online reservation sites
• Sites offering government applications
• Websites of small businesses and NGOs
• Community-based portals
• School and church websites
• Personal or family home pages
Joomla’s user base includes:
• The military - http://www.militaryadvice.org/
• US Army Corps of Engineers - http://www.spl.usace.army.mil/cms/index.
• MTV Networks Quizilla (social networking) - http://www.quizilla.com
• New Hampshire National Guard - https://www.nh.ngb.army.mil/
• United Nations Regional Information Centre - http://www.unric.org
• IHOP (a restaurant chain) - http://www.ihop.com
• Harvard University - http://gsas.harvard.edu
… and many others
The essential features of Joomla are:
• User management
• Media manager
• Language manager
• Banner management
• Contact management
• Polls
• Search
• Web link management
• Content management
• Syndication and newsfeed management
• Menu manager
• Template management
• Integrated help system
• System features
• Web services
• Powerful extensibility

Joomla extensions
Joomla extensions are used to extend the functionality of Joomla-based Web applications. The Joomla extensions for multiple categories and services can be downloaded from http://extensions.joomla.org.

Figure 1: Joomla extensions


Figure 3: Database configuration panel for setting up Joomla

Figure 2: Creating a MySQL user in a Web hosting panel

Installing and working with Joomla
For Joomla installation on a Web server, whether local or hosted, we need to download the Joomla installation package, which ought to be done from the official website, Joomla.org. If Joomla is downloaded from websites other than the official one, there are risks of viruses or malicious code in the set-up files.
Once you click the Download button for the latest stable Joomla version, the installation package will be saved to the local hard disk. Extract it so that it can be made ready for deployment.
Now, at this instant, upload the extracted files and folders to the Web server. The easiest and safest method to upload the Joomla installation files is via FTP. If Joomla is required to be installed live on a specific domain, upload the extracted files to the public_html folder on the online file manager of the domain. If access to Joomla is needed on a sub-folder of any domain (www.mydomain.com/myjoomla), it should be uploaded to the appropriate sub-directory (public_html/myjoomla/).
After this step, create a blank MySQL database and assign a user to it with full permissions. A blank database is created because Joomla will automatically create the tables inside that database. Once you have created your MySQL database and user, save the database name, database user name and password just created because, during Joomla installation, you will be asked for these credentials.
After uploading the installation files, open the Web browser and navigate to the main domain (http://www.mysite.com), or to the appropriate sub-domain (http://www.mysite.com/joomla), depending upon the location the Joomla installation package is uploaded to. Once done, the first screen of the Joomla Web Installer will open up.
Once you fill in all the required fields, press the Next button to proceed with the installation. On the next screen, you will have to enter the necessary information for your MySQL database.
After all the necessary information has been filled in at all stages, press the Next button to proceed. You will be forwarded to the last page of the installation process. On this page, specify if you want any sample data installed on your server.
The second part of the page will show the pre-installation checks. The Web hosting servers will check that all Joomla requirements and prerequisites have been met, and you will see a green check after each line.
Finally, click the Install button to start the actual Joomla installation. In a few moments, you will be redirected to the last screen of the Joomla Web Installer. On the last screen of the installation process, press the Remove installation folder button. This is required for security reasons; otherwise, the installation will restart every time. Joomla is now ready to be used.

Creating articles and linking them with the menu
After installation, the administrator panel to control the Joomla website is displayed. Here, different modules, plugins and components, along with the HTML contents, can be added or modified.

WordPress CMS
WordPress is another free and open source blogging CMS tool based on PHP and MySQL. The features of WordPress include a specialised plugin architecture with a template system. WordPress is the most popular blogging system in use on the Web, used by more than 60 million websites. It was initially released in 2003 with the objective of providing an easy-to-use CMS for multiple domains.
The installation steps for all CMSs are almost the same. The compressed file is extracted and deployed on the public_html folder of the Web server. In the same way, a blank database is created and the credentials are entered during the installation steps.
According to the official declaration of WordPress, this CMS powers more than 17 per cent of the Web and the figure
is rising every day.


Figure 4: Administrator login for Joomla
Figure 5: WYSIWYG editor for creating articles

The salient features of WordPress are:
• Simplicity
• Flexibility
• Ease of publishing
• Publishing tools
• User management
• Media management
• Full standards compliance
• Easy theme system
• Can be extended with plugins
• Built-in comments
• Search engine optimised
• Multi-lingual
• Easy installation and upgrades
• Importers
• Strong community of troubleshooters
Worldwide users of WordPress include:
• FIU College of Engineering and Computing
• MTV Newsroom
• Sony Music
• Nicholls State University
• Milwaukee School of Engineering
…and many others

By: Dr Gaurav Kumar
The author is the MD of Magma Research & Consultancy Pvt Ltd, Ambala. He is associated with a number of academic institutes, where he delivers lectures and conducts technical workshops on the latest technologies and tools. He can be contacted at [email protected].

To be continued from page.... 51

static int __init full_init(void)
{
	major = register_chrdev(0, name, &full_ops);
	if (major < 0) {
		printk(KERN_INFO "Failed to register driver.");
		return -1;
	}
	return 0;
}

static void __exit full_exit(void)
{
	unregister_chrdev(major, name);
}

module_init(full_init);
module_exit(full_exit);

MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Full driver");

Let us compile and insert the module:

[mickey]$ make

[root]# insmod ./full_driver.ko

[root]# grep "full_driver" /proc/devices
248 full_driver

[root]# mknod /dev/full_driver c 248 0

[root]# echo "Hello" > /dev/full_driver
-bash: echo: write error: No space left on device

If you want to learn more about GNU/Linux device drivers, the Linux kernel's source code is the best place to do so. You can browse the kernel's source code at http://lxr.free-electrons.com/, and you can also download the latest source code from https://www.kernel.org/. Additionally, there are a few good books available in the market, like ‘Linux Kernel Development' (3rd Edition) by Robert Love, and ‘Linux Device Drivers' (3rd Edition), which is a free book; you can download it from http://lwn.net/Kernel/LDD3/. These books also explain kernel debugging tools and techniques.

By: Narendra Kangralkar
The author is a FOSS enthusiast and loves exploring anything related to open source. He can be reached at [email protected]

Compile a GPIO Control Application and Test It On the Raspberry Pi

GPIO is the acronym for General Purpose Input/Output. The role played by GPIO drivers is to handle I/O requests to read from or write to groups of GPIO pins. Let’s try and compile a GPIO driver.

This article goes deep into what really goes on inside an OS while managing and controlling the hardware. The OS hides all the complexities, carries out all the operations and gives end users their requirements through the UI (User Interface). GPIO can be considered the simplest of all the peripherals to work on any board. A small GPIO driver would be the best medium to explain what goes on under the hood.
A good embedded systems engineer should, at the very least, be well versed in the C language. Even if the following demonstration can't be replicated (due to the unavailability of hardware or software resources), a careful read through this article will give readers an idea of the underlying processes.

Prerequisites to perform this experiment
• C language (high priority)
• Raspberry Pi board (any model)
• BCM2835-ARM-peripherals datasheet (just Google for it!)
• Jumper (female-to-female)
• SD card (with bootable Raspbian image)

Here's a quick overview of what device drivers are. As the name suggests, they are pieces of code that drive your device. One can even consider them a part of the OS (in this case, Linux) or a mediator between your hardware and the UI.
A basic understanding of how device drivers actually work is required; so do learn more about that in case you need to. Let's move forward to the GPIO driver, assuming that one knows the basics of device drivers (like inserting/removing the driver from the kernel, probe functionality, etc).
When you insert (insmod) this driver, it will register itself as a platform driver with the OS. The platform device is also registered in the same driver. Contrary to this, registering the platform device in the board file is a good practice. A peripheral can be termed a platform device if it is a part of the SoC (system on chip). Once the driver is inserted, the registration (platform device and platform driver) takes place,


after which the probe function gets called.

Figure 1: System layout (user applications → GNU C Library (glibc) → system call interface → kernel and architecture-dependent kernel code → hardware platform, spanning user space and kernel space)

Generic information
Probe in the driver gets called whenever the name of an (already registered) device matches the name of your platform driver (here, it is bcm-gpio). The second major functionality is ioctl, which acts as a bridge between the application space and your driver. In technical terms, whenever your application invokes this (ioctl) system call, the call will be routed to this function of your driver. Once the call from the application is in your driver, you can process or provide data inside the driver and can respond to the application.
The SoC datasheet, i.e., BCM2835-ARM-Peripherals, plays a pivotal role in building up this driver. It consists of all the information pertaining to the peripherals supported by your SoC. It exposes all the registers relevant to a particular peripheral, which is where the key is. Once you know which registers of a peripheral are to be configured, half the job is done. Be cautious about which address has to be used to access these peripherals.

Types of addressing modes
There are three kinds of addressing modes - virtual addressing, physical addressing and system bus addressing. To learn the details, turn to Page 6 of the datasheet. The macro __io_address implemented in the probe function of the driver returns the virtual address of the physical address passed as an argument. For GPIO, the physical address is 0x20200000 (0x20000000 + 0x200000), where 0x20000000 is the base address and 0x200000 is the peripheral offset. Turn to Page 5 of the datasheet for more details. Any guesses on which address the macro __io_address would return? The address returned by this macro can then be used for accessing (reading or writing) the concerned peripheral registers.
The GPIO control application is analogous to a simple C program with an additional ioctl call. This call is capable of passing data from the application layer to the driver layer with an appropriate command. I have restricted the use of other GPIOs, as they are not exposed to headers like the others. So, modify the application as per your requirements. More information is available on this peripheral from Page 89 of the datasheet. In this code, I have just added functionality for setting or clearing a GPIO. Another interesting feature is that, by configuring the appropriate registers, you can configure GPIOs as interrupt pins. So whenever a pulse is routed to that pin, the processor, i.e., the ARM, is interrupted and the corresponding handler registered for that interrupt is invoked to handle and process it. This interesting aspect will be taken up in later articles.

Compilation of the GPIO device driver
There are two ways in which you can compile your driver:
• Cross compilation on the host PC
• Local compilation on the target board
In the first method, one needs to have certain packages downloaded. These are:
• ARM cross-compiler
• Raspbian kernel source (the kernel version must match the one running on your Pi; otherwise, the driver will not load onto the OS due to the version mismatch)
In the second method, one needs to install certain packages on the Pi:
• Go to the following link and follow the steps indicated: http://stackoverflow.com/questions/20167411/how-to-compile-a-kernel-module-for-raspberry-pi
• Or, follow the third answer at this link, the starting line of which says, "Here are the steps I used to build the ‘Hello World’ kernel module on Raspbian."
I went ahead with the second method as it was more straightforward.

Testing on your Raspberry Pi
Boot up your Raspberry Pi using minicom and you will see a console that resembles mine (Figure 2).

Figure 2: Console


Figure 3: dmesg output

• Run 'sudo dmesg -C'. (This command cleans up all the kernel boot print logs.)
• Run 'sudo make'. (This command compiles the GPIO driver. Do this only for the second method.)
• Run 'sudo insmod gpio_driver.ko'. (This command inserts the driver into the OS.)
• Run 'dmesg'. You can see the prints from the GPIO driver and the major number allocated to it, as shown in Figure 3. (The major number plays a unique role in identifying the specific driver with which a process from the application space wants to communicate, whereas the minor number is used to recognise the hardware.)
• Run 'sudo mknod /dev/bcm-gpio c major-num 0'. (The 'mknod' command creates a node in the /dev directory; 'c' stands for character device and '0' is the minor number.)
• Run 'sudo gcc gpio_app.c -o gpio_app'. (Compile the GPIO control application.)

Now let's test our GPIO driver and application. To verify whether our driver is indeed communicating with GPIO, short pins 25 and 24 (one can use other available pins like 17, 22 and 23 as well, but make sure that they aren't mixed up for any other peripheral) using the female-to-female jumper (Figure 4).

Figure 4: R-pi GPIO

The default values of both the pins will be 0. To confirm the default values, run the following commands:

sudo ./app -n 25 -g 1

This will be the output: The output value of GPIO 25 = 0.
Now run the following command:

sudo ./app -n 24 -g 1

This will again be the output: The output value of GPIO 24 = 0.
That's it. It's verified (see Figure 5).

Figure 5: Output showing GPIO 24=0

Now, as the GPIO pins are shorted, if we output 1 to 24 then it would be the input value of 25, and vice versa. To test this, run:

sudo ./app -n 24 -d 1 -v 1 -s 1

This command will drive the value of GPIO 24 to 1, which in turn will be routed to GPIO 25. To verify the value of GPIO 25, run:

sudo ./app -n 25 -g 1

This will give the output: The output value of GPIO 25 = 1 (see Figure 6).

Figure 6: Output showing GPIO 25=1

One can also connect any external device or a simple LED (through a resistor) to the GPIO pin and test its output.
Arguments passed to the application through the command line are:
-n : GPIO number
-d : GPIO direction (0 - IN or 1 - OUT)
-v : GPIO value (0 or 1)
-s/-g : set/get GPIO

The files are:
gpio_driver.c : GPIO driver file
gpio_app.c : GPIO control application
gpio.h : GPIO header file
Makefile : File to compile the GPIO driver

After conducting this experiment, some curious folk may have questions like:
• Why does one have to use virtual addresses to access GPIO?
• How does one determine the virtual address from the physical address?
We will discuss the answers to these in later articles.

By: Sumeet Jain
The author works at eInfochips as an embedded systems engineer. You can reach him at [email protected]



Figure 1: Load balancing using the Pound server (users' HTTP traffic reaches the Pound server at 192.168.10.30, which performs the required load balancing across Web Server 1 at 192.168.10.31 and Web Server 2 at 192.168.10.32)

[root@apachewebsever1 Packages]#

Start the service:

[root@apachewebsever1 ~]# service httpd start
Starting httpd:                                            [ OK ]
[root@apachewebsever1 ~]#

Start the service at boot time:

[root@apachewebsever1 ~]# chkconfig httpd on
[root@apachewebsever1 ~]#
[root@apachewebsever1 ~]# chkconfig --list httpd
httpd	0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@apachewebsever1 ~]#

The directory location of Apache HTTP Service is /etc/httpd/. Figure 2 gives the default test page for Apache Web Server on Red Hat Enterprise Linux.

Figure 2: Default page

Now, let's create a Web page index.html at /var/www/html. Restart Apache Web Service to bring the changes into effect. The index.html Web page will be displayed (Figure 3).

Figure 3: Custom web page of Apache Web Server1 ('...ApacheWebServer1...')

Repeat the above steps for Web Server2 or ApacheWebServer2.linuxrocks.org, except for the following:
• Set the IP address to 192.168.10.32
• The contents of the custom Web page 'index.html' should be '…ApacheWebServer2…', as shown in Figure 4.

Figure 4: Custom web page of Apache Web Server2 ('...ApacheWebServer2...')

Installation and configuration of the Pound gateway server
First, ensure YUM is up and running:

[root@poundgateway ~]# ps -ef | grep yum
root 2050 1998 0 13:30 pts/1 00:00:00 grep yum
[root@poundgateway ~]#

[root@poundgateway ~]# yum clean all
Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
Cleaning repos:
Cleaning up Everything
[root@poundgateway ~]#

[root@poundgateway ~]# yum update all
Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
Setting up Update Process
No Match for argument: all
No package all available.
No Packages marked for Update
[root@poundgateway ~]#

Then, check the default directory of YUM:

[root@poundgateway ~]# cd /etc/yum.repos.d/
[root@poundgateway yum.repos.d]#
[root@poundgateway yum.repos.d]# ll
total 8
-rw-r--r-- 1 root root 67 Jul 27 13:30 redhat.repo
-rw-r--r--. 1 root root 529 Apr 27 2011 rhel-source.repo
[root@poundgateway yum.repos.d]#

By default, the repo file 'rhel-source.repo' is disabled (enabled = 0). To enable it, edit the file 'rhel-source.repo' and change the value to enabled = 1. For now, you can leave this repository disabled.


Now, download the ‘epel-release-6-8.noarch.rpm’ package and install it.

Important notes on EPEL
1. EPEL stands for Extra Packages for Enterprise Linux.
2. EPEL is not a part of RHEL but provides a lot of open source packages for major Linux distributions.
3. EPEL packages are maintained by the Fedora team and are fully open source, with no core duplicate packages and no compatibility issues. They are to be installed using the YUM utility.

The link to download the EPEL release for RHEL 6 (32-bit) is:
http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
And for 64-bit, it is:
http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Here, epel-release-6-8.noarch.rpm is kept at /opt. Go to the /opt directory and change the permission of the files:

[root@poundgateway opt]# chmod -R 755 epel-release-6-8.noarch.rpm
[root@poundgateway opt]#

Now, install ‘epel-release-6-8.noarch.rpm’:

[root@poundgateway opt]# rpm -ivh --aid --force epel-release-6-8.noarch.rpm
warning: epel-release-6-8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
   1:epel-release           ########################################### [100%]
[root@poundgateway opt]#

epel-release-6-8.noarch.rpm installs the repo files necessary to download the Pound package:

[root@poundgateway ~]# cd /etc/yum.repos.d/
[root@poundgateway yum.repos.d]#
[root@poundgateway yum.repos.d]# ll
total 16
-rw-r--r-- 1 root root 957 Nov 4 2012 epel.repo
-rw-r--r-- 1 root root 1056 Nov 4 2012 epel-testing.repo
-rw-r--r-- 1 root root 67 Jul 27 13:30 redhat.repo
-rw-r--r--. 1 root root 529 Apr 27 2011 rhel-source.repo
[root@poundgateway yum.repos.d]#

As observed, epel.repo and epel-testing.repo are the newly added repo files. No changes are made in epel.repo and epel-testing.repo. Move the default redhat.repo and rhel-source.repo to the backup location. Now, connect the server to the Internet and, using the yum utility, install Pound:

[root@PoundGateway ~]# yum install Pound*

This will install Pound and Pound-debuginfo, and will also install the required dependencies along with them.
To verify Pound's installation, type:

[root@PoundGateway ~]# rpm -qa Pound
Pound-2.6-2.el6.i686
[root@PoundGateway ~]#

The location of the Pound configuration file is /etc/pound.cfg. You can view the default Pound configuration file by using the command given below:

[root@PoundGateway ~]# cat /etc/pound.cfg

Make the changes to the Pound configuration file as shown in the code snippet given below:
• We will comment out the section related to "ListenHTTPS", as we do not need HTTPS for now.
• Add the IP address 192.168.10.30 under the 'ListenHTTP' section.
• Add the IP addresses 192.168.10.31 and 192.168.10.32 with Port 80 under the 'Service BackEnd' section, where 192.168.10.30 is for the Pound server, 192.168.10.31 for Web Server1 and 192.168.10.32 for Web Server2.

The edited Pound configuration file is:

[root@PoundGateway ~]# cat /etc/pound.cfg
#
# Default pound.cfg
#
# Pound listens on port 80 for HTTP and port 443 for HTTPS
# and distributes requests to 2 backends running on localhost.
# see pound(8) for configuration directives.
# You can enable/disable backends with poundctl(8).
#

User "pound"
Group "pound"
Control "/var/lib/pound/pound.cfg"

ListenHTTP
	Address 192.168.10.30
	Port 80
End


#ListenHTTPS
#	Address 0.0.0.0
#	Port 443
#	Cert "/etc/pki/tls/certs/pound.pem"
#End

Service
	BackEnd
		Address 192.168.10.31
		Port 80
	End
	BackEnd
		Address 192.168.10.32
		Port 80
	End
End
[root@PoundGateway ~]#

Now, start the Pound service:

[root@PoundGateway ~]# service pound start
Starting Pound: starting...                                [OK]
[root@PoundGateway ~]#

To configure the service to be started at boot time, type:

[root@PoundGateway ~]# chkconfig pound on
[root@PoundGateway ~]# chkconfig --list pound
pound	0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@PoundGateway ~]#

Observation
Now open a Web browser and access the URL http://192.168.10.30. It displays the Web page from Web Server1 – ApacheWebServer1.linuxrocks.org. Refresh the page, and it will display the Web page from Web Server2 – ApacheWebServer2.linuxrocks.org. Keep refreshing the Web page; it will flip from Web Server1 to Web Server2, back and forth. We have now configured a system where the load on the Web server is being balanced between two physical servers.

By: Arindam Mitra
The author can be reached at [email protected] or [email protected]


Why We Need to Handle Bounced Emails

Bounced emails are the bane of marketing campaigns and mailing lists. In this article, the author explains the nature of bounce messages and describes how to handle them.

Wikipedia defines a bounce email as a system-generated failed delivery status notification (DSN) or a non-delivery report (NDR), which informs the original sender about a delivery problem. When that happens, the original email is said to have bounced. Broadly, bounces are categorised into two types:
• A hard/permanent bounce: This indicates that there exists a permanent reason for the email not to get delivered. These are valid bounces, and can be due to the non-existence of the email address, an invalid domain name (DNS lookup failure), or the email provider blacklisting the sender/recipient email address.
• A soft/temporary bounce: This can occur due to various reasons at the sender or recipient level. It can be caused by a network failure, the recipient mailbox being full (quota exceeded), the recipient having turned on a 'vacation reply', the local Message Transfer Agent (MTA) not responding or being badly configured, and a whole lot of other reasons. Such bounces cannot be used to determine the status of a failing recipient, and therefore need to be sorted out effectively from our bounce processing.

To understand this better, consider a sender alice@example.com sending an email to bob@somewhere.com. She mistyped the recipient's address as bub@somewhere.com. The email message will have a default envelope sender, set by the local MTA running there (mta.example.com), or by the PHP script, to alice@example.com. Now, mta.example.com looks up the DNS MX records for somewhere.com, chooses a host from that list, gets its IP address and tries to connect to the MTA running on somewhere.com, port 25, via an SMTP connection. Now the MTA of somewhere.com is in trouble, as it can't find the user 'bub' in its local user table. So mta.somewhere.com responds to example.com with an SMTP failure code, stating that the user lookup failed (code 550). It's time for mta.example.com to generate a bounce email to the address in the Return-Path email header (the envelope sender), with a message that the email to bub@somewhere.com failed. That's a bounce email.
Properly maintained mailing lists will have every email passing through them branded with a generic email ID, say mails@somewhere.com, as the envelope sender, and bounces to that address will be wasted if left unhandled.
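The hard/soft split above maps onto standard SMTP reply codes: permanent failures are reported with 5xx codes (like the 550 in the walkthrough) and temporary ones with 4xx. A minimal shell sketch of that classification (the code ranges follow SMTP convention; the helper function is illustrative, not from the article):

```shell
# Classify an SMTP status code the way a bounce processor would:
# 5xx -> hard/permanent bounce, 4xx -> soft/temporary bounce.
classify_bounce() {
  case "$1" in
    5??) echo "hard" ;;     # e.g., 550: user unknown, invalid domain
    4??) echo "soft" ;;     # e.g., 452: mailbox full / quota exceeded
    *)   echo "unknown" ;;  # 2xx/3xx replies are not bounces at all
  esac
}

classify_bounce 550   # the user-lookup failure from the example
classify_bounce 452
```

Only the "hard" class should ever trigger removal of a recipient from a list, as the later sections explain.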


VERP (Variable Envelope Return-Path)
In the above example, you will have noticed that the delivery failure message was sent back to the address of the Return-Path header in the original email. If there is a key to handling the bounced emails, it comes from the Return-Path header.
The idea of VERP is to safely encode the recipient details, too, somehow in the return-path, so that we can parse the received bounce effectively and extract the failing recipient from it. We specifically use the Return-Path header, as that is the only header that is not going to get tampered with by the intervention of a number of MTAs.
Typically, an email from Alice to Bob in the above example will have headers like the following:

From: alice@example.com
To: bob@somewhere.com
Return-Path: alice@example.com

Now, we create a custom return-path header by encoding the 'To' address as a combination of prefix-delimiter-hash. The prefix can be a string_replace()d form of the To address; the hash can be generated by the PHP HMAC functions, so that the new email headers become something like what follows:

From: alice@example.com
To: bob@somewhere.com
Return-Path: bounces-bob.somewhere.com-{encode(bob@somewhere.com)}@example.com

Now, the bounces will get directed to our new return-path and can be handled to extract the failing recipient.

Generating a VERP address
The task now is to generate a secure return-path, which is not bulky and cannot be mimicked by an attacker. A very simple VERP address for a mail to bob@somewhere.com will be:

bounces-bob.somewhere.com@example.com

Since this can be easily exploited by an attacker, we need to add additional protection: we also include a hash generated with a secret key along with the address. Please note that the secret key is only visible to the sender, and in no way to the receiver or an attacker. Therefore, a standard VERP address will be of the form:

bounces-{ prefix }-{ hash(prefix, secretkey) }@sender_domain

PHP has its own hash-generating functions that can make things easier. Since PHP's HMACs cannot be decoded, but only compared, the idea will be to place the recipient email ID in the prefix part of the VERP address, along with its hash. On receipt, the prefix and the hash can be compared to validate the integrity of the bounce. We will string-replace the '@' in the recipient email ID and attach it along with the hash.
You need to edit your email headers to generate the custom return-path, and make sure you pass it, prefixed by -f, as the fifth argument to the PHP mail() function, to tell your Exim MTA to set it as the envelope sender.

$to = 'bob@somewhere.com';
$from = 'alice@example.com';
$subject = 'This is the message subject';
$body = 'This is the message body';

/** Altering the return path */
$alteredReturnPath = self::generateVERPAddress( $to );
$headers[ 'Return-Path' ] = $alteredReturnPath;
$envelopeSender = '-f ' . $alteredReturnPath;

mail( $to, $subject, $body, $headers, $envelopeSender );

/** We need to produce a return address of the form
 * bounces-{ prefix }-{ hash(prefix) }@sender_domain, where the
 * prefix is string_replace()d from the To address.
 */
public function generateVERPAddress( $to ) {
    $hashAlgorithm = 'md5';
    $hashSecretKey = 'myKey';
    $emailDomain = 'example.com';
    $addressPrefix = str_replace( '@', '.', $to );
    $verpAddress = hash_hmac( $hashAlgorithm, $to, $hashSecretKey );
    $returnPath = 'bounces' . '-' . $addressPrefix . '-' . $verpAddress . '@' . $emailDomain;
    return $returnPath;
}

Including security features is yet another concern, and can be done effectively by adding the current timestamp value (in UNIX time) to the VERP prefix. This makes it easy for the bounce processor to decode the email delivery time, and also protects the hash against brute-forcing. Decoding and comparing the value of the timestamp with the current timestamp will also help to understand how old the bounce is. Therefore, a more secure VERP address will look like what follows:

bounces-{ to_address }-{ delivery_timestamp }-{ encode( to_address-delivery_timestamp, secretKey ) }@somewhere.com

The current timestamp can be generated in PHP by:

$current_timestamp = time();
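The same VERP construction can be sketched at the shell, assuming OpenSSL is available: PHP's hash_hmac('md5', ...) corresponds to HMAC-MD5 here, and 'myKey' plus the addresses are the illustrative values used above:

```shell
# Sketch: build a bounces-{prefix}-{hmac}@domain address, mirroring
# the generateVERPAddress() logic described above.
to='bob@somewhere.com'
secret='myKey'
domain='example.com'

prefix=$(printf '%s' "$to" | tr '@' '.')   # bob.somewhere.com
hash=$(printf '%s' "$to" \
       | openssl dgst -md5 -hmac "$secret" \
       | awk '{print $NF}')                # 32 hex characters

verp="bounces-${prefix}-${hash}@${domain}"
echo "$verp"
```

A receiving bounce processor can recompute the same HMAC over the decoded prefix and compare, exactly as the PHP validation below does.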


There's still work to do before the email is sent, as the local MTA at example.com may try to set its own custom return-path for messages it transmits. In the example below, we adjust the Exim configuration on the MTA to override this behaviour.

$ sudo nano /etc/exim4/exim4.conf

# Do not remove the Return-Path header
return_path_remove = false
# Remove the errors_to field from the current router configuration.
# This will enable Exim to use the fifth param of PHP mail(),
# prefixed by -f, as the default envelope sender.

Every email ID will correspond to a user_id field in a standard user database, and this can be used instead of an email ID to generate a tidy and easy-to-look-up VERP hash.

Redirect your bounces to a PHP bounce-handling script
We now have a VERP address being generated on every sent email, and it will have all the necessary information we need securely embedded in it. The remaining part of our task is to capture and validate the bounces, which requires redirecting the bounces to a processing PHP script.
By default, every bounce message will reach all the way back to the MTA that sent it, say mx.example.com, as its return-path gets set to [email protected], with or without VERP. The advantage of using VERP is that we will have the encoded failing address, too, somewhere in the bounce.
To get that out of the bounce, we can HTTP POST the email via curl to the bounce processing script, say localhost/handleBounce.php, using an Exim pipe transport, as follows:

$ sudo nano /etc/exim4/exim4.conf

# Suppose you have a recieve_all router that will accept
# all the emails to your domain.
# This can be the system_alias router too.
recieve_all:
  driver = accept
  transport = pipe_transport

# Edit the pipe_transport
pipe_transport:
  driver = pipe
  command = /usr/bin/curl http://localhost/handleBounce.php --data-urlencode "email@-"
  group = nogroup
  return_path_add     # adds Return-Path header for incoming mail
  delivery_date_add   # adds the bounce timestamp
  envelope_to_add     # copies the return path to the To: header of the bounce

The email can be made use of in handleBounce.php by using a simple POST request:

$email = $_POST[ 'email' ];

Decoding the failing recipient from the bounce email
Now that the mail is successfully in the PHP script, our task will be to extract the failing recipient from the encoded email headers. Thanks to Exim configurations like envelope_to_add in the pipe transport (above), the VERP address gets pasted into the To header of the bounce email, and that is the place to look for the failing recipient.
Some common regex functions to extract the headers are:

function extractHeaders( $email ) {
    $bounceHeaders = array();
    $lineBreaks = explode( "\n", $email );
    foreach ( $lineBreaks as $lineBreak ) {
        if ( preg_match( "/^To: (.*)/", $lineBreak, $toMatch ) ) {
            $bounceHeaders[ 'to' ] = $toMatch[1];
        }
        if ( preg_match( "/^Subject: (.*)/", $lineBreak, $subjectMatch ) ) {
            $bounceHeaders[ 'subject' ] = $subjectMatch[1];
        }
        if ( preg_match( "/^Date: (.*)/", $lineBreak, $dateMatch ) ) {
            $bounceHeaders[ 'date' ] = $dateMatch[1];
        }
        if ( trim( $lineBreak ) == "" ) {
            // An empty line denotes that the header part is finished
            break;
        }
    }
    return $bounceHeaders;
}

After extracting the headers, we need to decode the original failed-recipient email ID from the VERP-hashed $bounceHeaders['to'], which involves more or less the reverse of what we did earlier. This will help us validate the bounced email too.

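If you just need a quick look at a bounce outside PHP, the same headers can be pulled out with standard tools. A sketch over a made-up bounce message (the file path and header values are hypothetical; unlike the PHP function, this simple version does not stop at the blank line, which is fine for headers that never appear in the body):

```shell
# Save a fabricated bounce message to a scratch file.
cat > /tmp/bounce.eml <<'EOF'
To: bounces-bob.somewhere.com-1410000000-deadbeef@example.com
Subject: Mail delivery failed: returning message to sender
Date: Mon, 01 Sep 2014 10:00:00 +0000
X-Failed-Recipients: bob@somewhere.com

This message was created automatically by mail delivery software.
EOF

# sed -n 's/^Header: //p' prints only the matching header's value,
# the shell analogue of the extractHeaders() regexes.
to=$(sed -n 's/^To: //p' /tmp/bounce.eml)
failed=$(sed -n 's/^X-Failed-Recipients: //p' /tmp/bounce.eml)
echo "$to"
echo "$failed"
```

The To value is the VERP address to be decoded, and X-Failed-Recipients (added by Exim on permanent failures) gives the failing address directly.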

/**
 * Considering the received $headers[ 'to' ] is of the form
 * bounces-{ to_address }-{ delivery_timestamp }-{ encode( to_address-delivery_timestamp, secretKey ) }@somewhere.com
 */
$hashedTo = $headers[ 'to' ]; // This will hold the VERP address
$to = self::extractToAddress( $hashedTo );

function extractToAddress( $hashedTo ) {
    $timeNow = time();

    // This will help us get the address part of address@domain
    preg_match( '~(.*?)@~', $hashedTo, $hashedSlice );

    // This will help us cut the address part at the symbol '-'
    $hashedAddressPart = explode( '-', $hashedSlice[1] );

    // Now we have the prefix in $hashedAddressPart[0-2] and the hash in $hashedAddressPart[3]
    $verpPrefix = $hashedAddressPart[0] . '-' . $hashedAddressPart[1] . '-' . $hashedAddressPart[2];

    // Extracting the bounce time.
    $bounceTime = $hashedAddressPart[2];

    // Valid time for a bounce to happen. The values can be subtracted to find
    // out the time in between, and even used to set an accept time, say 3 days.
    if ( $bounceTime < $timeNow ) {
        if ( hash_hmac( $hashAlgorithm, $verpPrefix, $hashSecretKey ) === $hashedAddressPart[3] ) {
            // The bounce is valid, as the comparison returns true.
            // Turn only the first '.' back into '@'.
            $to = preg_replace( '/\./', '@', $hashedAddressPart[1], 1 );
            return $to;
        }
    }
}

Taking action on the failing recipient
Now that you have got the failing recipient, the task would be to record their bounce history and take relevant action. A recommended approach would be to maintain a bounce records table in the database, which would store the failed recipient, the bounce timestamp and the failure reason. This can be inserted into the database on every bounce processed, and can be as simple as:

/** extractHeaders is defined above */
$bounceHeaders = self::extractHeaders( $email );
$failureReason = $bounceHeaders[ 'subject' ];
$bounceTimestamp = $bounceHeaders[ 'date' ];
$hashedTo = $bounceHeaders[ 'to' ]; // the VERP address
$failedRecipient = self::extractToAddress( $hashedTo );

$con = mysqli_connect( "database_server", "dbuser", "dbpass", "databaseName" );
mysqli_query( $con, "INSERT INTO bounceRecords( failedRecipient, bounceTimestamp, failureReason )
    VALUES ( '$failedRecipient', '$bounceTimestamp', '$failureReason' )" );
mysqli_close( $con );

Simple tests to differentiate between a permanent and a temporary bounce
One of the greatest challenges while writing a bounce processor is to make sure it handles only the right bounces, or the permanent ones. A bounce processing script that reacts to every single bounce can lead to mass unsubscription of users from the mailing list, and a lot of havoc. Exim helps us here in a great way by including an additional 'X-Failed-Recipients:' header in a permanent bounce email. This key can be checked for in the regex function we wrote earlier, and action can be taken only if it exists.

/**
 * Check if the bounce corresponds to a permanent failure.
 * This can be added to the extractHeaders() function above.
 */
function isPermanentFailure( $email ) {
    $lineBreaks = explode( "\n", $email );
    foreach ( $lineBreaks as $lineBreak ) {
        if ( preg_match( "/^X-Failed-Recipients: (.*)/", $lineBreak, $permanentFailMatch ) ) {
            $bounceHeaders[ 'x-failed-recipients' ] = $permanentFailMatch;
            return true;
        }
    }
    return false;
}

Even today, we have a number of large organisations that send more than 100 emails every minute and still have all bounces directed to /dev/null. This results in far too many emails being sent to undeliverable addresses, and eventually leads to frequent blacklisting of the organisations' mail servers by popular providers like Gmail, Hotmail, etc.
If bounces are directed to an IMAP maildir, the regex functions won't be necessary, as the PHP IMAP library can parse the headers readily for you.

By: Tony Thomas
The author is currently doing his Google SoC project for Wikimedia on handling email bounces effectively. You can contact the author at [email protected]. GitHub: github.com/tonythomas01

Boost the Performance of CloudStack with Varnish

In this article, the author demonstrates how the performance of CloudStack can be dramatically improved by using Varnish. He does so by drawing upon his practical experience with administering SaaS servers at his own firm.

The current cloud inventory for one of the SaaS applications at our firm is as follows:
• Web server: CentOS 6.4 + NGINX + MySQL + PHP + Drupal
• Mail server: CentOS 6.4 + Postfix + Dovecot + SquirrelMail

A quick test on Pingdom showed a load time of 3.92 seconds for a page size of 2.9MB with 105 requests. Tests using Apache Bench (ab -c1 -n500 http://www.bookingwire.co.uk/) yielded almost the same figures: a mean response time of 2.52 seconds.
We wanted to improve the page load times by caching the content upstream, scaling the site to handle much greater HTTP workloads, and implementing a failsafe mechanism.
The first step was to handle all the incoming HTTP requests from anonymous users that were loading our Web server. Since anonymous users are served content that seldom changes, we wanted to prevent these requests from reaching the Web server, so that its resources would be available to handle the requests from authenticated users. Varnish was our first choice to handle this.
Our next concern was to find a mechanism to handle the SSL requests, mainly on the sign-up pages, where we had interfaces to PayPal. Our aim was to include a second Web server that handled a portion of the load, and we wanted to configure Varnish to distribute HTTP traffic using a round-robin mechanism between these two servers. Subsequently, we planned on configuring Varnish in such a way that even if the Web servers were down, the system would continue to serve pages. During the course of this exercise we documented our experiences, and that's what you're reading about here.

A word about Varnish
Varnish is a Web application accelerator, or 'reverse proxy'. It's installed in front of the Web server to handle HTTP requests, and this way it speeds up the site and improves performance significantly. In some cases, it can improve the performance of a site by 300 to 1,000 times.
It does this by caching the Web pages; when visitors come to the site, Varnish serves the cached pages rather than requesting them from the Web server, so the load on the Web server reduces. This method improves the site's performance and scalability. It can also act as a failsafe if the Web server goes down, because Varnish will continue to serve the cached pages in the absence of the Web server.
With that said, let's begin by installing Varnish on a VPS and then connect it to a single NGINX Web server. Then let's add another NGINX Web server, so that we can implement a failsafe mechanism. This will accomplish the performance goals that we stated, so let's get started. For the rest of the article, let's assume that you are using the CentOS 6.4 OS; however, we have provided information for Ubuntu users wherever we felt it was necessary.

Enable the required repositories
First, enable the appropriate repositories. For CentOS, Varnish is available from the EPEL repository. Add this repository to your repos list, but before you do so, you'll need to import the GPG keys. So open a terminal and enter the following commands:

[root@bookingwire sridhar]# wget https://fedoraproject.org/static/0608B895.txt
[root@bookingwire sridhar]# mv 0608B895.txt /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[root@bookingwire sridhar]# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[root@bookingwire sridhar]# rpm -qa gpg*
gpg-pubkey-c105b9de-4e0fd3a3


[Figure 1: Pingdom result. bookingwire.co.uk, tested on May 15 at 15:29:23: 'Your website is faster than 41% of all tested websites']
[Figure 2: Apache Bench result]

After importing the GPG keys, you can enable the repository:

[root@bookingwire sridhar]# wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@bookingwire sridhar]# rpm -Uhv epel-release-6*.rpm

To verify that the new repositories have been added to the repo list, run the following command and check the output to see if the repository has been added:

[root@bookingwire sridhar]# yum repolist

If you happen to use an Ubuntu VPS, then you should use the following commands to enable the repositories:

[root@bookingwire sridhar]# wget http://repo.varnish-cache.org/debian/GPG-key.txt
[root@bookingwire sridhar]# apt-key add GPG-key.txt
[root@bookingwire sridhar]# echo "deb http://repo.varnish-cache.org/ubuntu/ precise varnish-3.0" | sudo tee -a /etc/apt/sources.list
[root@bookingwire sridhar]# sudo apt-get update

Installing Varnish
Once the repositories are enabled, we can install Varnish:

[root@bookingwire sridhar]# yum -y install varnish

On Ubuntu, you should run the following command:

[root@bookingwire sridhar]# sudo apt-get install varnish

After a few seconds, Varnish will be installed. Let's verify the installation before we go further. In the terminal, enter the following command; the output should contain the lines shown after it (we have reproduced only a few lines for the sake of clarity):

[root@bookingwire sridhar]# yum info varnish
Installed Packages
Name    : varnish
Arch    : i686
Version : 3.0.5
Release : 1.el6
Size    : 1.1 M
Repo    : installed

That looks good, so we can be sure that Varnish is installed. Now, let's configure Varnish to start up on boot; in case you have to restart your VPS, Varnish will be started automatically.

[root@bookingwire sridhar]# chkconfig --level 345 varnish on

Having done that, let's now start Varnish:

[root@bookingwire sridhar]# /etc/init.d/varnish start

We have now installed Varnish and it's up and running. Let's configure it to cache the pages from our NGINX server.

Basic Varnish configuration
The Varnish configuration file is located in /etc/sysconfig/varnish for CentOS and /etc/default/varnish for Ubuntu. Open the file in your terminal using the nano or vim text editors. Varnish provides us three ways of configuring it; we prefer Option 3. So for our 2GB server, the configuration is as shown below (the comment lines have been stripped off for the sake of clarity):

NFILES=131072
MEMLOCK=82000
RELOAD_VCL=1
VARNISH_VCL_CONF=/etc/varnish/default.vcl
VARNISH_LISTEN_PORT=80,:443
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_MIN_THREADS=50
VARNISH_MAX_THREADS=1000
VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
VARNISH_STORAGE_SIZE=1G
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"
VARNISH_TTL=120
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_


PORT} \
    -f ${VARNISH_VCL_CONF} \
    -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
    -t ${VARNISH_TTL} \
    -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
    -u varnish -g varnish \
    -p thread_pool_add_delay=2 \
    -p thread_pools=2 \
    -p thread_pool_min=400 \
    -p thread_pool_max=4000 \
    -p session_linger=50 \
    -p sess_workspace=262144 \
    -S ${VARNISH_SECRET_FILE} \
    -s ${VARNISH_STORAGE}"

The first line, when substituted with the variables, will read -a :80,:443 and instruct Varnish to serve all requests made on ports 80 and 443. We want Varnish to serve all HTTP and HTTPS requests.
To set the thread pools, first determine the number of CPU cores that your VPS uses and then update the directives:

[root@bookingwire sridhar]# grep processor /proc/cpuinfo
processor   : 0
processor   : 1

This means you have two cores. The formula to use is:

-p thread_pools=<Number of CPU cores> \
-p thread_pool_min=<800 / Number of CPU cores> \

The -s ${VARNISH_STORAGE} directive translates to -s malloc,1G after variable substitution and is the most important one. This allocates 1GB of RAM for exclusive use by Varnish. You could also specify -s file,/var/lib/varnish/varnish_storage.bin,10G, which tells Varnish to use the file caching mechanism on the disk and that 10GB has been allocated to it. Our suggestion is that you should use the RAM.

Configure the default.vcl file
The default.vcl file is where you will have to make most of the configuration changes, in order to tell Varnish about your Web servers, assets that shouldn't be cached, etc. Open the default.vcl file in your favourite editor:

[root@bookingwire sridhar]# nano /etc/varnish/default.vcl

Since we expect to have two NGINX servers running our application, we want Varnish to distribute the HTTP requests between these two servers. If, for any reason, one of the servers fails, then all requests should be routed to the healthy server. To do this, add the following to your default.vcl file:

backend bw1 {
    .host = "146.185.129.131";
    .probe = {
        .url = "/google0ccdbf1e9571f6ef.html";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}
backend bw2 {
    .host = "37.139.24.12";
    .probe = {
        .url = "/google0ccdbf1e9571f6ef.html";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}
backend bw1ssl {
    .host = "146.185.129.131";
    .port = "443";
    .probe = {
        .url = "/google0ccdbf1e9571f6ef.html";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}
backend bw2ssl {
    .host = "37.139.24.12";
    .port = "443";
    .probe = {
        .url = "/google0ccdbf1e9571f6ef.html";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

director default_director round-robin {
    { .backend = bw1; }
    { .backend = bw2; }
}

director ssl_director round-robin {
    { .backend = bw1ssl; }
    { .backend = bw2ssl; }
}

sub vcl_recv {
    if (server.port == 443) {
        set req.backend = ssl_director;
    } else {
        set req.backend = default_director;
    }
}
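As a quick aside, the thread-pool sizing rule given earlier (one pool per CPU core, with thread_pool_min set to 800 divided by the core count) can be computed directly. A sketch (Linux-specific, since it reads /proc/cpuinfo; the printed numbers will vary per machine):

```shell
# Count CPU cores the way the article does, then apply the
# thread_pools / thread_pool_min formula.
cores=$(grep -c '^processor' /proc/cpuinfo)
echo "-p thread_pools=${cores} -p thread_pool_min=$((800 / cores))"
```

On the two-core VPS shown above, this would suggest thread_pools=2 and thread_pool_min=400, matching the DAEMON_OPTS values.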

You might have noticed that we have used public IP addresses, since we had not enabled private networking within our servers. You should define the 'backends', one each for the type of traffic you want to handle. Hence, we have one set to handle HTTP requests and another to handle HTTPS requests.
It's a good practice to perform a health check to see if the NGINX Web servers are up. In our case, we kept it simple by checking if the Google webmaster file was present in the document root. If it isn't present, then Varnish will not include the Web server in the round-robin league and won't redirect traffic to it.

.probe = { .url = "/google0ccdbf1e9571f6ef.html";

The above directive checks for the existence of this file at each backend. You can use this to take an NGINX server out intentionally, either to update the version of the application or to run scheduled maintenance checks. All you have to do is rename this file so that the check fails!
In spite of our best efforts to keep our servers sterile, there are a number of reasons that can cause a server to go down. Two weeks back, we had one of our servers go down, taking more than a dozen sites with it, because the master boot record of CentOS was corrupted. In such cases, Varnish can handle the incoming requests even if your Web server is down. The NGINX Web server sets an Expires header (HTTP 1.0) and the max-age (HTTP 1.1) for each page that it serves; if set, the max-age takes precedence over the Expires header. Varnish is designed to request the backend Web servers for new content every time the content in its cache goes stale. However, in a scenario like the one we faced, it's impossible for Varnish to obtain fresh content. In this case, setting the 'grace' in the configuration file allows Varnish to serve (stale) content even if the Web server is down. To have Varnish serve the (stale) content, add the following lines to your default.vcl:

sub vcl_recv {
    set req.grace = 6h;
}

sub vcl_fetch {
    set beresp.grace = 6h;
}

if (!req.backend.healthy) {
    unset req.http.Cookie;
}

The last segment tells Varnish to strip all cookies for an authenticated user and serve an anonymous version of the page if all the NGINX backends are down.
Most browsers support encoding but report it differently. NGINX sets the encoding as Vary: Cookie, Accept-Encoding. If you don't handle this, Varnish will cache the same page once for each type of encoding, thus wasting server resources; in our case, it would gobble up memory. So add the following commands to vcl_recv to have Varnish cache the content only once:

if (req.http.Accept-Encoding) {
    if (req.http.Accept-Encoding ~ "gzip") {
        # If the browser supports it, we'll use gzip.
        set req.http.Accept-Encoding = "gzip";
    } else if (req.http.Accept-Encoding ~ "deflate") {
        # Next, try deflate if it is supported.
        set req.http.Accept-Encoding = "deflate";
    } else {
        # Unknown algorithm. Remove it and send unencoded.
        unset req.http.Accept-Encoding;
    }
}

Now, restart Varnish:

[root@bookingwire sridhar]# service varnish restart

Additional configuration for content management systems, especially Drupal
A CMS like Drupal throws up additional challenges when configuring the VCL file, and we'll need to include additional directives to handle the various quirks. You can modify the directives below to suit the CMS that you are using. When using a CMS like Drupal, if there are files that you don't want cached for some reason, add the following commands to your default.vcl file in the vcl_recv section:

if (req.url ~ "^/status\.php$" ||
    req.url ~ "^/update\.php$" ||
    req.url ~ "^/ooyala/ping$" ||
    req.url ~ "^/admin/build/features" ||
    req.url ~ "^/info/.*$" ||
    req.url ~ "^/flag/.*$" ||
    req.url ~ "^.*/ajax/.*$" ||
    req.url ~ "^.*/ahah/.*$") {
    return (pass);
}

Varnish sends the length of the content (see the varnishlog output below) so that browsers can display the progress bar. However, in some cases, when Varnish is unable to tell the browser the specified content-length (like streaming audio), you will have to pass the request directly to the Web server. To do this, add the following command to your default.vcl:


if (req.url ~ "^/content/music/$") {
    return (pipe);
}

Drupal has certain files that shouldn't be accessible to the outside world, e.g., cron.php or install.php. However, you should be able to access these files from a set of IPs that your development team uses. At the top of default.vcl, include the following, replacing the IP address with your own (an address block like "192.168.1.0"/24 also works):

acl internal {
    "192.168.1.38";
}

Now, to prevent the outside world from accessing these pages, we'll throw an error. So, inside the vcl_recv function, include the following:

if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
    error 404 "Page not found.";
}

If you prefer to redirect to an error page, then use this instead:

if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
    set req.url = "/404";
}

Our approach is to cache all assets like images, JavaScript and CSS for both anonymous and authenticated users. So include this snippet inside vcl_recv to unset the cookie set by Drupal for these assets:

if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js|html|htm)(\?[a-z0-9]+)?$") {
    unset req.http.Cookie;
}

Drupal throws up a challenge, especially when you have enabled several contributed modules. These modules set cookies, thus preventing Varnish from caching assets. Google Analytics, a very popular module, sets a cookie. To remove it, include the following in your default.vcl:

set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");

If there are other modules that set JavaScript cookies, then Varnish will cease to cache those pages; in which case, you should track down the cookie and update the regex above to strip it.
Once you have done that, head to /admin/config/development/performance, enable the Page Cache setting and set a non-zero time for 'Expiration of cached pages'. Then update settings.php with the following snippet, replacing the IP address with that of your machine running Varnish:

$conf['reverse_proxy'] = TRUE;
$conf['reverse_proxy_addresses'] = array('37.139.8.42');
$conf['page_cache_invoke_hooks'] = FALSE;
$conf['cache'] = 1;
$conf['cache_lifetime'] = 0;
$conf['page_cache_maximum_age'] = 21600;

You can install the Drupal Varnish module (http://www.drupal.org/project/varnish), which provides better integration with Varnish, and include the following lines in your settings.php:

$conf['cache_backends'] = array('sites/all/modules/varnish/varnish.cache.inc');
$conf['cache_class_cache_page'] = 'VarnishCache';

Checking if Varnish is running and serving requests
Instead of logging to a normal log file, Varnish logs to a shared memory segment. Run varnishlog from the command line, access your IP address/URL from the browser, and view the Varnish messages. It is not uncommon to see a '503 Service Unavailable' message; this means that Varnish is unable to connect to NGINX, in which case you will see an error line in the log (only the relevant portion of the log is reproduced for clarity):

[root@bookingwire sridhar]# varnishlog

12 StatSess     c 122.164.232.107 34869 0 1 0 0 0 0 0 0
12 SessionOpen  c 122.164.232.107 34870 :80
12 ReqStart     c 122.164.232.107 34870 1343640981
12 RxRequest    c GET
12 RxURL        c /
12 RxProtocol   c HTTP/1.1
12 RxHeader     c Host: 37.139.8.42
12 RxHeader     c User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:27.0) Gecko/20100101 Firefox/27.0
12 RxHeader     c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
12 RxHeader     c Accept-Language: en-US,en;q=0.5
12 RxHeader     c Accept-Encoding: gzip, deflate
12 RxHeader     c Referer: http://37.139.8.42/
12 RxHeader     c Cookie: __zlcmid=OAdeVVXMB32GuW
12 RxHeader     c Connection: keep-alive
12 FetchError   c no backend connection

70 | September 2014 | OPEN SOURCE For You | www.OpenSourceForU.com www.OpenSourceForU.com | OPEN SOURCE For You | September 2014 | 71 Admin How To

12 VCL_call c error
12 TxProtocol c HTTP/1.1
12 TxStatus c 503
12 TxResponse c Service Unavailable
12 TxHeader c Server: Varnish
12 TxHeader c Retry-After: 0
12 TxHeader c Content-Type: text/html; charset=utf-8
12 TxHeader c Content-Length: 686
12 TxHeader c Date: Thu, 03 Apr 2014 09:08:16 GMT
12 TxHeader c X-Varnish: 1343640981
12 TxHeader c Age: 0
12 TxHeader c Via: 1.1 varnish
12 TxHeader c Connection: close
12 Length c 686

Resolve the error and you should have Varnish running. But that isn't enough—we should check if it's caching the pages. Fortunately, the folks at the following URL have made it simple for us.

Check if Varnish is serving pages
Visit http://www.isvarnishworking.com/, provide your URL/IP address and you should see your Gold Star! (See Figure 3.) If you don't, but instead see other messages, it means that Varnish is running but not caching. Then you should look at your code and ensure that it sends the appropriate headers. If you are using a content management system, particularly Drupal, you can check the additional parameters in the VCL file and set them correctly. You also have to enable caching in the performance page.

Figure 3: Varnish status result

Running the tests
Running Pingdom tests showed improved response times of 2.14 seconds (Figure 4). There was an improvement in the response time in spite of the payload of the page increasing from 2.9MB to 4.1MB. If you are wondering why it increased, remember, we switched the site to a new theme. Apache Bench reports better figures at 744.722 ms (Figure 5).

Figure 4: Pingdom test result after configuring Varnish
Figure 5: Apache Bench result after configuring Varnish

Configuring client IP forwarding
Check the IP address for each request in the access logs of your Web servers. For NGINX, the access logs are available at /var/log/nginx and for Apache, they are available at /var/log/httpd or /var/log/apache2, depending on whether you are running CentOS or Ubuntu.
It's not surprising to see the same IP address (that of the Varnish machine) for each request. Such a configuration will throw all Web analytics out of gear. However, there is a way out. If you run NGINX, try out the following procedure. Determine the NGINX configuration that you currently run by executing the command below in your command line:

[root@bookingwire sridhar]# nginx -V

Look for --with-http_realip_module. If it is available, add the following to your NGINX configuration file in the http section. Remember to replace the IP address with that of your Varnish machine. If Varnish and NGINX run on the same machine, do not make any changes.

set_real_ip_from 127.0.0.1;
real_ip_header X-Forwarded-For;

Restart NGINX and check the logs once again. You will see the client IP addresses. If you are using Drupal, then include the following line in settings.php:

$conf['reverse_proxy_header'] = 'HTTP_X_FORWARDED_FOR';

Other Varnish tools
Varnish includes several tools to help you as an administrator:
varnishstat -1 -f n_lru_nuked: This shows the number of objects nuked from the cache.
varnishtop: This reads the logs and displays the most frequently accessed URLs. With a number of optional flags, it can display a lot more information.
varnishhist: Reads the shared memory logs, and displays a histogram showing the distribution of the last N requests on the basis of their processing.
varnishadm: A command line utility for Varnish.
varnishstat: Displays the statistics.

Dealing with SSL: SSL-offloader, SSL-accelerator and SSL-terminator
SSL termination is probably the most misunderstood term in the whole mix. The mechanism of SSL termination is employed in situations where the Web traffic is heavy. Administrators usually have a proxy to handle SSL requests before they hit Varnish. The SSL requests are decrypted and the unencrypted requests are passed to the Web servers. This is employed to reduce the load on the Web servers by moving the decryption and other cryptographic processing upstream.
Since Varnish by itself does not process or understand SSL, administrators employ additional mechanisms to terminate SSL requests before they reach Varnish. Pound (http://www.apsis.ch/pound) and Stud (https://github.com/bumptech/stud) are reverse proxies that handle SSL termination. Stunnel (https://www.stunnel.org/) is a program that acts as a wrapper that can be deployed in front of Varnish. Alternatively, you could also use another NGINX in front of Varnish to terminate SSL.
However, in our case, since only the sign-in pages required SSL connections, we let Varnish pass all SSL requests to our backend Web server.

Additional repositories
There are other repositories from where you can get the latest release of Varnish:

wget repo.varnish-cache.org/redhat/varnish-3.0/el6/noarch/varnish-release/varnish-release-3.0-1.el6.noarch.rpm
rpm --nosignature -i varnish-release-3.0-1.el6.noarch.rpm

If you have the EPEL repo and the Varnish cache repo enabled, install Varnish by specifying the repository:

yum install varnish --enablerepo=epel
yum install varnish --enablerepo=varnish-3.0

Our experience has been that Varnish reduces the number of requests sent to the NGINX server by caching assets, thus improving page response times. It also acts as a failover mechanism if the Web server fails.
We had over 55 JavaScript files (two as part of the theme and the others as part of the modules) in Drupal, and we aggregated JavaScript by setting the flag in the Performance page. We found a 50 per cent drop in the number of requests; however, some of the JavaScript files were not loaded on a few pages and we had to disable the aggregation. This is something we are investigating. Our recommendation is not to choose to aggregate JavaScript files in your Drupal CMS. Instead, use the Varnish module (https://drupal.org/project/varnish). The module allows you to set long object lifetimes (Drupal doesn't set it beyond 24 hours), and use Drupal's existing cache expiration logic to dynamically purge Varnish when things change.
You can scale this architecture to handle higher loads either vertically or horizontally. For vertical scaling, resize your VPS to include additional memory and make that available to Varnish using the -s directive. To scale horizontally, i.e., to distribute the requests between several machines, you could add additional Web servers and update the round robin directives in the VCL file. You can take it a bit further by including HAProxy right upstream and have HAProxy route requests to Varnish, which then serves the content or passes it downstream to NGINX. To remove a Web server from the round robin league, you can improve upon the example that we have mentioned by writing a small PHP snippet to automatically shut down or exit() if some checks fail.

References
[1] https://www.varnish-cache.org/
[2] https://www.varnish-software.com/static/book/index.html
[3] http://www.lullabot.com/blog/article/configuring-varnish-high-availability-multiple-web-servers

By: Sridhar Pandurangiah
The author is the co-founder and director of Sastra Technologies, a start-up engaged in providing EDI solutions on the cloud. He can be contacted at [email protected] / [email protected]. He maintains a technical blog at sridharpandu.wordpress.com

Use Wireshark to Detect ARP Spoofing

The first two articles in the series on Wireshark, which appeared in the July and August 2014 issues of OSFY, covered a few simple protocols and various methods to capture traffic in a 'switched' environment. This article describes an attack called ARP spoofing and explains how you could use Wireshark to capture it.

Imagine an old Hindi movie where the villain and his subordinate are conversing over the telephone, and the hero intercepts this call to listen in on their conversation – a perfect 'man in the middle' (MITM) scenario. Now extend this to the network, where an attacker intercepts communication between two computers.
Here are two possibilities with respect to what an attacker can do to intercepted traffic:
1. Passive attacks (also called eavesdropping, or only listening to the traffic): These can reveal sensitive information such as clear text (unencrypted) login IDs and passwords.
2. Active attacks: These modify the traffic and can be used for various types of attacks such as replay, spoofing, etc.
An MITM attack can be launched against cryptographic systems, networks, etc. In this article, we will limit our discussions to MITM attacks that use ARP spoofing.

ARP spoofing
Joseph Goebbels, Nazi Germany's minister for propaganda, famously said, "If you tell a lie big enough and keep repeating it, people will eventually come to believe it. The lie can be maintained only for such time as the state can shield the people from the political, economic and/or military consequences of the lie. It thus becomes vitally important for the state to use all of its powers to repress dissent, for the truth is the mortal enemy of the lie, and thus by extension, the truth is the greatest enemy of the state."
So let us interpret this quote by a leader of the infamous Nazi regime from the perspective of the ARP protocol: if you repeatedly tell a device who a particular MAC address belongs to, the device will eventually believe you, even if this is not true. Further, the device will remember this MAC address only as long as you keep telling the device about it. Thus, not securing an ARP cache is dangerous to network security.

Note: From the network security professional's view, it becomes absolutely necessary to monitor ARP traffic continuously and limit it to below a threshold. Many managed switches and routers can be configured to monitor and control ARP traffic below a threshold.

An MITM attack is easy to understand using this context. Attackers trying to listen to traffic between any two devices, say a victim's computer system and a router, will launch an ARP spoofing attack by sending unsolicited ARP reply packets (what this means is an ARP reply packet sent out without receiving an ARP request) with the following source addresses:
• Towards the victim's computer system: the router's IP address and the attacker's PC MAC address;
• Towards the router: the victim's computer IP address and the attacker's PC MAC address.
After receiving such packets continuously, due to ARP protocol characteristics, the ARP caches of the router and the victim's PC will be poisoned as follows:
• Router: the MAC address of the attacker's PC registered against the IP address of the victim;
• Victim's PC: the MAC address of the attacker's PC registered against the IP address of the router.

The Ettercap tool
ARP spoofing is the most common type of MITM attack, and can be launched using the Ettercap tool available under Linux (http://ettercap.github.io/ettercap/downloads.html). A few sites claim to have Windows executables; I have never tested these, though. You may install the tool on any Linux distro, or use distros such as Kali Linux, which has it bundled. The tool has command line options, but its GUI is easier and can be started by using:

ettercap -G

Figure 1: Ettercap menus

Launch the MITM ARP spoofing attack by using Ettercap menus (Figure 1) in the following sequence (words in italics indicate Ettercap menus):
• Sniff is unified sniffing, and selects the interface to be sniffed (for example, eth0 for a wired network).
• Hosts scans for hosts. It scans for all active IP addresses in the eth0 network.
• The hosts list displays the list of scanned hosts.
• The required hosts are added to Target1 and Target2. An ARP spoofing attack will be performed so as to read traffic between all hosts selected under Target1 and Target2.
• Targets gives the current targets. It verifies selection of the correct targets.
• MITM – ARP poisoning: 'Sniff remote connections' will start the attack.
The success of the attack can be confirmed as follows:
• In the router, check the ARP cache (for a Cisco router, the command is show ip arp).
• In the victim PC, use the arp -a command.
Figure 2 gives the output of the command before and after a successful ARP spoofing attack.

Figure 2: Successful ARP poisoning
Figure 3: Wireshark capture on the attacker's PC–ARP packets
Figure 4: Wireshark capture on the attacker's PC–packets sniffed from the victim's PC and router

The attacker PC captures traffic using Wireshark to check unsolicited ARP replies. Once the attack is successful, the traffic between the two targets will also be captured. Be careful–if traffic from the victim's PC contains clear text authentication packets, the credentials could be revealed. Note that Wireshark gives information such as 'Duplicate

use of IP is detected' under the 'Info' column once the attack is successful.
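That 'Duplicate use of IP detected' warning boils down to spotting one MAC address answering for more than one IP. The same check can be run over a parsed ARP table; the following Python sketch is our own illustration (not Wireshark code), with made-up addresses:

```python
from collections import defaultdict

def find_suspect_macs(arp_entries):
    """arp_entries: (ip, mac) pairs, e.g. parsed from 'arp -a' output."""
    ips_by_mac = defaultdict(set)
    for ip, mac in arp_entries:
        ips_by_mac[mac.lower()].add(ip)
    # A MAC registered against several IPs is how a poisoning attacker appears
    return {mac: sorted(ips) for mac, ips in ips_by_mac.items() if len(ips) > 1}

table = [
    ("192.168.0.1", "aa:bb:cc:dd:ee:07"),  # router entry, poisoned
    ("192.168.0.7", "aa:bb:cc:dd:ee:07"),  # attacker's own entry
    ("192.168.0.9", "aa:bb:cc:dd:ee:99"),  # unaffected host
]
print(find_suspect_macs(table))  # {'aa:bb:cc:dd:ee:07': ['192.168.0.1', '192.168.0.7']}
```

Note that a few legitimate setups (failover clusters, some virtualisation hosts) also share a MAC across IPs, so treat a hit as a prompt for investigation rather than proof of an attack.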

Here is how the actual packet travels and is captured after a successful ARP poisoning attack:
• When the packet from the victim PC starts for the router, at Layer 2, the poisoned MAC address of the attacker (instead of the original router MAC) is inserted as the target MAC; thus the packet reaches the attacker's PC.
• The attacker sees this packet and forwards the same to the router with the correct MAC address.
• The reply from the router is logically sent towards the spoofed destination MAC address of the attacker's system (rather than the victim's PC). It is captured and forwarded by the attacker to the victim's PC.
• In between, the sniffer software, Wireshark, which is running on the attacker's PC, reads this traffic.

Here are various ways to prevent ARP spoof attacks:
• Monitor 'arpwatch' logs on Linux
• Use static ARP commands on Windows and Ubuntu as follows:
  - Windows: arp -s DeviceIP DeviceMAC
  - Ubuntu: arp -i eth0 -s DeviceIP DeviceMAC
• Control ARP packets on managed switches

Can MITM ARP spoofing be put to fruitful use?
Definitely! Consider capturing packets from a system suspected of malware (virus) infection in a switched environment. There are two ways to do this—use a wiretap or MITM ARP spoofing. Sometimes, you may not have a wiretap handy or may not want the system to go offline even for the time required to connect the wiretap. Here, MITM ARP spoofing will definitely serve the purpose.

Note: This attack is specifically targeted towards OSI Layer 2–the data link layer; thus, it can be executed only from within your network. Be assured, this attack cannot be used sitting outside the local network to sniff packets between your computer and your bank's Web server – the attacker must be within the local network.

Before we conclude, let us understand an important Wireshark feature called capture filters. We did go through the basics of display filters in the previous article. But, in a busy network, capturing all traffic and using display filters to see only the desired traffic may require a lot of effort. Wireshark's capture filters provide a way out.
In the beginning, before selecting the interface, you can click on Capture Options and use capture filters to capture only the desired traffic. Click on the Capture filter button to see various filters, such as ARP, No ARP, TCP only, UDP only, traffic from specific IP addresses, and so on (Figure 5). Select the desired filter and Wireshark will capture only the defined traffic. For example, MITM ARP spoofing can be captured using the ARP filter from Capture filters instead of 'Display filtering' the entire captured traffic.

Figure 5: Wireshark's capture filter

Packets captured using the test scenarios described in this series of articles are capable of revealing sensitive information such as login names and passwords. Using ARP spoofing, in particular, will disturb the network temporarily. Make sure to use these techniques only in a test environment. If at all you wish to use them in a live environment, do not forget to avail explicit written permission before doing so.

Keep a watch on this column for exciting Wireshark features!

By: Rajesh Deodhar
The author has been an IS auditor and network security consultant-trainer for the last two decades. He is a BE in Industrial Electronics, and holds CISA, CISSP, CCNA and DCL certifications. Please feel free to contact him at [email protected]


Make Your Own PBX with Asterisk

This article, the first of a multi-part series, familiarises readers with Asterisk, which is a software implementation of a private branch exchange (PBX).

Asterisk is a revolutionary open source platform started by Mark Spencer, and has shaken up the telecom world. This series is meant to familiarise you with it, and educate you enough to be a part of it in order to enjoy its many benefits.
If you are a technology freak, you will be able to make your own PBX for your office or home after going through this series. As a middle level manager, you will be able to guide a techie to do the job, while senior level managers with a good appreciation of the technology and minimal costs involved would be in a position to direct somebody to set up an Asterisk PBX. If you are an entrepreneur, you can adopt one of the many business models with Asterisk. As you will see, it is worthwhile to at least evaluate the option.

History
In 1999, Mark Spencer of Digium fame started a Linux technical support company with US$ 4000. Initially, he had to be very frugal; so buying one of those expensive PBXs was unthinkable. Instead, he started programming a PBX for his requirements. Later, he published the software as open source and a lot of others joined the community to further develop the software. The rest is history.

The statistics
Today, Asterisk claims to have 2 million downloads every year, and is running on over 1 million servers, with 1.3 million new endpoints created annually. A 2012 statistic by Eastern Management claims that 18 per cent of all PBX lines in North America are open source-based, and the majority of them are on Asterisk. Indian companies have also been adopting Asterisk for a few years now. The initial thrust was for international call centres. A large majority of the smaller call centres (50-100 seater) use 'Vicidial', another open source application based on Asterisk. IP PBX penetration in the Indian market is not very high due to certain regulatory misinterpretations. Anyhow, this unclear environment is gradually gaining clarity, and very soon we will see astronomical growth of Asterisk in the Indian market.


The call centre boom also led to the development of the Asterisk ecosystem comprising Asterisk-based product companies, software supporters, hardware resellers, etc, across India. This presents a huge opportunity for entrepreneurs.

Mark Spencer, the founder of Asterisk

Some terminology
Before starting, I would like to introduce some basic terms for the benefit of readers who are novices in this field. Let us start with the PBX or private branch exchange, which is the heart of all corporate communication. All the telephones seen in an office environment are connected to the PBX, which in turn connects you to the outside world. The internal telephones are called subscribers and the external lines are called trunk lines.
The trunk lines connect the PBX to the outside world or the PSTN (Public Switched Telephony Network). Analogue trunks (FXO–Foreign eXchange Office) are based on very old analogue technology, which is still in use in our homes and in some companies. Digital trunk technology or ISDN (Integrated Services Digital Network) evolved in the 80s with mainly two types of connections – BRI (Basic Rate Interface) for SOHO (small office/home office) use, and PRI (Primary Rate Interface) for corporate use. In India, analogue trunks are used for SOHO trunking, but BRI is no longer used at all. Anyhow, PRI is quite popular among companies. IP/SIP (Internet Protocol/Session Initiation Protocol) trunking has been used by international call centres for quite some time. Now, many private providers like Tata Telecom have started offering SIP trunking for domestic calls also. The option of GSM trunking through a GSM gateway using SIM cards is also quite popular, due to the flexibility offered in costs, prepaid options and network availability.
The users connected to the PBX are called subscribers. Analogue telephones (FXS – Foreign eXchange Subscriber) are still very commonly used and are the cheapest. As Asterisk is an IP PBX, we need a VoIP FXS gateway to convert the IP signals to analogue signals. Asterisk supports IP telephones, mainly using SIP.
Nowadays, Wi-Fi clients are available even for smartphones, which enable the latter to work like extensions. These clients bring in a revolutionary transformation to the telephony landscape–analogous to paperless offices and telephone-less desks. The same smartphone used to make calls over GSM networks becomes a dual-purpose phone–also working like a desk extension. Just for a minute, consider the limitless possibilities enabled by this new transformed extension phone.
• Extension roaming: Employees can roam about anywhere in the office—participate in a conference, visit a colleague, doctors can visit their in-patients—and yet receive calls as if they were seated at their desks.
• External extensions: The employees could be at home, at a friend's house, or even out making a purchase, and still receive the same calls, as if at their desks.
• Increased call accountability: Calls can be recorded and monitored for quality or security purposes at the PBX.
• Lower telephone costs: The volume of calls passing through the PBX makes it possible to negotiate with the service provider for better rates.
The advantages that a roaming extension brings are many, which we will explore in more detail in subsequent editions.
Let us look into the basics of Asterisk. "Asterisk is like a box of Lego blocks for people who want to create communications applications. It includes all the building blocks needed to create a PBX, an IVR system, a conference bridge and virtually any other communications app you can imagine," says an excerpt from asterisk.org.
Asterisk is actually a piece of software. In very simple and generic terms, the following are the steps required to create an application based on it:
1. Procure standard hardware.
2. Install Linux.
3. Download the Asterisk software.
4. Install Asterisk.
5. Configure it.
6. Procure hardware interfaces for the trunk line and configure them.
7. Procure hardware for subscribers and configure them.
8. You're then ready to make your calls.
Procure standard desktop or server hardware, based on Pentium, Xeon, i3, etc. RAM is an important factor, and could be 2GB, 4GB or 8GB. These two factors decide the number of concurrent calls. Hard disk capacity of 500GB or 1TB is mainly for space to store voice files for VoiceMail or VoiceLogger. The hard disk's speed also influences the concurrent calls.
The next step is to choose a suitable OS—Fedora, Debian, CentOS or Ubuntu are well suited for this purpose. After this, the Asterisk software may be downloaded from www.asterisk.org/downloads/. Either the newest LTS (Long Term Support) release or the latest standard version can be downloaded. LTS versions are released once in four years. They are more stable, but have fewer features than the standard version, which is released once a year. Once the software is downloaded, the installation may be carried out as per the instructions provided. We'll go into the details of the installation in later sessions.
The download page also offers the option to download AsteriskNow, which is an ISO image of Linux, Asterisk and the FreePBX GUI. If you prefer a very quick and simple installation without much flexibility, you may choose this variant.
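Step 5 above, configuration, is done in plain text files under /etc/asterisk; the dial plan lives in extensions.conf. As a small taste of what such configuration looks like, here is an illustrative fragment (the context name and extension numbers are our own, not from the article):

```
; extensions.conf -- illustrative dial plan fragment
[internal]
; Dialling 100 answers the call, plays a greeting and hangs up
exten => 100,1,Answer()
 same => n,Playback(hello-world)
 same => n,Hangup()

; Dialling 6001 rings the SIP phone registered as 6001, for up to 20 seconds
exten => 6001,1,Dial(SIP/6001,20)
```

Each line names an extension, a priority and an application to run; GUIs such as FreePBX generate this kind of configuration for you.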


After the installation, one needs to create the trunks and users, and set up some more features, to be able to start using the system. The administrators can make these configurations directly in the dial plan, or there are GUIs like FreePBX, which enable easy administration.
Depending on the type of trunk chosen, we need to procure hardware. If we are connecting a normal analogue line, an FXO card with one port needs to be procured, in PCI or PCIe format, depending on the slots available on the server. After inserting the card, it has to be configured. Similarly, if you have to connect analogue phones, you need to procure FXS gateways. IP phones can be directly connected to the system over the LAN.
Exploring the PBX further, you will be astonished by the power of Asterisk. It comes with a built-in voice logger, which can be customised to record either all calls or those from selective people. In most proprietary PBXs, this would have been an additional component. Asterisk not only provides a voice mail box, but also has the option to convert the voice mail to an attachment that can be sent to you as an email. The Asterisk IVR is very powerful; it has multiple levels, digit collection, database and Web-service integration, and speech recognition.
There are also lots of applications based on Asterisk, like Vicidial, which is a call-centre suite for inbound and outbound dialling. For the latter, one can configure campaigns with lists of numbers, dial these numbers in predictive dialling mode and connect to the agents. Similarly, inbound dialling can also be configured with multiple agents, and the calls routed based on multiple criteria like the region, skills, etc.
Asterisk also easily integrates with multiple enterprise applications (like CRM and ERP) over CTI (computer telephony interfaces) like TAPI (Telephony API) or by using simple URL integration.
O'Reilly has a book titled 'Asterisk: The Future of Telephony', which can be downloaded. I would like to take you through the power of Asterisk in subsequent issues, so that you and your network can benefit from this remarkable product, which is expected to change the telephony landscape of the future.

By: Devasia Kurian
The author is the founder and CEO of *astTECS.

Please share your feedback/ thoughts/ views via email at [email protected]

How to Make Your USB Boot with Multiple ISOs

This DIY article is for systems admins and software hobbyists, and teaches them how to create a bootable USB that is loaded with multiple ISOs.

Systems administrators and other Linux enthusiasts use multiple CDs or DVDs to boot and install operating systems on their PCs. But it is somewhat difficult and costly to maintain one CD or DVD for each OS (ISO image file) and to carry around all these optical disks; so, let's look at the alternative—a multi-boot USB.
The Internet provides many ways (in Windows and Linux) to convert a USB drive into a bootable USB. Usually, one can create a bootable USB that contains a single OS. So, if you want to change the OS (ISO image), you have to format the USB. To avoid formatting the USB each time the ISO is changed, use Easy2Boot. In my case, the RMPrepUSB website saved me from unnecessarily formatting the USB drive by introducing the Easy2Boot option. Easy2Boot is open source - it consists of plain text batch files and open source grub4dos utilities. It has no proprietary software.

Making the USB drive bootable
To make your USB bootable, just connect it to your Linux system. Open the disk utility or gparted tool and format it as Fat32 (0x0c). You can choose ext2/ext3 file systems also, but they will not load some OSs. So, Fat32 is the best choice for most of the ISOs.
Now download grub4dos-0.4.5c (not grub4dos-0.4.6a) from https://code.google.com/p/grub4dos-chenall/downloads/list and extract it on the desktop.
Next, install grub4dos on the MBR of your USB stick with a zero time-out, by typing the following command at the terminal:

sudo ~/Desktop/grub4dos-0.4.5c/bootlace.com --time-out=0 /dev/sdb

Note: You can change the path to your grub4dos folder. sdb is your USB drive, and can be checked by the df command in a terminal or by using the gparted or disk utility tools.

Copying Easy2Boot files to the USB
Your pen drive is ready to boot, but we need menu files, which are necessary to detect the .ISO files in your USB.


The menu (.mnu) files and other boot-related files can be downloaded from the Easy2Boot website. Extract the Easy2Boot file to your USB drive and you can observe the different folders that are related to different operating systems and applications (Figure 1). Now, just place the corresponding .ISO file in the corresponding folder. For example, all the Linux-related .ISO files should be placed in the Linux folder, all the backup-Linux related files should be placed in the corresponding folder, utilities should be placed in the utilities folder, and so on.

Figure 1: Folders for different OSs

Your USB drive is now ready to be loaded with any (almost all) Linux image files, backup utilities and some other Windows-related .ISOs without formatting it. After placing your required image files, either installation ISOs or live ISOs, you need to defragment the folders in the USB drive. To defrag your USB drive, download the defragfs-1.1.1.gz file from http://defragfs.sourceforge.net/download.html and extract it to the desktop. Now run the following command at the terminal:

sudo umount /dev/sdb1

sdb1 is the partition on my USB which has the E2B files.

sudo mkdir ~/Desktop/usb && sudo mount /dev/sdb1 ~/Desktop/usb
sudo perl ~/Desktop/defragfs ~/Desktop/usb -f

That's it. Your USB drive is ready with a number of ISO files to boot on any system. Just run the defragfs command every time you modify (add or remove) the ISO files in the USB to make all the files in the drive contiguous.

Using the QEMU emulator for testing
After completing the final stage, test how well your USB boots with lots of .ISOs loaded on it, using the QEMU tool. Alternatively, you can choose any of the virtualisation tools like VirtualBox or VMware. We used QEMU (it is easy but somewhat slow) on our Linux machine by typing the following command at the terminal:

sudo qemu -m 512M /dev/sdb

Your USB will boot and the Easy2Boot OS selection menu will appear (Figure 2). Choose the OS you want, which is placed under the corresponding folder. You can use your USB in real time, and can add or remove the .ISOs in the corresponding folders simply by copy-pasting. You can use the same USB for copying documents and other files by making all the files that belong to Easy2Boot contiguous.

Figure 2: Easy2Boot OS selection menu
Figure 3: Ubuntu boot menu

Note: The loading of every .ISO file in the corresponding folder is based only on the .mnu file for that .ISO. So, by creating your own .mnu file you can add your own flavour to the USB menu list. For further details and help regarding .mnu file creation, just visit http://www.rmprepusb.com/tutorials.

References
[1] http://www.rmprepusb.com/tutorials
[2] https://code.google.com/p/grub4dos-chenall/downloads/list
[3] http://www.easy2boot.com/download/
[4] http://defragfs.sourceforge.net/download.html

By: Gaali Mahesh and Nagaram Suresh Kumar
The authors are assistant professors at VNITSW (Vignan's Nirula Institute of Technology and Science for Women, Andhra Pradesh). They blog at surkur.blogspot.in, where they share some tech tricks and their practical experiences with open source. You can reach them at [email protected] and [email protected].

80 | September 2014 | OPEN SOURCE For You | www.OpenSourceForU.com

Open Gurus: Let's Try

How to Cross Compile the Linux Kernel with Device Tree Support

This article is intended for those who would like to experiment with the many embedded boards in the market but do not have access to them for one reason or the other. With the QEMU emulator, DIY enthusiasts can experiment to their heart's content.

You may have heard of the many embedded target boards available today, like the BeagleBoard, Raspberry Pi, BeagleBone, PandaBoard, Cubieboard, Wandboard, etc. But once you decide to start development for them, the right hardware with all the peripherals may not be available. The solution to starting development on embedded Linux for ARM is emulating the hardware with QEMU, which can be done easily without the need for any hardware. There are no risks involved, too.
QEMU is an open source emulator that can emulate the execution of a whole machine with a full-fledged OS running. QEMU supports various architectures, CPUs and target boards. To start with, let's emulate the Versatile Express board as a reference, since it is simple and well supported by recent kernel versions. This board comes with a Cortex-A9 (ARMv7) based CPU.
In this article, I would like to describe the process of cross compiling the Linux kernel for the ARM architecture with device tree support. It is focused on covering the entire process of working, from the boot loader to a file system with SD card support. As this process is almost similar to working with most target boards, you can apply these techniques on other boards too.

Device tree
Flattened Device Tree (FDT) is a data structure that describes the hardware; it is an initiative that came from Open Firmware. From the device tree perspective, the kernel no longer contains the hardware description, which is located in a separate binary called the device tree blob (dtb) file. So, one compiled kernel can support various hardware configurations within a wider architecture family. For example, the same kernel built for the OMAP family can work with various targets like the BeagleBoard, BeagleBone, PandaBoard, etc, with dtb files. The boot loader should be customised to support this, as two binaries (the kernel image and the dtb file) are to be loaded in memory. The boot loader passes hardware descriptions to the kernel in the form of dtb files. Recent kernel versions come with a built-in device tree compiler, which can generate all the dtb files related to the selected architecture family from device tree source (dts) files. Using the device tree for ARM has become mandatory for all new SoCs, with support from recent kernel versions.
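The "one kernel, many hardware descriptions" idea can be caricatured in a few lines. This is only a toy model for illustration: the board and SoC names come from the article, but the dictionary fields and driver names are invented and bear no relation to real dts syntax.

```python
# Toy model: one "kernel" with many drivers built in; each board supplies
# its own hardware description (playing the role of the dtb file), and the
# same kernel activates only what that description lists.
KERNEL_DRIVERS = {"uart": "serial driver", "mmc": "mmc driver", "i2c": "i2c driver"}

beagleboard_dtb = {"compatible": "ti,omap3", "devices": ["uart", "mmc"]}
pandaboard_dtb = {"compatible": "ti,omap4", "devices": ["uart", "mmc", "i2c"]}

def boot(dtb):
    """Probe only the devices the hardware description mentions."""
    return [KERNEL_DRIVERS[dev] for dev in dtb["devices"]]

print(boot(beagleboard_dtb))  # → ['serial driver', 'mmc driver']
```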


Building QEMU from sources
You may obtain pre-built QEMU binaries from your distro repositories, or build QEMU from sources, as follows. Download a recent stable version of QEMU, say qemu-2.0.tar.bz2, and extract and build it:

tar -jxvf qemu-2.0.tar.bz2
cd qemu-2.0
./configure --target-list=arm-softmmu,arm-linux-user --prefix=/opt/qemu-arm
make
make install

Figure 1: Kernel configuration–main menu

You will observe commands like qemu-arm, qemu-system-arm and qemu-img under /opt/qemu-arm/bin. Among these, qemu-system-arm is useful to emulate the whole system with OS support.

Preparing an image for the SD card
QEMU can emulate an image file as storage media in the form of an SD card, flash memory, hard disk or CD drive. Let's create an image file using qemu-img in raw format and create a FAT file system in it, as follows. This image file acts like a physical SD card for the actual target board:

qemu-img create -f raw sdcard.img 128M
#optionally you may create a partition table in this image
#using tools like sfdisk, parted
mkfs.vfat sdcard.img
#mount this image under some directory and copy required files
mkdir /mnt/sdcard
mount -o loop,rw,sync sdcard.img /mnt/sdcard

Setting up the toolchain
We need a toolchain, which is a collection of various cross development tools to build components for the target platform. Getting a toolchain for your Linux kernel is always tricky, so until you are comfortable with the process, please use tested versions only. I have tested with pre-built toolchains from the Linaro organisation, which can be got from http://releases.linaro.org/14.04/components/toolchain/binaries/ (gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux.tar.xz), or any later stable version. Next, extract it and set the path for the cross tools under this toolchain, as follows:

tar -xvf gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux.tar.xz -C /opt
export PATH=/opt/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux/bin:$PATH

You will notice various tools like gcc, ld, etc, under /opt/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux/bin, all with the prefix arm-linux-gnueabihf-.

Building mkimage
The mkimage command is used to create images for use with the u-boot boot loader. Here, we'll use this tool to transform the kernel image to be used with u-boot. Since this tool is available only through u-boot, we need to go for a quick build of this boot loader to generate mkimage. Download a recent stable version of u-boot (tested with u-boot-2014.04.tar.bz2) from ftp.denx.de/pub/u-boot:

tar -jxvf u-boot-2014.04.tar.bz2
cd u-boot-2014.04
make tools-only

Now, copy mkimage from the tools directory to any directory under the standard path (like /usr/local/bin) as a super user, or set the path to the tools directory each time, before the kernel build.

Building the Linux kernel
Download the most recent stable version of the kernel source from kernel.org (tested with linux-3.14.10.tar.xz):

tar -xvf linux-3.14.10.tar.xz
cd linux-3.14.10
make mrproper  #clean all built files and configuration files
make ARCH=arm vexpress_defconfig  #default configuration for the given board
make ARCH=arm menuconfig  #customise the configuration

Then, to customise the kernel configuration (Figure 1), follow the steps listed below:
1) Set a personalised string, say '-osfy-fdt', as the local version of the kernel under general setup.
2) Ensure that ARM EABI and old ABI compatibility are enabled under kernel features.
3) Under device drivers --> block devices, enable RAM disk support for initrd usage as a static module, and increase the default size to 65536 (64MB).
You can use the arrow keys to navigate between the various options
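A side note on the "raw" format requested from qemu-img for sdcard.img: a raw image carries no header or metadata at all; it is simply a fixed-size file. The sketch below creates an equivalent 128 MiB file by hand (qemu-img additionally supports formats such as qcow2, which do carry metadata).

```python
# A raw disk image is just a fixed-size file; this mirrors
# "qemu-img create -f raw sdcard.img 128M" (the file is sparse on most
# file systems, so it does not consume 128 MiB of disk immediately).
import os

with open("sdcard.img", "wb") as f:
    f.truncate(128 * 1024 * 1024)

print(os.path.getsize("sdcard.img"))  # → 134217728
```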


and the space bar to select among the various states (blank, m or *).
4) Make sure devtmpfs is enabled under the Device Drivers and Generic Driver options.

Figure 2: Kernel configuration–RAM disk support

Now, let's go ahead with building the kernel, as follows:

#generate kernel image as zImage and necessary dtb files
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage dtbs
#transform zImage to use with u-boot
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- uImage LOADADDR=0x60008000
#copy necessary files to sdcard
cp arch/arm/boot/zImage /mnt/sdcard
cp arch/arm/boot/uImage /mnt/sdcard
cp arch/arm/boot/dts/*.dtb /mnt/sdcard
#Build dynamic modules and copy them to a suitable destination
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules_install INSTALL_MOD_PATH=

You may skip the last two steps for the moment, as the given configuration steps avoid dynamic modules. All the necessary modules are configured as static.

Getting rootfs
We require a file system to work with the kernel we have built. Download the pre-built rootfs image to test with QEMU from the following link: http://downloads.yoctoproject.org/releases/yocto/yocto-1.5.2/machines/qemu/qemuarm/core-image-minimal-qemuarm.ext3 and copy it to the SD card (/mnt/sdcard), renaming it as rootfs.img for easy usage. You may also obtain the rootfs image from some other repository, or build it from sources using Busybox.

Your first try
Let's boot this kernel image (zImage) directly, without u-boot, as follows:

export PATH=/opt/qemu-arm/bin:$PATH
qemu-system-arm -M vexpress-a9 -m 1024 -serial stdio \
  -kernel /mnt/sdcard/zImage \
  -dtb /mnt/sdcard/vexpress-v2p-ca9.dtb \
  -initrd /mnt/sdcard/rootfs.img \
  -append "root=/dev/ram0 console=ttyAMA0"

In the above command, we are treating rootfs as an 'initrd image', which is fine when rootfs is of a small size. You can connect larger file systems in the form of a hard disk or SD card. Let's try out rootfs through an SD card:

qemu-system-arm -M vexpress-a9 -m 1024 -serial stdio \
  -kernel /mnt/sdcard/zImage \
  -dtb /mnt/sdcard/vexpress-v2p-ca9.dtb \
  -sd /mnt/sdcard/rootfs.img \
  -append "root=/dev/mmcblk0 console=ttyAMA0"

In case the SD card image file holds a valid partition table, we need to refer to the individual partitions like /dev/mmcblk0p1, /dev/mmcblk0p2, etc. Since the current image file is not partitioned, we can refer to it by the device file name /dev/mmcblk0.

Building u-boot
Switch back to the u-boot directory (u-boot-2014.04), build u-boot as follows and copy it to the SD card:

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- vexpress_ca9x4_config
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-
cp u-boot /mnt/sdcard

# you can go for a quick test of the generated u-boot as follows
qemu-system-arm -M vexpress-a9 -kernel /mnt/sdcard/u-boot -serial stdio

Let's ignore errors, such as u-boot being unable to locate a kernel image or other suitable files, at this stage.

The final steps
Let's boot the system with u-boot using the SD card image file, making sure the QEMU PATH is not disturbed. Unmount the SD card image and then boot using QEMU:

umount /mnt/sdcard

Figure 3: U-boot loading


qemu-system-arm -M vexpress-a9 -sd sdcard.img -m 1024 \
  -serial stdio -kernel u-boot

You can stop autoboot by hitting any key within the time limit, and enter the following commands at the u-boot prompt to load rootfs.img, uImage and the dtb file from the SD card to suitable memory locations without overlapping. Also, set the kernel boot parameters using setenv, as shown below (here, 0x82000000 stands for the location of the loaded rootfs image and 8388608 is the size of the rootfs image).

Note: The following commands are internal to u-boot and must be entered at the u-boot prompt.

fatls mmc 0:0  #list out partition contents
fatload mmc 0:0 0x82000000 rootfs.img  #note down the size of the image being loaded
fatload mmc 0:0 0x80200000 uImage
fatload mmc 0:0 0x80100000 vexpress-v2p-ca9.dtb
setenv bootargs 'console=ttyAMA0 root=/dev/ram0 rw initrd=0x82000000,8388608'
bootm 0x80200000 - 0x80100000

Ensure a space before and after the '-' symbol in the above command. Log in using 'root' as the username and a blank password to play around with the system.

Figure 4: Loading of kernel with FDT support

I hope this article proves useful for bootstrapping with embedded Linux and for teaching the concepts when there is no hardware available.

Acknowledgements
I thank Babu Krishnamurthy, a freelance trainer, for his valuable inputs on embedded Linux and OMAP hardware during the course of my embedded journey. I am also grateful to C-DAC for the good support I've received.

References
[1] elinux.org/Qemu
[2] Device Tree for Dummies by Thomas Petazzoni (free-electrons.com)
[3] A few inputs taken from en.wikipedia.org/wiki/Device_tree
[4] mkimage man page from u-boot documentation

By: Rajesh Sola
The author is a faculty member of C-DAC's Advanced Computing Training School, Pune, in the embedded systems domain. You can reach him at [email protected].


Contiki OS: Connecting Microcontrollers to the Internet of Things

As the Internet of Things becomes more of a reality, Contiki, an open source OS, allows DIY enthusiasts to experiment with connecting tiny, low-cost, low-power microcontrollers to the Internet.

Contiki is an open source operating system for connecting tiny, low-cost, low-power microcontrollers to the Internet. It is preferred because it supports various Internet standards and rapid development, offers a selection of hardware, has an active community to help, and has commercial support bundled with an open source licence. Contiki is designed for tiny devices, and thus its memory footprint is far less when compared with other systems. It supports full TCP with IPv6, and the device's power management is handled by the OS. All the modules of Contiki are loaded and unloaded during run time; it implements protothreads, uses a lightweight file system, and supports various hardware platforms with 'sleepy' routers (routers which sleep between message relays).
One important feature of Contiki is its use of the Cooja simulator for emulation, in case any of the hardware devices are not available.

Installation of Contiki
Contiki can be downloaded as 'Instant Contiki', which is available as a single download that contains an entire Contiki development environment. It is an Ubuntu Linux virtual machine that runs in VMware Player, and has Contiki and all the development tools, compilers and simulators used in Contiki development already installed. Most users prefer Instant Contiki over the source code binaries. The current version of Contiki (at the time of writing) is 2.7.
Step 1: Install VMware Player (which is free for academic and personal use).
Step 2: Download the Instant Contiki virtual image, which is approximately 2.5 GB in size (http://sourceforge.net/projects/contiki/files/Instant%20Contiki/), and unzip it.
Step 3: Open the virtual machine and boot the Contiki OS; then wait till the login screen appears.
Step 4: Input the password as 'user'; this shows the Ubuntu (Contiki) desktop.

Running the simulation
To run a simulation, Contiki comes with many prebuilt modules that can be readily run on the Cooja simulator or on the real hardware platform. There are two methods of opening the Cooja simulator window.
Method 1: On the desktop, as shown in Figure 1, double click the Cooja icon. It will compile the binaries for the first time and open the simulation windows.
Method 2: Open the terminal and go to the Cooja directory:

pradeep@localhost$] cd contiki/tools/cooja
pradeep@localhost$] ant run

You can see the simulation window, as shown in Figure 2.

Creating a new simulation
To create a simulation in Contiki, go to File menu → New Simulation and name it, as shown in Figure 3. Select any one radio medium, in this case Unit Disk Graph Medium (UDGM): Distance Loss, and click 'Create'. Figure 4 shows the simulation window, which has the following sub-windows.
Network window: This shows all the motes in the simulated network.
Timeline window: This shows all the events over time.
Mote output window: All serial port outputs will be shown here.


Figure 1: Contiki OS desktop

Figure 3: New simulation

Figure 2: Cooja compilation

Notes window: User notes can be put here.
Simulation control window: Users can start, stop and pause the simulation from here.

Adding the sensor motes
Once the simulation window is opened, motes can be added to the simulation using Menu: Motes → Add Motes. Since we are adding the motes for the first time, the type of mote has to be specified. There are more than 10 types of motes supported by Contiki. Here are some of them:
- MicaZ
- Sky
- Trxeb1120
- Trxeb2520
- cc430
- ESB
- eth11
- Exp2420
- Exp1101
- Exp1120
- WisMote
- Z1
Contiki will generate object code for these motes to run on the real hardware, and also to run on the simulator if the hardware platform is not available.
Step 1: To add a mote, go to Add Motes → Select any of the motes given above → MicaZ mote. You will get the screen shown in Figure 5.
Step 2: Cooja opens the Create Mote Type dialogue box, which asks for the name of the mote type as well as the Contiki application that the mote type will run. For this example, click the button on the right hand side to choose the Contiki application and select /home/user/contiki/examples/hello-world/hello-world.c. Then, click Compile.
Step 3: Once it compiles without errors, click Create (Figure 5).
Step 4: The screen now asks you to enter the number of motes to be created and their positions (random, ellipse, linear or manual positions).
In this example, 10 motes are created. Click the Start button in the Simulation Control window and enable the motes' Log Output: printf() statements in the View menu of the Network window. The Network window shows the output 'Hello World' in the sensors. Figure 6 illustrates this.
This is a simple output in the Network window. If real MicaZ motes are connected, 'Hello World' will be displayed on the LCD panel of the sensor motes. The overall output is shown in Figure 7.
The output of the above Hello World application can also be obtained using the terminal. To compile and test the program, go into the hello-world directory:

pradeep@localhost $] cd /home/user/contiki/examples/hello-world
pradeep@localhost $] make

This will compile the Hello World program for the native target, which causes the entire Contiki operating system and the Hello World application to be compiled into a single program that can be run by typing the following command (depicted in Figure 8):


Figure 5: Mote creation and compilation in Contiki
Figure 7: Simulation window of Contiki

Figure 8: Compilation using the terminal

Here is the C source code for the above Hello World application.

#include "contiki.h"
#include <stdio.h> /* For printf() */
/*---------------------------------------------------------------------------*/
PROCESS(hello_world_process, "Hello world process");
AUTOSTART_PROCESSES(&hello_world_process);
/*---------------------------------------------------------------------------*/
PROCESS_THREAD(hello_world_process, ev, data)
{
  PROCESS_BEGIN();

  printf("Hello, world\n");

  PROCESS_END();
}

Figure 6: Log output in motes

pradeep@localhost$] ./hello-world.native

This will print out the following text:

Contiki initiated, now starting process scheduling
Hello, world

The program will then appear to hang, and must be stopped by pressing Control + C.

Developing new modules
Contiki comes with numerous pre-built modules like IPv6, IPv6 UDP, hello world, sensor nets, EEPROM, IRC, Ping, Ping-IPv6, etc. These modules can run with all the sensors irrespective of their make. Also, there are modules that run only on specific sensors. For example, the energy module of a Sky mote can be used only on Sky motes, and gives errors if run with other motes like Z1 or MicaZ.
Developers can build new modules for various sensor motes that can be used with different sensor BSPs using conventional C programming, and then deploy them in the corresponding sensors.
The Internet of Things is an emerging technology that leads to concepts like smart cities, smart homes, etc. Implementing the IoT is a real challenge, but the Contiki OS can be of great help here. It can be very useful for deploying applications like automatic lighting systems in buildings, smart refrigerators, wearable computing systems, domestic power management for homes and offices, etc.

References
[1] http://www.contiki-os.org/

By: T S Pradeep Kumar
The author is a professor at VIT University, Chennai. He has two websites: http://www.nsnam.com and http://www.pradeepkumar.org. He can be contacted at [email protected].
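As a closing aside: the protothreads mentioned at the beginning of this article give Contiki cooperative multitasking without a separate stack per process. Python generators can sketch the flavour of the idea, with yield standing in for PROCESS_WAIT_EVENT(); this is only an analogy, not Contiki's actual C implementation (which is built on switch-based local continuations).

```python
# Analogy only: each "process" runs until it yields, and a tiny round-robin
# scheduler resumes it later, much as Contiki's event loop resumes a process
# at its last PROCESS_WAIT_EVENT() point.
def hello_process():
    print("Hello world process started")
    yield  # hand control back, like PROCESS_WAIT_EVENT()
    print("Hello, world")

def blink_process():
    for _ in range(2):
        print("LED toggled")
        yield

def scheduler(processes):
    processes = list(processes)
    while processes:  # resume each live process in turn
        for p in processes[:]:
            try:
                next(p)
            except StopIteration:
                processes.remove(p)

scheduler([hello_process(), blink_process()])
```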


This article introduces the reader to Nix, a reliable, multi-user, multi-version, portable, reproducible and purely functional package manager. Software enthusiasts will find it a powerful package manager for Linux and UNIX systems.

Linux is versatile and full of choices. Every other day you wake up to hear about a new distro. Most of these are based on a more famous distro and use its package manager. There are many package managers, like Zypper and Yum for Red Hat-based systems, apt-get for Debian-based systems, and others like Pacman and Emerge. No matter how many package managers you have, you may still run into conflicts, or you may not be able to install multiple versions of the same package, especially for tinkering and testing. If you frequently mess up your system, you should try out Nix, which is more than "just another package manager."
Nix is a purely functional package manager. According to its site, "Nix is a powerful package manager for Linux and other UNIX systems that makes package management reliable and reproducible. It provides atomic upgrades and roll-backs, side-by-side installation of multiple versions of a package, multi-user package management and easy set-up of build environments." Here are some reasons for which the site recommends you ought to try Nix.
- Reliable: Nix's purely functional approach ensures that installing or upgrading one package cannot break other packages.
- Reproducible: Nix builds packages in isolation from each other. This ensures that they are reproducible and do not have undeclared dependencies. So if a package works on one machine, it will also work on another.
- It's great for developers: Nix makes it simple to set up and share build environments for your projects, regardless of what programming languages and tools you're using.
- Multi-user, multi-version: Nix supports multi-user package management. Multiple users can share a common Nix store securely without the need to have root privileges to install software, and can install and use different versions of a package.
- Source/binary model: Conceptually, Nix builds packages from source, but can transparently use binaries from a binary cache, if available.
- Portable: Nix runs on Linux, Mac OS X, FreeBSD and other systems. Nixpkgs, the Nix packages collection, contains thousands of packages, many pre-compiled.

Installation
Installation is pretty straightforward for Linux and Macs; everything is handled magically for you by a script, but there are some pre-requisites like sudo, curl and bash, so make sure you have them installed before moving on. Type the following command at a terminal:

bash <(curl https://nixos.org/nix/install)

It will ask for sudo access to create a directory named Nix. You may see something similar to what's shown in Figure 1.
There are binary packages available for Nix, but we are looking for a new package manager, so using another package manager to install it is bad form (though you can, if you want to). If you are running another distro with no binary packages, or are running Darwin or OpenBSD, you have the option of installing it from source. To set the environment variables right, use the following command:

. ~/.nix-profile/etc/profile.d/nix.sh

Usage
Now that we have Nix installed, let's use it for further testing. To see a list of installable packages, run the following:

nix-env -qa

This will list the installable packages. To search for a specific package, pipe the output of the previous command to grep with the name of the target package as the argument. Let's search for Ruby, with the following command:

nix-env -qa | grep ruby

It informs us that there are three versions of Ruby available.


Let's install Ruby 2.0. There are two ways to install a package, because packages can be referred to by two identifiers. The first one is the name of the package, which might not be unique, and the second is the attribute set value. As our search for the various Ruby versions showed that the name of the package for Ruby 2.0 is ruby-2.0.0-p353, let's try to install it, as follows:

nix-env -i ruby-2.0.0-p353

It gives the following error as the output:

error: unable to fork: Cannot allocate memory
nix-env: src/libutil/util.cc:766: int nix::Pid::wait(bool): Assertion `pid != -1' failed.
Aborted (core dumped)

Figure 1: Nix installation

As per the Nix wiki, the name of the package might not be unique and may yield an error with some packages. So we could try things out with the attribute set value. For Ruby 2.0, the attribute set value is nixpkgs.ruby2 and can be used with the following command:

nix-env -iA nixpkgs.ruby2

This worked. Notice the use of the -iA flag when using the attribute set value. I talked to Nix developer Domen Kožar about this and he said, "Multiple packages may share the same name and version; that's why using attribute sets is a better idea, since it guarantees uniqueness. This is some kind of a downside of Nix, but this is how it functions :)"
To see the attribute name and the package name, use the following command:

nix-env -qaP | grep package_name

In the case of Ruby, I replaced package_name with ruby2 and it yielded:

nixpkgs.ruby2    ruby-2.0.0-p353

To update a specific package and all its dependencies, use:

nix-env -uA nixpkgs.package_attribute_name

To update all the installed packages, use:

nix-env -u

To uninstall a package, use:

nix-env -e package_name

In my case, while using Ruby 2.0, I replaced it with ruby-2.0.0-p353, which was the package name and not the attribute name.
Well, that's just the tip of the iceberg. To learn more, refer to the Nix manual at http://nixos.org/nix/manual. There is a distro named NixOS, which uses Nix for both configuration and package management.

Figure 2: Nix search result
Figure 3: Package and attribute usage

References
[1] https://www.domenkozar.com/2014/01/02/getting-started-with-nix-package-manager/
[2] http://nixos.org/nix/manual/
[3] http://nixer.ghost.io/why/ - To convince yourself to use Nix

By: Jatin Dhankhar
The author is a C++ lover and a Rubyist. His areas of interest include robotics, programming and Web development. He can be reached at [email protected].


Solve Engineering Problems with Laplace Transforms

Laplace transforms are integral mathematical transforms widely used in physics and engineering. In this 21st article in the series on mathematics in open source, the author demonstrates Laplace transforms through Maxima.

In higher mathematics, transforms play an important role. A transform is mathematical logic to transform or convert a mathematical expression into another mathematical expression, typically from one domain to another. Laplace and Fourier are two very common examples, transforming from the time domain to the frequency domain. In general, such transforms have their corresponding inverse transforms. And this combination of direct and inverse transforms is very powerful in solving many real-life engineering problems. The focus of this article is Laplace and its inverse transform, along with some problem-solving insights.

The Laplace transform
Mathematically, the Laplace transform F(s) of a function f(t) is defined as follows:

F(s) = ∫₀^∞ f(t) e^(-st) dt

…where 't' represents time and 's' represents complex angular frequency.
To demonstrate it, let's take a simple example of f(t) = 1. Substituting and integrating, we get F(s) = 1/s. Maxima has the function laplace() to do the same. In fact, with that, we can choose to let our variables 't' and 's' be anything else as well. But, as per our mathematical notations, preserving them as 't' and 's' would be the most appropriate. Let's start with some basic Laplace transforms. (Note that string() has been used to just flatten the expression.)

$ maxima -q
(%i1) string(laplace(1, t, s));
(%o1) 1/s
(%i2) string(laplace(t, t, s));
(%o2) 1/s^2
(%i3) string(laplace(t^2, t, s));
(%o3) 2/s^3
(%i4) string(laplace(t+1, t, s));
(%o4) 1/s+1/s^2
(%i5) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?

p; /* Our input */
(%o5) gamma(n+1)*s^(-n-1)
(%i6) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?

n; /* Our input */
(%o6) gamma_incomplete(n+1,0)*s^(-n-1)
(%i7) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?

z; /* Our input, making it non-solvable */
(%o7) 'laplace(t^n,t,s)
(%i8) string(laplace(1/t, t, s)); /* Non-solvable */
(%o8) 'laplace(1/t,t,s)
(%i9) string(laplace(1/t^2, t, s)); /* Non-solvable */
(%o9) 'laplace(1/t^2,t,s)
(%i10) quit();

In the above examples, the expression is preserved as is, in case of non-solvability.
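The defining integral can also be checked numerically, independently of Maxima. The pure-Python sketch below approximates L{1}(s), the integral of e^(-st) from 0 to infinity, by a trapezoidal sum truncated at t = T (the values of T and n are arbitrary choices for this illustration):

```python
# Numerically approximate the Laplace transform of f(t) = 1, which should
# come out close to 1/s (the integral of e^(-s*t) from 0 to infinity).
import math

def laplace_of_one(s, T=50.0, n=200000):
    h = T / n  # step size of the trapezoidal rule on [0, T]
    total = 0.5 * (1.0 + math.exp(-s * T))  # end-point terms
    for k in range(1, n):
        total += math.exp(-s * k * h)
    return h * total

print(round(laplace_of_one(2.0), 6))  # → 0.5
```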


laplace() is designed to understand various symbolic (%o4) [s = -w,s = w] functions, such as sin(), cos(), sinh(), cosh(), log(), exp(), (%i5) string(solve(denom(laplace(cos(w*t), t, s)), s)); delta(), erf(). delta() is the Dirac delta function, and erf() is the (%o5) [s = -%i*w,s = %i*w] error function—others being the usual mathematical functions. (%i6) string(solve(denom(laplace(cosh(w*t), t, s)), s)); (%o6) [s = -w,s = w] $ maxima -q (%i7) string(solve(denom(laplace(exp(w*t), t, s)), s)); (%i1) string(laplace(sin(t), t, s)); (%o7) [s = w] (%o1) 1/(s^2+1) (%i8) string(solve(denom(laplace(log(w*t), t, s)), s)); (%i2) string(laplace(sin(w*t), t, s)); (%o8) [s = 0] (%o2) w/(w^2+s^2) (%i9) string(solve(denom(laplace(delta(w*t), t, s)), s)); (%i3) string(laplace(cos(t), t, s)); (%o9) [] (%o3) s/(s^2+1) (%i10) string(solve(denom(laplace(erf(w*t), t, s)), s)); (%i4) string(laplace(cos(w*t), t, s)); (%o10) [s = 0] (%o4) s/(w^2+s^2) (%i11) quit(); (%i5) string(laplace(sinh(t), t, s)); (%o5) 1/(s^2-1) Involved Laplace transforms (%i6) string(laplace(sinh(w*t), t, s)); laplace() also understands derivative() / diff(), integrate(), (%o6) -w/(w^2-s^2) sum(), and ilt() - the inverse Laplace transform. 
Here are some (%i7) string(laplace(cosh(t), t, s)); interesting transforms showing the same: (%o7) s/(s^2-1) (%i8) string(laplace(cosh(w*t), t, s)); $ maxima -q (%o8) -s/(w^2-s^2) (%i1) laplace(f(t), t, s); (%i9) string(laplace(log(t), t, s)); (%o1) laplace(f(t), t, s) (%o9) (-log(s)-%gamma)/s (%i2) string(laplace(derivative(f(t), t), t, s)); (%i10) string(laplace(exp(t), t, s)); (%o2) s*’laplace(f(t),t,s)-f(0) (%o10) 1/(s-1) (%i3) string(laplace(integrate(f(x), x, 0, t), t, s)); (%i11) string(laplace(delta(t), t, s)); (%o3) ‘laplace(f(t),t,s)/s (%o11) 1 (%i4) string(laplace(derivative(sin(t), t), t, s)); (%i12) string(laplace(erf(t), t, s)); (%o4) s/(s^2+1) (%o12) %e^(s^2/4)*(1-erf(s/2))/s (%i5) string(laplace(integrate(sin(t), t), t, s)); (%i13) quit(); (%o5) -s/(s^2+1) (%i6) string(sum(t^i, i, 0, 5)); Interpreting the transform (%o6) t^5+t^4+t^3+t^2+t+1 A Laplace transform is typically a fractional expression (%i7) string(laplace(sum(t^i, i, 0, 5), t, s)); consisting of a numerator and a denominator. Solving (%o7) 1/s+1/s^2+2/s^3+6/s^4+24/s^5+120/s^6 the denominator, by equating it to zero, gives the various (%i8) string(laplace(ilt(1/s, s, t), t, s)); complex frequencies associated with the original function. (%o8) 1/s These are called the poles of the function. For example, the (%i9) quit(); Laplace transform of sin(w * t) is w/(s^2 + w^2), where the denominator is s^2 + w^2. Equating that to zero and solving Note the usage of ilt() - inverse Laplace transform in the %i8 it, gives the complex frequency s = +iw, -iw; thus, indicating of the above example. Calling laplace() and ilt() one after the that the frequency of the original expression sin(w * t) is ‘w’, other cancels their effect—that is what is meant by inverse. Let’s which indeed it is. Here are a few demonstrations of the same: look into some common inverse Laplace transforms.

$ maxima -q Inverse Laplace transforms (%i1) string(laplace(sin(w*t), t, s)); (%o1) w/(w^2+s^2) $ maxima -q (%i2) string(denom(laplace(sin(w*t), t, s))); /* The Denominator (%i1) string(ilt(1/s, s, t)); */ (%o1) 1 (%o2) w^2+s^2 (%i2) string(ilt(1/s^2, s, t)); (%i3) string(solve(denom(laplace(sin(w*t), t, s)), s)); /* The (%o2) t Poles */ (%i3) string(ilt(1/s^3, s, t)); (%o3) [s = -%i*w,s = %i*w] (%o3) t^2/2 (%i4) string(solve(denom(laplace(sinh(w*t), t, s)), s)); (%i4) string(ilt(1/s^4, s, t));


(%o4) t^3/6
(%i5) string(ilt(1/s^5, s, t));
(%o5) t^4/24
(%i6) string(ilt(1/s^10, s, t));
(%o6) t^9/362880
(%i7) string(ilt(1/s^100, s, t));
(%o7) t^99/933262154439441526816992388562667004907159682643816214685929638952175999932299156089414639761565182862536979208272237582511852109168640000000000000000000000
(%i8) string(ilt(1/(s-a), s, t));
(%o8) %e^(a*t)
(%i9) string(ilt(1/(s^2-a^2), s, t));
(%o9) %e^(a*t)/(2*a)-%e^-(a*t)/(2*a)
(%i10) string(ilt(s/(s^2-a^2), s, t));
(%o10) %e^(a*t)/2+%e^-(a*t)/2
(%i11) string(ilt(1/(s^2+a^2), s, t));
Is a zero or nonzero?

n; /* Our input */
(%o11) sin(a*t)/a
(%i12) string(ilt(s/(s^2+a^2), s, t));
Is a zero or nonzero?

n; /* Our input */
(%o12) cos(a*t)
(%i13) assume(a < 0) or assume(a > 0)$
(%i14) string(ilt(1/(s^2+a^2), s, t));
(%o14) sin(a*t)/a
(%i15) string(ilt(s/(s^2+a^2), s, t));
(%o15) cos(a*t)
(%i16) string(ilt((s^2+s+1)/(s^3+s^2+s+1), s, t));
(%o16) sin(t)/2+cos(t)/2+%e^-t/2
(%i17) string(laplace(sin(t)/2+cos(t)/2+%e^-t/2, t, s));
(%o17) s/(2*(s^2+1))+1/(2*(s^2+1))+1/(2*(s+1))
(%i18) string(rat(laplace(sin(t)/2+cos(t)/2+%e^-t/2, t, s)));
(%o18) (s^2+s+1)/(s^3+s^2+s+1)
(%i19) quit();

Observe that if we take the Laplace transform of the above %o outputs, they give back the expressions that were input to ilt() at the corresponding %i's. %i18 specifically shows one such example: it does laplace() of the output at %o16, giving back the expression that was input to ilt() at %i16.

Solving differential and integral equations
Now, with these insights, we can easily solve many interesting and otherwise complex problems. One of them is solving differential equations. Let's explore a simple example of solving f'(t) + f(t) = e^t, where f(0) = 0. First, let's take the Laplace transform of the equation. Then substitute the value for f(0), and simplify to obtain the Laplace transform of f(t), i.e., F(s). Finally, compute the inverse Laplace transform of F(s) to get the solution for f(t).

$ maxima -q
(%i1) string(laplace(diff(f(t), t) + f(t) = exp(t), t, s));
(%o1) s*'laplace(f(t),t,s)+'laplace(f(t),t,s)-f(0) = 1/(s-1)

Substituting f(0) as 0, and then simplifying, we get laplace(f(t),t,s) = 1/((s-1)*(s+1)), for which we do an inverse Laplace transform:

(%i2) string(ilt(1/((s-1)*(s+1)), s, t));
(%o2) %e^t/2-%e^-t/2
(%i3) quit();

That gives us f(t) = (e^t - e^-t)/2, i.e., sinh(t), which definitely satisfies the given differential equation.

Similarly, we can solve equations with integrals. And not just integrals, but also equations with both differentials and integrals. Such equations come up very often when solving problems linked to electrical circuits with resistors, capacitors and inductors. Let's again look at a simple example that demonstrates the fact. Let's assume we have a 1 ohm resistor, a 1 farad capacitor and a 1 henry inductor in series, being powered by a sinusoidal voltage source of frequency 'w'. What would be the current in the circuit, assuming it to be zero at t = 0? It would yield the following equation: R * i(t) + 1/C * ∫ i(t) dt + L * di(t)/dt = sin(w*t), where R = 1, C = 1, L = 1. So, the equation can be simplified to i(t) + ∫ i(t) dt + di(t)/dt = sin(w*t). Now, following the procedure described above, let's carry out the following steps:

$ maxima -q
(%i1) string(laplace(i(t) + integrate(i(x), x, 0, t) + diff(i(t), t) = sin(w*t), t, s));
(%o1) s*'laplace(i(t),t,s)+'laplace(i(t),t,s)/s+'laplace(i(t),t,s)-i(0) = w/(w^2+s^2)

Substituting i(0) as 0, and simplifying, we get laplace(i(t), t, s) = w/((w^2+s^2)*(s+1/s+1)). Solving that by inverse Laplace transform, we very easily get the complex expression for i(t), as follows:

(%i2) string(ilt(w/((w^2+s^2)*(s+1/s+1)), s, t));
Is w zero or nonzero?

n; /* Our input: Non-zero frequency */
(%o2) w^2*sin(t*w)/(w^4-w^2+1)-(w^3-w)*cos(t*w)/(w^4-w^2+1)+%e^-(t/2)*(sin(sqrt(3)*t/2)*(-(w^3-w)/(w^4-w^2+1)-2*w/(w^4-w^2+1))/sqrt(3)+cos(sqrt(3)*t/2)*(w^3-w)/(w^4-w^2+1))
(%i3) quit();

By: Anil Kumar Pugalia
The author is a gold medallist from NIT Warangal and IISc Bangalore, and is also a hobbyist in open source hardware and software, with a passion for mathematics. Learn more about him and his experiments at http://sysplay.in. He can be reached at [email protected].
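As a quick cross-check of the differential equation example above, the solution f(t) = sinh(t) can also be verified numerically, independently of Maxima. The following is a small Python sketch (Python and the helper names here are our own verification aid, not part of the article), using a central-difference approximation for f'(t):

```python
import math

# Check that f(t) = (e^t - e^-t)/2 = sinh(t) satisfies
# f'(t) + f(t) = e^t, with f(0) = 0, at a few sample points.
def f(t):
    return math.sinh(t)

h = 1e-6  # step for the central-difference derivative
ok = True
for t in [0.0, 0.5, 1.0, 2.0]:
    fprime = (f(t + h) - f(t - h)) / (2 * h)
    ok = ok and abs(fprime + f(t) - math.exp(t)) < 1e-6
print(ok)  # True
```

The same idea works for the circuit example: substitute the closed-form i(t) back into the equation and compare both sides numerically.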


Your Shell with Zsh and Oh-My-Zsh Discover the Z shell, a powerful scripting language designed for interactive use.

The Z shell (zsh) is a powerful interactive login shell and command interpreter for shell scripting. A big improvement over older shells, it has a lot of new features and the support of the Oh-My-Zsh framework that makes using the terminal fun. Released in 1990, the zsh shell is fairly new compared to its older counterpart, the bash shell. Although more than two decades have passed since its release, it is still very popular among programmers and developers who use the command-line interface on a daily basis.

Why zsh is better than the rest
Most of what is mentioned below can probably be implemented or configured in the bash shell as well; however, it is much more powerful in the zsh shell.

Advanced tab completion
Tab completion in zsh supports the command line option for the auto completion of commands. Pressing the tab key twice enables the auto complete mode, and you can cycle through the options using the tab key. You can also move through the files in a directory with the tab key. Zsh has tab completion for the path of directories or files in the command line too. Another great feature is that you can switch paths by using 1 to switch to the previous path, 2 to switch to the 'previous, previous' path and so on.

Real time highlighting and themeable prompts
To include real time highlighting, clone the zsh-syntax-highlighting repository from github (https://github.com/zsh-users/zsh-syntax-highlighting). This makes the command-line look stunning. In some terminals, existing commands are highlighted in green and those typed incorrectly are highlighted in red. Also, quoted text is highlighted in yellow. All this can be configured further according to your needs. Prompts on zsh can be customised to be right-aligned, left-aligned or as multi-lined prompts.

Globbing
Wikipedia defines globbing as follows: "In computer programming, in particular in a UNIX-like environment, the term globbing is sometimes used to refer to pattern matching based on wildcard characters." Shells before zsh also offered globbing; however, zsh offers extended globbing. Extra features can be enabled if the EXTENDEDGLOB option is set.
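As an aside, this kind of wildcard matching can also be reproduced outside the shell, which is handy for trying out a pattern before letting it loose on real files. Here is a small Python sketch using the standard fnmatch module (our own illustration, not part of zsh; the file names are made up):

```python
from fnmatch import fnmatch

# Ordinary wildcard (glob-style) matching, as in *.c; zsh's EXTENDEDGLOB
# layers extra operators such as ^ (negation) on top of these basics.
files = ["foo.c", "bar.o", "README", "main.c"]

matches = [f for f in files if fnmatch(f, "*.c")]
print(matches)      # ['foo.c', 'main.c']

# Emulating the effect of zsh's negation (as in ls -d ^*.c) by filtering:
non_matches = [f for f in files if not fnmatch(f, "*.c")]
print(non_matches)  # ['bar.o', 'README']
```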


Figure 1: Tab completion for command options

Figure 2: Tab completion for files

Here are some examples of the extended globbing offered by zsh. The ^ character is used to negate any pattern following it.

setopt EXTENDEDGLOB # Enables extended globbing in zsh.
ls *(.) # Displays all regular files.
ls -d ^*.c # Displays all directories and files that are not .c files.
ls -d ^*.* # Displays directories and files that have no extension.
ls -d ^file # Displays everything in the directory except a file called file.
ls -d *.^c # Displays files with extensions, except .c files.

An expression of the form <m-n> matches a range of integers. Also, files can be grouped in the search pattern.

% ls (foo|bar).*
bar.o foo.c foo.o
% ls *.(c|o|pro)
bar.o file.pro foo.c foo.o main.o q.c

To exclude a certain file from the search, the '~' character can be used.

% ls *.c
foo.c foob.c bar.c
% ls *.c~bar.c
foo.c foob.c
% ls *.c~f*
bar.c

These and several more extended globbing features can help immensely while working through large directories.

Case insensitive matching
Zsh supports pattern matching that is independent of whether the letters of the alphabet are upper or lower case. Zsh first surfs through the directory to find a match, and if one does not exist, it carries out a case insensitive search for the file or directory.

Sharing of command history among running shells
Running shells share command history, thereby eradicating the difficulty of having to remember the commands you typed earlier in another shell.

Aliases
Aliases are used to abbreviate commands and command options that are used very often, or for a combination of commands. Most other shells have aliases, but zsh supports global aliases. These are aliases that are substituted anywhere in the line. Global aliases can be used to abbreviate frequently-typed usernames, hostnames, etc. Here are some examples of aliases:

alias -g mr='rm'
alias -g TL='| tail -10'
alias -g NUL="> /dev/null 2>&1"

Installing zsh
To install zsh in Ubuntu or Debian-based distros, type the following:

sudo apt-get update && sudo apt-get install zsh # install zsh
chsh -s /bin/zsh # to make zsh your default shell

To install it on SUSE-based distros, type:

sudo zypper install zsh
finger yoda | grep zsh

Configuring zsh
The .zshrc file looks something like what is shown in Figure 4. Add your own aliases for commands you use frequently.

Customising zsh with Oh-My-Zsh
Oh-My-Zsh is an open source, community-driven framework for managing the zsh configuration. Although zsh is powerful in comparison to other shells, its main attraction is the themes, plugins and other features that come with it.
To install Oh-My-Zsh, you need to clone the Oh-My-Zsh repository from Github (https://github.com/robbyrussell/oh-my-zsh). A wide range of themes are available, so there is something for everybody.
To clone the repository from Github, use the following command. This installs Oh-My-Zsh in ~/.oh-my-zsh (a hidden directory in your home directory). The default path can be changed by setting the environment variable for zsh using export ZSH=/your/path

git clone https://github.com/robbyrussell/oh-my-zsh.git

To install Oh-My-Zsh via curl, type:

curl -L http://install.ohmyz.sh | sh


Figure 3: View previous paths

Figure 4: ~/.zshrc file

To install it via wget, type:

wget --no-check-certificate http://install.ohmyz.sh -O - | sh

To customise zsh, create a new zsh configuration, i.e., a ~/.zshrc file, by copying any of the existing templates provided:

cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc

Restart your zsh terminal to view the changes.

Figure 5: Setting aliases in ~/.zshrc file

Plugins
To check out the numerous plugins offered in Oh-My-Zsh, you can go to the plugins directory in ~/.oh-my-zsh. To enable these plugins, add them to the ~/.zshrc file and then source them.

cd ~/.oh-my-zsh
vim ~/.zshrc
source ~/.zshrc

If you want to install some plugin that is not present in the plugins directory, you can clone the plugin from Github or install it using wget or curl, and then source the plugin.

Themes
To view the themes in zsh, go to the themes/ directory. To change your theme, set ZSH_THEME in ~/.zshrc to the theme you desire and then source Oh-My-Zsh. If you do not want any theme enabled, set ZSH_THEME="". If you can't decide on a theme, you can set ZSH_THEME="random". This will change the theme every time you open a shell, and you can decide upon the one that you find most suitable for your needs.
To make your own theme, copy any one of the existing themes from the themes/ directory to a new file with a "zsh-theme" extension and make your changes to that. A customised theme is shown in Figure 6. Here, the user name, represented by %n, has been set to the colour green, and the computer name, represented by %m, has been set to the colour cyan. This is followed by the path, represented by %d. The prompt variable then looks like this...

PROMPT=' $fg[green]%n $fg[red]at $fg[cyan]%m--->$fg[yellow]%d: '

The prompt can be changed to incorporate spacing, git states, battery charge, etc, by declaring functions that do the same. For example, here, instead of printing the entire path including /home/darshana, we can define a function such that if PWD detects $HOME, it replaces the same with "~":

function get_pwd() {
  echo "${PWD/$HOME/~}"
}

To view the status of the current Git repository, the following code can be used:

function git_prompt_info() {


For U & Me Open Strategy Panasonic Looks to Engage with Developers in India! Panasonic entered the Indian smartphone market last year. In just one year, the company has assessed the potential of the market and has found that it could make India the headquarters for its smartphone division. But this cannot happen without that bit of extra effort from the company. While Panasonic is banking big on India’s favourite operating system, Android, it is also leaving no stone unturned to provide a unique user experience on its devices. Diksha P Gupta from Open Source For You spoke to Pankaj Rana, head, smartphones and tablets, Panasonic India Pvt Ltd, to get a clearer picture of the company’s growth plans. Excerpts…

Panasonic is all set to launch 15 smartphones and eight feature phones in India this year. While the company will keep its focus on the smartphone segment, it has no plans of losing its feature phone lovers, as Panasonic believes that there is still scope for the latter in the Indian market. That said, Panasonic will invest more energy in grabbing what it hopes will be a 5 per cent share in the Indian smartphone market. And that will happen with the help of Android.

Speaking about the strategy, Pankaj Rana, head, smartphones and tablets, Panasonic India Pvt Ltd, says, "We are banking on Android purely because it provides the choice of customisation. Based on this ability of Android, we have created a very different user experience for Panasonic smartphones." What Rana is referring to here is the single fit-home UI launched by Panasonic. He explains, "While we have provided the standard Android UI in the feature phones, the highly-efficient fit-home UI is available on Panasonic smartphones. When working on the standard Android UI, users need to use both hands to perform any task. However, the fit-home UI allows single-hand operations, making it easy for the user to function."

Yet another feature of the UI is that it can be operated in the landscape mode. Rana claims that many phones do not allow the use of various functions like settings, et al, in the landscape mode. He says, "We have kept the comfort of the users as our top priority and, hence, designed the UI in such a way that it offers a tablet-like experience as well. The Panasonic Eluga is a 12.7 cm (5-inch) phone. This kind of a UI will be a great advantage on big screen devices. For users of feature phones who are migrating to smartphones now, this kind of UI makes the transition easier."

Coming soon: An exclusive Panasonic app store
Well, if you thought the unique user experience was the end of the show, hold on. There's more coming… The company plans to leave no stone unturned when it comes to making its Android experience complete for the Indian region. Rana reveals, "We are planning to come up with a Panasonic exclusive app store, which should come to existence in the next 3-4 months."


When it comes to the development for this app store, Panasonic will look at hiring in-house developers, as well as associate with third party developers. Rana says, “We will look at all possible ways to make our app ecosystem an enriched one. Just for the record, this UI has been built within the company, with engineers from various facilities including India, Japan and Vietnam. For the exclusive app store that we are planning to build, we will have some third-party developers. But besides that, we plan to develop our in-house team as well. Right now, we have about 25 software engineers working with us in India, who are from Japan. We also have some Vietnamese resources working for us.” The company plans to do the hiring for the in- house team within the next six months. The team may comprise about 100 people. Rana clarifies that the developers hired in India are going to be based in Gurgaon, Bengaluru and Hyderabad. He says, “We already have about 20 developers in Bengaluru, who are on third party rolls. We are in the process of switching them to the company’s rolls over the next couple of months. Similarly, we have about 10 developers in Gurgaon. In addition, our R&D team in Vietnam has 70 members. We are also planning to shift the Vietnam operations to India, making the country our smartphone headquarters.” To take the idea of the Panasonic-exclusive app store further, the company is planning some developers’ engagement activities this November and December.

The consumer is the king! While Rana asserts that Panasonic can make one of the best offerings in the smartphone world, he recognises that consumers are looking for something different every time, when it comes to these fancy devices. He says, “Right now, companies are working on the UI level to offer that newness in the experience. But six months down the line, things will not remain the same. The situation is bound to change and, to survive in this business, developers need to predict the tastes of the consumers. But for now, it is about providing an easy experience, so that the feature phone users who are looking to migrate to smartphones find it convenient enough.”

TIPS & TRICKS

Convert images to PDF
Often, your scanned copies will be in an image format that you would like to convert to the PDF format. In Linux, there is an easy-to-use tool called convert that can convert any image to PDF. The following example shows you how:

$ convert scan1.jpg scan1.pdf

To convert multiple images into one PDF file, use the following command:

$ convert scan*.jpg scanned_docs.pdf

The 'convert' tool comes with the imagemagick package. If you do not find the convert command on your system, you will need to install imagemagick.
—Madhusudana Y N, [email protected]

Your own notepad
Here is a simple and fast method to create a notepad-like application that works in your Web browser. All you need is a browser that supports HTML 5 and the commands mentioned below. Open your HTML 5 supported Web browser and paste the following code in the address bar:

data:text/html, <html contenteditable>

Then use the following code:

data:text/html, <html><title>Text Editor</title><body contenteditable>

And finally…

data:text/html, <textarea style="width: 100%; height: 100%; border: none; outline: none; margin: 0; padding: 90px;" autofocus placeholder="wake up Neo..." />

Your Web browser-based notepad is ready.
—Chintan Umarani, [email protected]

How to find a swap partition or file in Linux
Swap space can be a dedicated swap partition, a swap file, or a combination of swap partitions and swap files. To find a swap partition or file in Linux, use the following command:

swapon -s

Or…

cat /proc/swaps

…and the output will be something like what's shown below:

Filename        Type        Size      Used    Priority
/dev/sda5       partition   2110460   0       -1

Here, the swap is a partition and not a file.
—Sharad Chhetri, [email protected]

…on a per-process basis, use the following command:

pidstat -d -h -r -u -p
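The table printed by swapon -s and /proc/swaps is plain whitespace-separated text, so it is also easy to post-process from a script. Here is a small Python sketch (our own illustration, not part of the tip; the sample string mirrors the output shown above):

```python
# Parse /proc/swaps-style output into records.
sample = """Filename Type Size Used Priority
/dev/sda5 partition 2110460 0 -1
"""

swaps = []
for line in sample.splitlines()[1:]:  # skip the header row
    name, kind, size, used, prio = line.split()
    swaps.append({"name": name, "type": kind,
                  "size_kb": int(size), "used_kb": int(used),
                  "priority": int(prio)})

print(swaps[0]["type"])     # partition
print(swaps[0]["size_kb"])  # 2110460
```

In real use, sample would be replaced by the contents of /proc/swaps, e.g., via open("/proc/swaps").read().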