35 Experimenting with More Functions in Haskell
40 Understanding the Document Object Model (DOM) in Mozilla
45 Introducing AngularJS
48 Use Bugzilla to Manage Defects in Software
52 An Introduction to Device Drivers in the Linux Kernel
56 Creating Dynamic Web Portals Using Joomla and WordPress
Compile a GPIO Control Application and Test It On the Raspberry Pi

Admin
59 Use Pound on RHEL to Balance the Load on Web Servers
63 Why We Need to Handle Bounced Emails
67 Boost the Performance of CloudStack with Varnish
74 Use Wireshark to Detect ARP Spoofing
77 Make Your Own PBX with Asterisk

Open Gurus
80 How to Make Your USB Boot with Multiple ISOs
86 Contiki OS: Connecting Microcontrollers to the Internet of Things

REGULAR FEATURES
08 You Said It...
09 Offers of the Month
10 New Products
13 FOSSBytes
25 Editorial Calendar
100 Tips & Tricks
105 FOSS Jobs
YOU SAID IT Online access to old issues I want all the issues of OSFY from 2011, right up to the current issue. How can I get these online, and what would be the cost? —c kiran kumar; [email protected] ED: It feels great to know that we have such valuable readers. Thank you, Kiran, for bringing this request to us. You can avail all the back issues of Open Source For You in e-zine format from www.ezines.efyindia.com
Request for a sample issue I am with a company called Relia-Tech, which is a brick-and-mortar computer service company. We are interested in subscribing to your magazine. Would you be willing to send us a magazine to check out before we commit to anything? —Lindsay Steele; [email protected] ED: Thanks for your mail. You can visit our website www.ezine.lfymag.com and access our sample issue.
A ‘thank-you’ and a request for more help I began reading your magazine in my college library and thought of offering some feedback. I was facing a problem with Oracle Virtual Box, but after reading an article on the topic in OSFY, the task became so easy. Thanks for the wonderful help. I am also trying to set up my local (LAN-based) GIT server. I have no idea how to set it up. I have worked a little with GitHub. I do wish your magazine would feature content on this topic in upcoming editions. —Abhinav Ambure; [email protected] ED: Thank you so much for your valuable feedback. We really value our readers and are glad that our content proves
helpful to them. We will surely look into your request and try to include the topic you have asked for in upcoming issues. Keep reading OSFY and continue sending us your feedback!
Annual subscription I’ve bought the July 2014 issue of OSFY and I loved it. I want the latest version of Ubuntu 14.04 LTS and the programming tools (JDK and other tools for C, C++, Java and Python). Also, how can I subscribe to your magazine for one year, and can I get it at my village (address enclosed)? —Parveen Kumar; [email protected] ED: Thank you for the compliments. We're glad to know that you enjoy reading our magazine. We will definitely look into your request. Also, I am forwarding your query regarding subscribing to the magazine to the concerned team. Please feel free to get back to us in case of any other suggestions or questions. We're always happy to help.
Availability of OSFY in your city I want to purchase Open Source For You for the library in my organisation but I am unable to find copies in the city I live in (Jabalpur in Madhya Pradesh). I cannot go in for the subscription as well. Please give me the name of the distributor or dealer in my city through whom I can purchase the magazine. —Gaurav Singh; [email protected] ED: We have a website where you can locate the nearest store in your city that supplies Open Source For You. Do log on to http://ezine.lfymag.com/listwholeseller.asp. You will find there are two dealers of the magazine in your city: Sahu News Agency (Sanjay Sahu, Ph: 09301201157) and Janta News Agency (Harish, Ph: 09039675118). They can ensure regular supply of the magazine to your organisation.
Please send your comments or suggestions to:
The Editor, Open Source For You, D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020, Phone: 011-26810601/02/03, Fax: 011-26817563, Email: [email protected]
OFFERS OF THE MONTH

CloudOye (www.cloudoye.com)
Rs 2,000 coupon: no conditions attached for a trial of our cloud platform. Offer valid till 30th September 2014! For more information, call us on 1800-212-2022 / +91-120-666-7718.

ESDS (www.esds.co.in)
One month free (free trial coupon): free dedicated server hosting for one month. Subscribe for our annual package of dedicated server hosting and enjoy one month of free service. Offer valid till 30th September 2014! For more information, call us on 1800-209-3006 / +91-253-6636500.

Reseller package special offer
Get 10% discount, off and more: free dedicated hosting/VPS for one month. Subscribe for an annual package of dedicated hosting/VPS and get one month FREE. Offer valid till 30th September 2014! Contact us at 09841073179 or write to [email protected].

Red Hat training offer
Get 35% off on course fees, and if you appear for two Red Hat exams, the second shot is free. Offer valid till 30th September 2014!

GoForHosting (www.goforhosting.com)
Get 25% off: pay annually and get 12 months of free services on dedicated server hosting. Subscribe for the annual package of dedicated server hosting and enjoy the next 12 months of free services. Offer valid till 30th September 2014! For more information, call us on 1800-212-2022 / +91-120-666-7777.

Pack Web Hosting ProX (www.prox.packwebhosting.com)
Time to go PRO now. Considering VPS or a dedicated server? Save big and go with our ProX plans: 25% off on ProX plans, ideal for running high-traffic or e-commerce websites. Coupon code: OSFY2014. Offer valid till 30th September 2014! Contact us at 98769-44977 or write to [email protected].

Embedded software development courses and workshops
Pay the most competitive fee. Embedded RTOS: Architecture, Internals and Programming, on the ARM platform. Date: 20-21 September 2014 (a two-day programme). Faculty: Mr Babu Krishnamurthy, visiting faculty, CDAC/ACTS, with 18 years of industry and faculty experience. Contact us at +91-98453-65845 or write to [email protected].

To advertise here, contact Omar on +91-995 888 1862 or 011-26810601/02/03, or write to [email protected].
FOSSBYTES Powered by www.efytimes.com
Ubuntu 14.04.1 LTS is out
Ubuntu 14.04 LTS has been around for quite some time now and most people must have upgraded to it. Another, smaller update is now ready – 14.04.1. Canonical has announced that this Ubuntu update fixes many bugs and includes security updates. There is also a list of bugs and other updates in Ubuntu 14.04.1 that you might want to have a look at, in order to see the scope of this update. If you haven’t upgraded to 14.04.1 yet, do so as soon as possible. It is a worthy upgrade if you use an older version of Ubuntu.
Android Device Manager makes it easier to search for lost phones!
Google has released an update to Android Device Manager that gives users better security for lost devices. The latest version, 1.3.8, lets users add a phone number to the remote lock screen and change the lock screen password remotely; an optional message can also be set up. If a phone number is added, a big green ‘Call owner’ button appears on the lock screen, so that whoever finds a lost phone can easily contact its owner. Earlier, only a message could be added. The call-back number can be set up through the Android Device Manager app, or through the Web interface if another Android device is not at hand. Both the message and call-back features are optional, but using them is highly recommended so that a lost phone can be easily recovered.
Ubuntu’s Amazon shopping feature complies with UK Data Protection Act
The independent body investigating the implementation of Ubuntu’s Unity Shopping Lens feature and its compliance with the UK Data Protection Act (DPA) of 1998 has found no instances of Canonical being in breach of the act. Ubuntu’s controversial ‘Amazon shopping’ feature has been found to be compliant with relevant data protection and privacy laws in the UK, something that was checked in response to a complaint filed by blogger Luis de Sousa last year. Notably, the feature sends out queries made in the Dash to an intermediary Canonical server, which sends it forward to Amazon. The e-commerce giant then returns product suggestions matching the query back to the Dash. The feature also sends across non-identifiable location data out in the process.

According to Sousa, the Shopping Lens implementation “…contravened a 1995 EU Directive on the protection of users’ personal data.” Sousa had provided a number of instances to put forward his point. Initially, Sousa began by reaching out to Canonical for clarification but to no avail. He was finally forced to file a complaint with the Information Commissioner’s Office regarding his security concerns. Finally, the ICO responded to Sousa’s need for clarification by clearly stating that the Shopping Lens feature complies with the DPA (Data Protection Act) very well and in no way breaches users’ privacy.
VLC 2.1.5 has been released
VideoLAN has announced the release of the final update in the 2.1.x series of its popular open source, cross-platform media player and streaming media server: the VLC media player. VLC 2.1.5 is now available for download and installation on Windows, Mac and Linux operating systems. Notably, the next big release for the VLC media player will be that of the 2.2.x branch. A careful look at the change log reveals that although the VLC 2.1.5 update has been released across multiple platforms, the most noticeable improvements are for OS X users. Others could consider it as a minor update. For OS X users, VLC 2.1.5 brings about additional stability to the Qtsound capture module as well as improved support for Reti. Other notable changes (for the OS X platform) include compilation fixes for OS/2 operating systems. Also, MP3 file conversions will no longer be renamed ‘.raw’ under the Qt interface following the update. A few decoder fixes will now benefit DxVA2 sample decoding, MAD resistance in broken MP3 streams and PGS alignment tweaks for MKV. In terms of security, the new release comes with fixes for GNU TLS and libpng as well. One should remember that VLC is a portable, free and open source, cross-platform media player and streaming media server written by the VideoLAN project that supports many audio and video compression methods and file formats. It comes with a large number of free decoding and encoding libraries, thereby eliminating the need of finding or calibrating proprietary plugins.
Here’s what’s new in Linux 3.16

The founder of Linux, Linus Torvalds, recently announced the release of the stable build of Linux 3.16. This version is known to developers as ‘Shuffling Zombie Juror’. There are a host of improvements and new features in this new stable build of Linux, including new and improved drivers, and some complex internal improvements like a unified control hierarchy. This new stable Linux 3.16 will form the basis of the Ubuntu 14.10 kernel; LTS users will get this update once the 14.10 kernel is released.
Shutter 0.92 for Linux released; fixes a number of bugs

Users have had some trouble using the popular Shutter screenshot tool for Linux, owing to the many irritating bugs and stability issues that came along. But they are in for a pleasant surprise, as developers have now released a new bug-fix version of the tool that aims to address some of its more prominent issues. The new bug-fix release—Shutter 0.92—is now available for download for the Linux platform, and a number of stability issues have been dealt with for good.
Open source community irked by broken Linux kernel patches
One of the many fine threads that bind the open source community is avid participation and cooperation between developers across the globe, with the common goal of improving the Linux kernel. However, not everyone is actually trying to help out there, as recent happenings suggest. Trolls exist even in the Linux community, and one that has managed to make a big impression is Nick Krause. Krause’s recent antics have led to significant bouts of frustration among Linux kernel maintainers. Krause continuously tries to get broken patches past the maintainers—only his goals are not very clear at the moment. Many developers believe that Krause aims to damage the Linux kernel. While that might be a distant dream for him (at least for now), he has managed to irk quite a lot of people, slowing down the whole development process because of the need to keep fixing broken patches introduced by him.
Calendar of forthcoming events

4th Annual Datacenter Dynamics Converged; September 18, 2014; Bengaluru
The event aims to assist the community in the data centre domain by exchanging ideas, accessing market knowledge and launching new initiatives.

Gartner Symposium IT Xpo; October 14-17, 2014; Grand Hyatt, Goa
CIOs and senior IT executives from across the world will gather at this event, which offers talks and workshops on new ideas and strategies in the IT industry.
Website: http://www.gartner.com

Open Source India; November 7-8, 2014; NIMHANS Center, Bengaluru
Asia’s premier open source conference, which aims to nurture and promote the open source ecosystem across the sub-continent.

CeBIT India
One of the world’s leading business IT events, offering a combination of services and benefits that will strengthen the Indian IT and ITES markets.
Website: http://www.cebit-india.com/

5th Annual Datacenter Dynamics Converged; December 9, 2014; Riyadh
The event aims to assist the community in the data centre domain by exchanging ideas, accessing market knowledge and launching new initiatives.

Hostingconindia; December 12-13, 2014; NCPA, Jamshedji Bhabha Theatre, Mumbai
This event will be attended by Web hosting companies, Web design companies, domain and hosting resellers, ISPs and SMBs from across the world.
Website: http://www.hostingcon.com/contact-us/
Oracle launches Solaris 11.2 with OpenStack support
Oracle Corp recently launched the latest version of its Solaris enterprise UNIX platform: Solaris 11.2. Notably, this new version had been in beta since April. The latest release comes with several key enhancements—the support for OpenStack as well as software-defined networking (SDN). Additionally, there are various security, performance and compliance enhancements introduced in Oracle’s new release. Solaris 11.2 comes with OpenStack integration, which is perhaps its most crucial enhancement. The latest version runs the most recent version of the popular toolbox for building clouds: OpenStack Havana. Meanwhile, the inclusion of software-defined networking (SDN) support is seen as Oracle’s ongoing effort to transform its Exalogic Elastic Cloud into one-stop data centres. Until now, Exalogic boxes were being increasingly used in the form of massive servers or for transaction processing. They were therefore not fulfilling their real purpose, which is to work as cloud-hosting systems. However, with SDN support added, Oracle is aiming to change all this. Oracle plans to directly take on network equipment makers like Cisco, Hewlett-Packard and Brocade with the introduction of Solaris 11.2. Enterprises using Solaris can now simply purchase a handful of Solaris boxes and run their mission-critical clouds. In addition, they can also use bits of OpenStack without acquiring additional hardware.
Android-x86 4.4 R1 Linux distro available for download and testing
The team behind Android-x86 recently launched version 4.4 R1 of the port of the Android OS designed specifically for the x86 platform. Android-x86 4.4 KitKat is now available for download and testing on the Linux platform for your PC. Android is actually based on a modified Linux kernel, with many believing it to be a stand alone Linux distribution in its own right. With that said, developers have managed to tweak Android to make it port to the PC for the x86 platforms; that’s what Android-x86 is really all about.
Canonical launches Ubuntu 12.04.5 LTS
Marking its fifth point release, Canonical has announced that Ubuntu 12.04.5 LTS is available for download and installation. Ubuntu 12.04 LTS was first released back in April 2012, and Canonical will continue supporting the LTS until 2017 with regular updates from time to time. Also, this is the first major release for Canonical since the debut of Ubuntu 14.04 LTS earlier this year. The most notable improvement in the new release is the inclusion of an updated kernel (3.13) and X.org stack, both backported from Ubuntu 14.04 LTS. The new release is out now for desktop, server, cloud and core products, as well as other flavours of Ubuntu with long-term support. In addition, the new release also comes with ‘security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 12.04 LTS.’ Meanwhile, Kubuntu 12.04.5 LTS, Edubuntu 12.04.5 LTS and Ubuntu Studio 12.04.5 LTS are also available for download.
Linux Mint Debian edition to switch from snapshot cycle to Debian stable package base

The team behind Linux Mint has decided to let go of the current snapshot cycle in the Debian edition of the Linux distribution and instead switch over to a Debian stable package base. The current Linux Mint editions are based on Ubuntu, and the team is most likely to stick to that for at least a couple of years. The team recently launched the latest iteration of Linux Mint, a.k.a. ‘Qiana’. Both the Cinnamon and Mate versions are now available for download, with the KDE and XFCE versions expected to come out soon. Meanwhile, it has been announced that the next three Linux Mint releases would also, in all probability, be based on Ubuntu 14.04 LTS.

Storm Energy’s SunSniffer charmed by Raspberry Pi!

The humble Raspberry Pi single board computer is indeed going places, receiving critical acclaim for, well, being downright awesome. The latest to be smitten by it is the German company, Storm Energy, which builds products like SunSniffer, a solar plant monitoring system. The SunSniffer system is designed to monitor photovoltaic (PV) solar power installations of varied sizes. The company has now upgraded the system to a Linux-based platform running on a Raspberry Pi. In addition to this, the latest SunSniffer version also comes with a custom expansion board and customised Linux OS. The SunSniffer is IP65-rated, and the new Connection Box’s custom Raspberry Pi expansion board comes with five RS-485 ports and eight analogue/digital I/O interfaces to help simultaneously monitor a wide variety of solar inverters (Refusol, Huawei and Kostal, among others). In short, the new system can remotely control solar inverters via a radio ripple control receiver, as against earlier versions where users could only monitor their data. The Raspberry Pi-laden SunSniffer also offers SSL encryption and optional integrated anti-theft protection.
Italian city of Turin switching to open source technology
In a recent development, the Italian city of Turin is considering ditching all Microsoft products in favour of open source alternatives. The move is directly aimed at cutting government costs, while not compromising on functionality. If at all Turin gets rid of all proprietary software, it will go on to become one of the first Italian ‘open source cities’ and save itself at least a whopping six million Euros. A report suggests that as many as 8,300 computers of the local administration in Turin will soon have Ubuntu under the hood and will be shipped with the Mozilla Firefox
Web browser and OpenOffice—the two joys of the open source world. The local government has argued that a large amount of money is spent on buying licences for proprietary software, wasting a lot of the local taxpayers’ money. A decision to drop Microsoft in favour of cost-effective open source alternatives therefore seems a viable option.
LibreOffice coming to Android
LibreOffice needs no introduction. The Document Foundation’s popular open source office suite is widely used by millions of people across the globe. Therefore, news that the suite could soon be launched on Android is something to watch out for. You heard that right! A new report by Tech Republic suggests that the Document Foundation is currently on a rigorous workout to make this happen. However, as things stand, there is still some time before that happens for real. Even as the Document Foundation came out with the first Release Candidate (RC) version of the upcoming LibreOffice 4.2.5 recently (it has been quite consistent in updating its stable version on a timely basis), work is on to make LibreOffice available for Google’s much loved Android platform as well, the report says. The buzz is that developers back home are currently talking about (and working at) getting the file size right, that is, something well below the Google limit. Until they are able to do that, LibreOffice for Android is a distant dream, sadly. However, as and when this happens, LibreOffice would be in direct competition with Google Docs. Since there is a genuine need for Open Document Format (ODF) support in Android, the release might just be what the doctor ordered for many users. This is more of a rumour at the moment, and things will get clearer in time. There is no official word from either Google or the Document Foundation about this, but we will keep you posted on developments. The recent release – the LibreOffice 4.2.5 RC1—meanwhile tries to curb many key bugs that plagued the last 4.2.4 final release. This, in turn, has improved its usability and stability to a significant extent.
RHEL 6.6 beta is released; draws major inspiration from RHEL 7
Just so RHEL 6.x users (who wish to continue with this branch of the distribution for a bit longer) don’t feel left out, Red Hat has launched a beta release of its Red Hat Enterprise Linux 6.6 (RHEL 6.6) platform. Taking much of its inspiration from the recently released RHEL 7, the move is directed towards RHEL 6.x users so that they benefit from new platform features. At the same time, it comes with some real cool features that are quite independent of RHEL 7 and which make 6.6 beta stand out on its own merits. Red Hat offers Application Binary Interface (ABI) compatibility for RHEL for a period of ten years, so technically speaking, it cannot drastically change major elements of an in-production release. Quite simply put, it can’t and won’t change an in-production release in a way that could alter stability or existing compatibility. This would eventually mean that the new release on offer cannot go much against the tide with respect to RHEL 6. Although the feature list for RHEL 6.6 beta ties in closely with the feature list of the major release (6.0), it doesn’t mean RHEL 6.6 beta is simply old wine served in a new bottle. It does manage to introduce some key improvements for RHEL 6.x users. To begin with, RHEL 6.6 beta includes some features that were first introduced with RHEL 7, the most notable being Performance Co-Pilot (PCP). The new beta release will also offer RHEL 6.x users more integrated Remote Direct Memory Access (RDMA) capabilities.
Khronos releases OpenGL NG
The Khronos Group recently announced the release of the latest iteration of OpenGL (the oldest high-level 3D graphics API still in popular use). Although OpenGL 4.5 is a noteworthy release in its own right, the Group’s second major release in the next generation OpenGL initiative is garnering widespread appreciation. While OpenGL 4.5 is what some might call a fairly standard annual OpenGL update, OpenGL NG is a complete rebuild of the OpenGL API, designed with the idea of building an entirely new version of OpenGL. This new version will have a significantly reduced overhead owing to the removal of a lot of abstraction. Also, it will do away with the major inefficiencies of older versions when working at a low level with the bare metal GPU hardware. Being a very high-level API, earlier versions of OpenGL made it hard to efficiently run code on the GPU directly. While this didn’t matter so much earlier, now things have changed. Fuelled by more mature GPUs, developers today tend to ask for graphics APIs that allow them to get much closer to the bare metal. The next generation OpenGL initiative is directed at developers who are looking to improve performance and reduce overhead.
Dropbox’s updated Android App offers improved features
A major update to its official Android app has been announced by Dropbox, and is available on Google Play. The new update carries version number 2.4.3 and comes with a lot of improved features. As the Google Play listing suggests, this new Dropbox version supports in-app previews of Word, PowerPoint and PDF files. A better search experience is also offered in this new version, which tracks recent queries and displays suggestions. One can also now search within specific folders.
Buyers’ Guide
Motherboards
The Lifeline of Your Desktop

If you are a gamer, or like to customise your PC and build it from scratch, the motherboard is what you require to link all the important and key components together. Let’s find out how to select the best desktop motherboards.
The central processing unit (CPU) can be considered the brain of a system or, in layman’s language, a PC, but it still needs a ‘nervous system’ to connect it with all the other components in your PC. A motherboard plays this role: all the components are attached to it, and to each other, with the help of this board. It can be defined as a PCB (printed circuit board) that has the capability of expanding. As the name suggests, a motherboard is believed to be the ‘mother’ of all the components attached to it, including network cards, sound cards, hard drives, TV tuner cards, slots, etc. It holds the most significant sub-systems—the processor along with other important components. Motherboards are found in many electronic devices, like TVs, washing machines and other embedded systems. Since it provides the electrical connections through which other components are connected and linked with each other, it needs the most attention. Unlike a backplane, it hosts other devices and subsystems and also contains the central processing unit. There are quite a lot of companies that deal in motherboards, and Simmtronics is among the leading players. According to Dr Inderjeet Sabbrawal, chairman, Simmtronics, “Simmtronics has been one of the exclusive manufacturers of motherboards in the hardware industry over the last 20 years. We strongly believe in creativity, innovation and R&D. Currently, we are fulfilling our commitment to provide the latest mainstream motherboards. At Simmtronics, the quality of the motherboards is strictly controlled. At present, the market is not growing.… India still has a varied market for older generation models as well as the latest models of motherboards.”
Factors to consider while buying a motherboard
In a desktop, several essential units and components are attached directly to the motherboard, such as the microprocessor, main memory, etc. Other components, such as the external storage controllers for sound and video display and various peripheral devices, are attached to it through slots, plug-in cards or cables. There are a number of factors to keep in mind while buying a motherboard, and these depend on the specific requirements. Linux is slowly taking over the PC world and, hence, people now look for Linux-supported motherboards. As a result, almost every motherboard now supports Linux. The many factors to keep in mind when buying a Linux-supported motherboard are discussed below.
CPU socket
The central processing unit is the key component of a motherboard and its performance is primarily determined by the kind of processor it is designed to hold. The CPU socket can be defined as an electrical component that connects or attaches to the motherboard and is designed to house a microprocessor. So, when you’re buying a motherboard, you should look for a CPU socket that is compatible with the CPU you have planned to use. Most of the time, motherboards use one of the following five sockets -- LGA1155, LGA2011, AM3, AM3+ and FM1. Some of the sockets are backward compatible and some of the chips are interchangeable. Once you opt for a motherboard, you will be limited to using the processors that offer similar specifications.
Form factor
A motherboard’s capabilities are broadly determined by its shape, size and how much it can be expanded – these aspects are known as form factors. Although there is no fixed design or form for motherboards, and they are available in many variations, two form factors have always been the favourites -- ATX and microATX. An ATX motherboard measures around 30.5cm x 23cm (12 x 9 inches) and offers the highest number of expansion slots, RAM bays and data connectors. MicroATX motherboards measure 24.38cm x 24.38cm (9.6 x 9.6 inches) and have fewer expansion slots, RAM bays and other components. The form factor of a motherboard can be chosen according to the purpose the motherboard is expected to serve.
RAM bays
Random access memory (RAM) is the most important workspace connected to a motherboard: it is where data is held while being processed, after being read from the hard disk drive or solid state drive. The efficiency of your PC directly depends on the speed and size of your RAM. The more RAM you have, the more efficient your computing will be. But it’s no use having RAM with greater capability than your motherboard can support, as the extra potential will simply be wasted. Nor should the RAM be less capable than what the motherboard expects, as the PC will then not work well due to the bottlenecks caused by mismatched capabilities. Choosing a motherboard which supports just the right RAM is vital. Apart from these factors, there are many others to consider before selecting a motherboard. These include the audio system, display, LAN support, expansion capabilities and peripheral interfaces.
A few desktop motherboards with the latest chipsets

Intel DZ87KLT-75K
Supported CPU: Fourth generation Intel Core i7 and Core i5 processors, and other Intel processors in the LGA1150 package
Memory supported: Up to 32GB of system memory; dual channel DDR3 2400+ MHz, DDR3 1600/1333 MHz
Form factor: ATX

Asus Z87-K
Supported CPU: Fourth generation Intel Core i7 and Core i5 processors, and other Intel processors
Memory supported: Dual channel memory architecture; supports Intel XMP
Form factor: ATX

Simmtronics SIMM-INT H61 (V3)
CPU supported: Intel 2nd and 3rd generation Core i7/i5/i3, Pentium and Celeron
Main memory supported: Dual channel DDR3 1333/1066
BIOS: 1 x 32MB Flash ROM
Connectors: 1 x 4-pin ATX 12V power connector
Chipset: Intel H61 (B3 version)

Gigabyte Technology GA-Z87X-OC
CPU supported: Fourth generation Intel Core i7 and Core i5 processors, and other Intel processors
Memory supported: Supports DDR3 3000
Form factor: MicroATX
By: Manvi Saxena The author is a part of the editorial team at EFY.
CodeSport
Sandya Mannarswamy
In this month’s column, we continue our discussion on natural language processing.
For the past few months, we have been discussing information retrieval and natural language processing, as well as the algorithms associated with them. This month, we continue our discussion on natural language processing (NLP) and look at how NLP can be applied in the field of software engineering. Given one or many text documents, NLP techniques can be applied to extract information from them. The software engineering (SE) lifecycle gives rise to a number of textual documents, to which NLP can be applied. So what are the software artifacts that arise in SE? During the requirements phase, a requirements document is an important textual artifact. This specifies the expected behaviour of the software product being designed, in terms of its functionality, user interface, performance, etc. It is important that the requirements being specified are clear and unambiguous, since during product delivery, customers would like to confirm that the delivered product meets all their specified requirements. Vague, ambiguous requirements can hamper requirement verification. So text analysis techniques can be applied to the requirements document to determine whether there are any ambiguous or vague statements. For instance, consider a statement like, “Servicing of user requests should be fast, and request waiting time should be low.” This statement is ambiguous since it is not clear what exactly the customer’s expectations of ‘fast service’ or ‘low waiting time’ may be. NLP tools can detect such ambiguous requirements. It is also important that there are no logical inconsistencies in the requirements. For instance, a requirement that “Login names should allow a maximum of 16 characters,” and another that “The login database will have a field for login names which is 8 characters wide,” conflict with each other. While the user interface allows up to a maximum of 16 characters, the backend login database will support fewer characters, which is inconsistent with the earlier requirement. Though currently such inconsistent requirements are flagged by human inspection, it is possible to design text analysis tools to detect them.
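As a concrete illustration of the first idea, here is a minimal sketch in Python. It is an invented toy, not one of the research tools referred to above, and the list of vague terms is purely an assumption for the example:

# A naive ambiguity detector: flag requirement sentences that use
# vague, unverifiable terms. The term list is illustrative only.
VAGUE_TERMS = {"fast", "slow", "low", "high", "quickly",
               "user-friendly", "efficient", "flexible", "adequate"}

def flag_ambiguous(requirements):
    # Return (sentence, matched terms) pairs for suspect requirements.
    flagged = []
    for sentence in requirements:
        words = {w.strip(".,;:").lower() for w in sentence.split()}
        hits = sorted(VAGUE_TERMS & words)
        if hits:
            flagged.append((sentence, hits))
    return flagged

reqs = ["Servicing of user requests should be fast.",
        "Login names should allow a maximum of 16 characters."]
for sentence, hits in flag_ambiguous(reqs):
    print("%s -> %s" % (sentence, ", ".join(hits)))

A real requirements checker would add part-of-speech tagging and domain dictionaries, but even this toy version flags the first sentence while letting the precise, testable second one through.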
The software design phase also produces a number of SE artifacts such as the design document, design models in the form of UML documents, etc, which also can be mined for information. Design documents can be analysed to generate automatic test cases in order to test the final product. During the development and maintenance phases, a number of textual artifacts are generated. Source code itself can be considered as a textual document. Apart from source code, source code control system logs such as SVN/GIT logs, Bugzilla defect reports, developers’ mailing lists, field reports, crash reports, etc, are the various SE artifacts to which text mining can be applied. Various types of text analysis techniques can be applied to SE artifacts. One popular method is duplicate or similar document detection. This technique can be applied to find out duplicate bug reports in bug tracking systems. A variation of this technique can be applied to code clones and copy-and-paste snippets. Automatic summarisation is another popular technique in NLP. These techniques try to generate a summary of a given document by looking for the key points contained in it. There are two approaches to automatic summarisation. One is known as ‘extractive summarisation’, using which key phrases and sentences in the given document are extracted and put back together to provide a summary of the document. The other is the ‘abstractive summarisation’ technique, which is used to build an internal semantic representation of the given document, from which key concepts are extracted, and a summary generated using natural language understanding. The abstractive summarisation technique is close to how humans would summarise a given document. Typically, we would proceed by building a knowledge representation of the document in our minds and then using our own words to provide a summary of the key concepts. Abstractive summarisation is obviously more complex than extractive summarisation, but yields better summaries. Coming to SE artifacts, automatic summarisation techniques can be applied to generate summaries of large bug reports. They can also be applied to generate high-level comments of methods contained in source code.
In this case, each method can be treated as an independent document, and the high-level comment associated with that method or function is nothing but a short summary of the method. Another popular text analysis technique involves the use of language models, which enable predicting what the next word in a particular sentence would be. This technique is typically used in optical character recognition (OCR) generated documents where, due to OCR errors, the next word is not visible or gets lost, and hence the tool needs to make a best-case estimate of the word that may appear there. A similar need also arises in the case of speech recognition systems. In case of poor speech quality, when a sentence is being transcribed by the speech recognition tool, a particular word may not be clear or could get lost in transmission. In such a case, the tool needs to predict what the missing word is and add it automatically. Language modelling techniques can also be applied in integrated development environments (IDEs) to provide ‘auto-completion’ suggestions to developers. Note that in this case, the source code itself is being treated as text and analysed. Classifying a set of documents into specific categories is another well-known text analysis technique. Consider a large number of news articles that need to be categorised based on topics or their genre, such as politics, business, sports, etc. A number of well-known text analysis techniques are available for document classification. Document classification techniques can also be applied to defect reports in SE to classify the category to which a defect belongs. For instance, security related bug reports need to be prioritised. While people currently inspect bug reports, or search for specific key words in a bug category field in Bugzilla reports in order to classify bug reports, more robust and automated techniques are needed to classify defect reports in large scale open source projects. Text analysis techniques for document classification can be employed in such cases. Another important need in the SE lifecycle is to trace source code to its origin in the requirements document. If a feature ‘X’ is present in the source code, what is the requirement ‘Y’ in the requirements document which necessitated the development of this feature? This is known as traceability of source code to requirements. As source code evolves over time, maintaining traceability links automatically through tools is essential to scale out large software projects. Text analysis techniques can be employed to connect a particular requirement from the requirements document to a feature in the source code, and hence automatically generate the traceability links. We have now covered automatic summarisation techniques for generating summaries of bug reports and generating header level comments for methods. Another possible use for such techniques in SE artifacts is to enable the automatic generation of user documentation associated with that software project. A number of text mining techniques have been employed to mine ‘stack overflow’ mailing lists to generate automatic user documentation or FAQ documents for different software projects. Regarding the identification of inconsistencies in the requirements document, inconsistency detection techniques can be applied to source code comments also. It is a general expectation that source code comments express the programmer’s
intent. Hence, the code written by the developer and the comment associated with that piece of code should be consistent with each other. Consider the simple code sample shown below:

/* linux/drivers/scsi/in2000.c */
/* caller must hold instance lock */
static int reset_hardware(...)
{
    ....
}

static int in2000_bus_reset(...)
{
    .....
    reset_hardware();
    ...
}
In the above code snippet, the developer has expressed the intention that ‘instance_lock’ must be held before the function ‘reset_hardware’ is called as a code comment. However, in the actual source code, the lock is not acquired before the call to ‘reset_hardware’ is made. This is a logical inconsistency, which can arise either due to: (a) comments being outdated with respect to the source code; or (b) incorrect code. Hence, flagging such errors is useful to the developer who can fix either the comment or the code, depending on which is incorrect.
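Returning to the summarisation techniques discussed earlier, the extractive approach is simple enough to sketch in a few lines of Python. This is a deliberately crude, frequency-based scorer, shown only to make the 'extract and reassemble' idea concrete; production summarisers are far more sophisticated:

import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    # Split into sentences and words in the crudest possible way.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    # Score each sentence by the frequency of the words it contains.
    def score(s):
        return sum(freq[w] for w in re.findall(r'[a-z]+', s.lower()))
    top = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Reassemble the chosen sentences in their original order.
    return ' '.join(s for s in sentences if s in top)

Applied to a long bug report, such a scorer pulls out the sentences that share the most vocabulary with the report as a whole: a rough but serviceable first approximation to a summary.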
My ‘must-read book’ for this month
This month’s book suggestion comes from one of our readers, Sharada, and her recommendation is very appropriate to the current column. She recommends an excellent resource for natural language processing—a book called, ‘Speech and Language Processing: An Introduction to Natural Language Processing’ by Jurafsky and Martin. The book describes different algorithms for NLP techniques and can be used as an introduction to the subject. Thank you, Sharada, for your valuable recommendation. If you have a favourite programming book or article that you think is a must-read for every programmer, please do send me a note with the book’s name, and a short write-up on why you think it is useful so I can mention it in the column. This would help many readers who want to improve their software skills. If you have any favourite programming questions/software topics that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_yahoo_DOT_com. Till we meet again next month, happy programming!
By: Sandya Mannarswamy

The author is an expert in systems software and is currently working with Hewlett Packard India Ltd. Her interests include compilers, multi-core and storage systems. If you are preparing for systems software interviews, you may find it useful to visit Sandya's LinkedIn group ‘Computer Science Interview Training India’ at http://www.linkedin.com/groups?home=&gid=2339182
Exploring Software
Anil Seth
Exploring Big Data on a Desktop: Getting Started with Hadoop

Hadoop is a large scale, open source storage and processing framework for data sets. In this article, the author sets up Hadoop on a single node, takes the reader through testing it, and later tests it on multiple nodes.
Fedora 20 makes it easy to install Hadoop. Version 2.2 is packaged and available in the standard repositories. It will place the configuration files in /etc/hadoop, with reasonable defaults so that you can get started easily. As you may expect, managing the various Hadoop services is integrated with systemd.
First, start an instance, with name h-mstr, in OpenStack using a Fedora Cloud image (http://fedoraproject.org/get-fedora#clouds). You may get an IP like 192.168.32.2. You will need to choose at least the m1.small flavour, i.e., 2GB RAM and 20GB disk. Add an entry in /etc/hosts for convenience:

192.168.32.2 h-mstr
Now, install and test the Hadoop packages on the virtual machine by following the article, http://fedoraproject.org/wiki/Changes/Hadoop:

$ ssh fedora@h-mstr
$ sudo yum install hadoop-common hadoop-common-native hadoop-hdfs \
  hadoop-mapreduce hadoop-mapreduce-examples hadoop-yarn
It will download over 200MB of packages and take about 500MB of disk space. Create an entry in the /etc/hosts file for h-mstr using the name in /etc/hostname, e.g.:

192.168.32.2 h-mstr h-mstr.novalocal

Now, you can test the installation. First, run a script to create the needed hdfs directories:

$ sudo hdfs-create-dirs

Then, start the Hadoop services using systemctl:

$ sudo systemctl start hadoop-namenode hadoop-datanode \
  hadoop-nodemanager hadoop-resourcemanager

Create a directory with the right permissions for the user, fedora, to be able to run the test scripts:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -mkdir /user/fedora"
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -chown fedora /user/fedora"

You can find out the hdfs directories created as follows. The command may look complex, but you are running the 'hadoop fs' command in a shell as Hadoop's internal user, hdfs:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -ls /"

Disable the firewall and iptables, and run a mapreduce example. The first test is to calculate pi using 10 maps and 1,000,000 samples:

$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar pi 10 1000000

You can monitor the progress at http://h-mstr:8088/ (Figure 1 shows an example running on three nodes). It took about 90 seconds to estimate the value of pi to be 3.1415844.

In the next test, you create 10 million records of 100 bytes each, that is, 1GB of data (~1 min). Then, sort it (~8 min) and, finally, verify it (~1 min). You may want to clean up the directories created in the process:
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar teragen 10000000 gendata
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar terasort gendata sortdata
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar teravalidate sortdata reportdata
$ hadoop fs -rm -r gendata sortdata reportdata
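If you would like one more end-to-end check at this point, the same examples jar also carries the classic wordcount job. The input and output directory names below are arbitrary choices for this illustration:

$ hadoop fs -mkdir input
$ hadoop fs -put /etc/hadoop/core-site.xml input
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar wordcount input wcout
$ hadoop fs -cat wcout/part-r-00000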
Stop the Hadoop services before creating and working with multiple data nodes, and clean up the data directories:

$ sudo systemctl stop hadoop-namenode hadoop-datanode \
  hadoop-nodemanager hadoop-resourcemanager
$ sudo rm -rf /var/cache/hadoop-hdfs/hdfs/dfs/*
Figure 1: OpenStack-Hadoop
Testing with multiple nodes
The following steps simplify the creation of multiple instances. Generate ssh keys for password-less login from any node to any other node:

$ ssh-keygen
$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
In /etc/ssh/ssh_config, add the following to ensure that ssh does not prompt for authenticating a new host the first time you try to log in:

StrictHostKeyChecking no
In /etc/hosts, add entries for slave nodes yet to be created:
Delete the following lines from hdfs-site.xml:

<property>
  <name>dfs.safemode.extension</name>
  <value>0</value>
</property>
<property>
  <name>dfs.safemode.min.datanodes</name>
  <value>1</value>
</property>
Now, modify the configuration files located in /etc/hadoop. Edit core-site.xml and modify the value of fs.default.name by replacing localhost with h-mstr:

<property>
  <name>fs.default.name</name>
  <value>hdfs://h-mstr:8020</value>
</property>

Edit or create, if needed, the slaves file with the host names of the data nodes:

[fedora@h-mstr hadoop]$ cat slaves
h-slv1
h-slv2
Add the following lines to yarn-site.xml so that multiple node managers can be run:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>h-mstr</value>
</property>
Edit mapred-site.xml and modify the value of mapred.job.tracker by replacing localhost with h-mstr:

<property>
  <name>mapred.job.tracker</name>
  <value>h-mstr:8021</value>
</property>
Now, create a snapshot, Hadoop-Base. Its creation will take time. It may not give you an indication of an error if it runs out of disk space!
Launch instances h-slv1 and h-slv2 serially, using Hadoop-Base as the instance boot source. Launching the first instance from a snapshot is pretty slow. In case the IP addresses are not the same as your guess in /etc/hosts, edit /etc/hosts on each of the three nodes to the correct values. For your convenience, you may want to make entries for h-slv1 and h-slv2 in the desktop /etc/hosts file as well. The following commands should be run from Fedora on h-mstr. Reformat the namenode to make sure that the single node tests are not causing any unexpected issues:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop namenode -format"

Start the Hadoop services on h-mstr:

$ sudo systemctl start hadoop-namenode hadoop-datanode hadoop-nodemanager hadoop-resourcemanager
Start the datanode and yarn services on the slave nodes:

$ ssh -t fedora@h-slv1 sudo systemctl start hadoop-datanode hadoop-nodemanager
$ ssh -t fedora@h-slv2 sudo systemctl start hadoop-datanode hadoop-nodemanager

Create the hdfs directories and a directory for user fedora, as on a single node:

$ sudo hdfs-create-dirs

You can run the same tests again. Although you are using three nodes, the improvement in performance compared to the single node is not expected to be noticeable, as the nodes are running on a single desktop. The pi example took about one minute on the three nodes, compared to the 90 seconds taken earlier. Terasort took 7 minutes instead of 8.

Note: I used an AMD Phenom II X4 965 with 16GB RAM to arrive at the timings. All virtual machines and their data were on a single physical disk.

Both OpenStack and Mapreduce are a collection of interrelated services working together. Diagnosing problems, especially in the beginning, is tough, as each service has its own log files. It takes a while to get used to realising where to look. However, once these are working, it is incredible how easy they make distributed processing!

By: Dr Anil Seth

The author has earned the right to do what interests him. You can find him online at http://sethanil.com, http://sethanil.blogspot.com, and reach him via email at [email protected]
OSFY Magazine Attractions During 2014-15

Month          | Theme                               | Featured List                      | Buyers' Guide
March 2014     | Network Monitoring                  | Security                           | -
April 2014     | Android Special                     | Anti Virus                         | WiFi Hotspot Devices
May 2014       | Backup and Data Storage             | Certification                      | External Storage
June 2014      | Open Source on Windows              | Mobile Apps                        | UTMs for SMEs
July 2014      | Firewall and Network Security       | Web Hosting Solutions Providers    | MFD Printers for SMEs
August 2014    | Kernel Development                  | Big Data Solution Providers        | SSDs for Servers
September 2014 | Open Source for Start-ups           | Cloud                              | Android Devices
October 2014   | Mobile App Development              | Training on Programming Languages  | Projectors
November 2014  | Cloud Special                       | Virtualisation Solutions Providers | Network Switches and Routers
December 2014  | Web Development                     | Leading Ecommerce Sites            | AV Conferencing
January 2015   | Programming Languages               | IT Consultancy Service Providers   | Laser Printers for SMEs
February 2015  | Top 10 of Everything on Open Source | Storage Solutions Providers        | Wireless Routers
Improve Python Code by Using a Profiler
The line_profiler gives a line-by-line analysis of the Python code and can thus identify bottlenecks that slow down the execution of a program. By making modifications to the code based on the results of this profiler, developers can improve the code and refine the program.
Have you ever wondered which module is slowing down your Python program, and how to optimise it? Well, there are ‘profilers’ that can come to your rescue. Profiling, in simple terms, is the analysis of a program to measure the memory used by a certain module, the frequency and duration of function calls, and the time complexity of the same. Such profiling tools are termed profilers. This article will discuss the line_profiler for Python.
Installation
Installing pre-requisites: Before installing line_profiler, make sure you install these pre-requisites.

a) For Ubuntu/Debian-based systems (recent versions):

sudo apt-get install mercurial python python3 python-pip python3-pip cython cython3

b) For Fedora systems:

sudo yum install -y mercurial python python3 python-pip

Note:
1. I have used the '-y' argument to automatically install the packages tracked by the yum installer.
2. Mac users can use Homebrew to install these packages.

Cython is a pre-requisite because the source releases require a C compiler. If the Cython package is not found, or is too old in your current Linux distribution version, install it by running the following command in a terminal:

sudo pip install Cython
Note: Mac OS X users can install Cython using pip.
Cloning line_profiler: Let us begin by cloning the line_profiler source code from bitbucket. To do so, run the following command in a terminal:

hg clone https://bitbucket.org/robertkern/line_profiler
The above repository is the official line_profiler repository, with support for Python 2.4 - 2.7.x. For Python 3.x support, we will need to clone a fork of the official source code that provides Python 3.x compatibility for line_profiler and kernprof:

hg clone https://bitbucket.org/kmike/line_profiler

Figure 1: line_profiler output
Installing line_profiler: Navigate to the cloned repository by running the following command in a terminal:

cd line_profiler
To build and install line_profiler on your system, run the following command:

a) For the official source (supports Python 2.4 - 2.7.x):

sudo python setup.py install
b) For the forked source (supports Python 3.x):

sudo python3 setup.py install
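As an aside, line_profiler is also published on PyPI, so if you do not need the latest sources from the repositories above, the clone-and-build steps can usually be replaced by a single command:

sudo pip install line_profiler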
Using line_profiler

Adding profiler to your code: Since line_profiler has been designed to be used as a decorator, we need to decorate the specified function using a '@profile' decorator. We can do so by adding an extra line before a function, as follows:

@profile
def foo(bar):
    .....

Running line_profiler: Once the 'slow' module is profiled, the next step is to run the line_profiler, which will give a line-by-line computation of the code within the profiled function. Open a terminal, navigate to the folder where the '.py' file is located and type the following command:

kernprof.py -l example.py; python -m line_profiler example.py.lprof

Note: I have combined both the commands in a single line, separated by a semicolon ';', to immediately show the profiled results. You can run the two commands separately, or run kernprof.py with the '-v' argument to view the formatted result in the terminal. kernprof.py -l profiles the function in example.py line by line; the '-l' argument stores the result in a binary file with a .lprof extension (here, example.py.lprof). We then run line_profiler on this binary file by using the '-m line_profiler' argument. Here, '-m' is followed by the module name, i.e., line_profiler.

Case study: We will use the Gnome-Music source code for our case study. There is a module named _connect_view in the view.py file, which handles the different views (artists, albums, playlists, etc) within the music player. This module is reportedly running slow because a variable is initialised each time the view is changed. By profiling the source code, we get the following result:

Wrote profile results to gnome-music.lprof
Timer unit: 1e-06 s

File: ./gnomemusic/view.py
Function: _connect_view at line 211
Total time: 0.000627 s

Line #  Hits  Time  Per Hit  % Time  Line Contents
==================================================
   211                               @profile
   212                               def _connect_view(self):
   213     4   205     51.2    32.7      vadjustment = self.view.get_vadjustment()

In the above output, line no 213, vadjustment = self.view.get_vadjustment(), is called too many times, which makes the process slower than expected. After caching (initialising) it in the init function, we get the following result, tested under the same conditions. You can see that there is a significant improvement in the results (Figure 2).

Wrote profile results to gnome-music.lprof
Timer unit: 1e-06 s

File: ./gnomemusic/view.py
Function: _connect_view at line 211
Total time: 0.000466 s

Line #  Hits  Time  Per Hit  % Time  Line Contents
==================================================
   211                               @profile
   212                               def _connect_view(self):
   213     4    86     21.5    18.5      self._adjustmentValueId = vadjustment.connect(
   214     4   161     40.2    34.5          'value-changed',
   215     4   219     54.8    47.0          self._on_scrolled_win_change)

Figure 2: Optimised code line_profiler output

Note: If you make changes in the source code, you need to run kernprof and line_profiler again in order to profile the updated code and get the latest results.

Understanding the output

Here is an analysis of the output shown in the above snippets.
Function: Displays the name of the function that is profiled, and its line number.
Line #: The line number of the code in the respective file.
Hits: The number of times the code in the corresponding line was executed.
Time: The total amount of time spent in executing the line, in 'Timer units' (i.e., 1e-06 s here). This may vary from system to system.
Per hit: The average amount of time spent in executing the line once, in 'Timer units'.
% time: The percentage of time spent on a line with respect to the total amount of recorded time spent in the function.
Line contents: Displays the actual source code.
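To tie the whole workflow together, here is a minimal, self-contained script of the sort you could profile. The file name example.py and the function in it are hypothetical stand-ins, not code from Gnome-Music. Note that the @profile decorator is injected by kernprof at run time, so the script must be run through kernprof rather than directly:

# example.py - a hypothetical script to exercise line_profiler
@profile  # provided by kernprof at run time; plain 'python example.py' would raise NameError
def slow_sum(n):
    total = 0
    for i in range(n):  # the profiler reports hits and time for each of these lines
        total += i * i
    return total

if __name__ == '__main__':
    slow_sum(1000000)

Running kernprof.py -l example.py, followed by python -m line_profiler example.py.lprof, then prints a table like the ones shown above, with the loop body accounting for almost all the hits.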
Advantages
line_profiler helps us profile our code line by line, giving the number of hits, the time taken for each hit and the percentage of time spent on each line. This helps us understand which part of our code is running slow. It is also useful in large projects, for measuring the time modules spend in executing a particular function. Using this data, we can commit changes and improve our code to build faster and better programs.

References
[1] http://pythonhosted.org/line_profiler/
[2] http://jacksonisaac.wordpress.com/2013/09/08/using-line_profiler-with-python/
[3] https://pypi.python.org/pypi/line_profiler
[4] https://bitbucket.org/robertkern/line_profiler
[5] https://bitbucket.org/kmike/line_profiler
By: Jackson Isaac The author is an active open source contributor to projects like gnome-music, Mozilla Firefox and Mozillians. Follow him on jacksonisaac.wordpress.com or email him at [email protected]
Understanding the Document Object Model (DOM) in Mozilla
This article is an introduction to the DOM programming interface and the DOM inspector, which is a tool that can be used to inspect and edit the live DOM of any Web document or XUL application.
The Document Object Model (DOM) is a programming interface for HTML and XML documents. It provides a structured representation of a document, and defines how that structure can be accessed from programs, so that they can change the document's structure, style and content. The DOM represents the document as a structured group of nodes and objects that have properties and methods. Essentially, it connects Web pages to scripts or programming languages. A Web page is a document, which can be displayed either in the browser window or as its HTML source. The DOM provides another way to represent, store and manipulate that same document. In simple terms, we can say that the DOM is a fully object-oriented representation of a Web page, which can be modified with any scripting language. The W3C DOM standard forms the basis of the DOM implementation in most modern browsers, and many browsers offer extensions beyond the W3C standard. All the properties, methods and events available for manipulating and creating Web pages are organised into objects. For example, there is the document object, which represents the document itself, and the table object, which implements the special HTMLTableElement DOM interface for accessing HTML tables, and so forth.
Why is the DOM important?
'Dynamic HTML' (DHTML) is a term used by some vendors to describe the combination of HTML, style sheets and scripts that allows documents to be animated. The W3C DOM working group aims to make sure interoperable and language-neutral solutions are agreed upon. As Mozilla claims the title of 'Web Application Platform', support for the DOM is one of its most requested features; in fact, it is a necessity if Mozilla wants to be a viable alternative to the other browsers. The user interface of Mozilla applications (including Firefox and Thunderbird) is built using XUL, and the DOM is used to manipulate that interface.
How do I access the DOM?
Figure 1: DOM inspector
Figure 2: Inspecting content documents

You don't have to do anything special to begin using the DOM. Different browsers have different implementations of it, which exhibit varying degrees of conformity to the actual DOM standard, but every browser uses some document object model to make Web pages accessible to scripts. When you create a script, whether it's inline in a script element or included in the Web page by means of a script-loading instruction, you can immediately begin using the API for the document or window objects: to manipulate the document itself, or to get at the children of that document, which are the various elements in the Web page. Your DOM programming may be something as simple as displaying an alert message by using the alert() function of the window object, or it may use more sophisticated DOM methods to actually create new content, as in the longer example below.
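The simple case could look like the following one-liner (a minimal sketch; the message text is illustrative):

<body onload="window.alert('Welcome to my home page!');">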
Aside from the script element in which the JavaScript is defined, this JavaScript sets a function to run when the document is loaded. This function creates a new H1 element, adds text to that element, and then adds the H1 to the tree for this document, as shown below:

<script>
// run this function when the document is loaded
window.onload = function() {
    // create a couple of elements in an otherwise empty HTML page
    var heading = document.createElement("h1");
    var heading_text = document.createTextNode("Big Head!");
    heading.appendChild(heading_text);
    document.body.appendChild(heading);
}
</script>
DOM interfaces
These interfaces give you an idea of the actual things you can use to manipulate the DOM hierarchy. For example, the object representing the HTML form element gets its name property from the HTMLFormElement interface, but its className property from the HTMLElement interface. In both cases, the property you want is simply on the form object.
Interfaces and objects
Many objects borrow from several different interfaces. The table object, for example, implements a specialised HTML table element interface, which includes such methods as createCaption and insertRow. Since an HTML element is also, as far as the DOM is concerned, a node in the tree of nodes that makes up the object model for a Web page or an XML page, the table element also implements the more basic node interface, from which the element derives. When you get a reference to a table object, as in the following example, you routinely use all three of these interfaces interchangeably on the object, perhaps unknowingly:

var table = document.getElementById("table");
var tableAttrs = table.attributes; // Node/Element interface
for (var i = 0; i < tableAttrs.length; i++) {
    // HTMLTableElement interface: border attribute
    if (tableAttrs[i].nodeName.toLowerCase() == "border")
        table.border = "1";
}
// HTMLTableElement interface: summary attribute
table.summary = "note: increased border";
Core interfaces in the DOM
These are some of the most important and commonly used interfaces in the DOM, and you will see them repeatedly in longer DOM examples. The interfaces of the document and window objects are generally used most often in DOM programming. In simple terms, the window object represents something like the browser, and the document object is the root of the document itself. The element interface inherits from the generic node interface and, together, these two interfaces provide many of the methods and properties you use on individual elements. These elements may also have specific interfaces for dealing with the kind of data they hold, as in the table object example, and are shown in the list that follows.
Figure 3: Inspecting Chrome documents
Figure 4: Inspecting arbitrary URLs
The following are a few common APIs in Web and XML page scripting that show the use of the DOM:
• document.getElementById(id)
• element.getElementsByTagName(name)
• document.createElement(name)
• parentNode.appendChild(node)
• element.innerHTML
• element.style.left
• element.setAttribute
• element.getAttribute
• element.addEventListener
• window.content
• window.onload
• window.dump
• window.scrollTo

Testing the DOM API

Samples are available for every interface that you can use in Web development. In some cases, the samples are complete HTML pages, with the DOM access in a <script> element, the interface (e.g., buttons) necessary to fire up the script in a form, and the HTML elements upon which the DOM operates listed as well. When this is the case, you can cut and paste the example into a new HTML document, save it, and run the example from the browser. There are some cases, however, when the examples are more concise. To run examples that only demonstrate the basic relationship of the interface to the HTML elements, you may want to set up a test page in which interfaces can be easily accessed from scripts.
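Such a test page might look like the following (a minimal sketch; the element ID and the function name are illustrative):

<html>
<head>
<script>
// look up a node by its ID and change its text content
function setText() {
    var el = document.getElementById("target");
    el.innerHTML = "Changed from a script";
}
</script>
</head>
<body>
<p id="target">Original text</p>
<button onclick="setText()">Test</button>
</body>
</html>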
Figure 5: Inspecting a Web page

An introduction to the DOM inspector
The DOM inspector is a Mozilla extension that you can access from the Tools -> Web Development menu in SeaMonkey, or by selecting the DOM inspector menu item from the Tools menu in Firefox and Thunderbird or by using Ctrl/Cmd+Shift+I in either application. The DOM inspector is a ‘standalone’ extension; it supports all toolkit applications, and it’s possible to embed it in your own XULRunner app. The DOM inspector can serve as a sanity check to verify the state of the DOM, or it can be used to manipulate the DOM manually, if desired. When you first start the DOM inspector, you are presented with a two-pane application window that looks a little like the main Mozilla browser. Like the browser, the DOM inspector includes an address bar and some of the same menus. In SeaMonkey, additional global menus are available.
Using the DOM inspector
Once you've opened the document for the page you are interested in, you'll see that the DOM nodes viewer is loaded in the document pane and the DOM node viewer in the object pane. In the DOM nodes viewer, there should be a structured, hierarchical view of the DOM. By clicking around in the document pane, you'll see that the viewers are linked: whenever you select a new node in the DOM nodes viewer, the DOM node viewer is automatically updated to reflect the information for that node. Linked viewers are the first major aspect to understand when learning how to use the DOM inspector.
Inspecting a document
When the DOM inspector opens, it may or may not load an associated document, depending on the host application. If it doesn’t automatically load a document or loads a document other than the one you’d like to inspect, you can select the desired document in a few different ways.
Figure 6: Finding app content
Figure 7: Search on Click
There are three ways of inspecting any document, which are described below.

Inspecting content documents: The Inspect Content Document menu popup can be accessed from the File menu, and it will list the currently loaded content documents. In the Firefox and SeaMonkey browsers, these will be the Web pages you have opened in tabs. For Thunderbird and SeaMonkey Mail and News, any messages you're viewing will be listed here.

Inspecting Chrome documents: The Inspect Chrome Document menu popup can be accessed from the File menu, and it will contain the list of currently loaded Chrome windows and sub-documents. A browser window and the DOM inspector are likely to already be open and displayed in this list. The DOM inspector keeps track of all the windows that are open, so to inspect the DOM of a particular window, simply access that window as you normally would and then choose its title from this dynamically updated menu list.

Inspecting arbitrary URLs: We can also inspect the DOM of arbitrary URLs by using the Inspect a URL menu item in the File menu, or by just entering a URL into the DOM inspector's address bar and clicking Inspect or pressing Enter. We should not use this approach to inspect Chrome documents; instead, ensure that the Chrome document loads normally, and use the Inspect Chrome Document menu popup to inspect it. When you inspect a Web page by this method, a browser pane at the bottom of the DOM inspector window will open up, displaying the Web page. This allows you to use the DOM inspector without having to use a separate browser window, or without embedding a browser in your application at all. If you find that the browser pane takes up too much space, you may close it, but you will not be able to visually observe the consequences of your actions.
You can use the DOM nodes viewer in the document pane of the DOM inspector to find and inspect the nodes you are interested in. One of the biggest and most immediate advantages that this brings to your Web and application development is that it makes it possible to find the mark-up and the nodes in which the interesting parts of a page or a piece of the user interface are defined. One common use of the DOM inspector is to find the name and location of a particular icon being used in the user interface, which is not an easy task otherwise.
If you're inspecting a Chrome document then, as you select nodes in the DOM nodes viewer, the rendered versions of those nodes are highlighted in the user interface itself. (Note that there are bugs that currently prevent this highlighting 'flasher' in the DOM inspector APIs from working on certain platforms.) If you inspect the main browser window, for example, and select nodes in the DOM nodes viewer, you will see the various parts of the browser interface being highlighted with a blinking red border. You can traverse the structure and go from the topmost parts of the DOM tree to lower-level nodes, such as the 'search-go-button' icon that lets users perform a query using the selected search engine.

DOM inspector viewers

The list of viewers available from the viewer menu gives you some idea of how extensive the DOM inspector's capabilities are. The following descriptions provide an overview of each viewer's capabilities:
1. The DOM nodes viewer shows the attributes of nodes that can take them, or the text content of text nodes, comments and processing instructions. The attributes and text contents may also be edited.
2. The Box Model viewer gives various metrics about XUL and HTML elements, including placement and size.
3. The XBL Bindings viewer lists the XBL bindings attached to elements. If a binding extends another binding, the binding menu list will list them in descending order to the 'root' binding.
4. The CSS Rules viewer shows the CSS rules that apply to the node. Alternatively, when used in conjunction with the Style Sheets viewer, the CSS Rules viewer lists all the recognised rules from that style sheet. Properties may also be edited. Rules applying to pseudo-elements do not appear.
5. The JavaScript Object viewer gives a hierarchical tree of the object pane's subject. It also allows JavaScript to be evaluated by selecting the appropriate menu item in the context menu.

Three basic actions of the DOM nodes viewer are described below.
Selecting elements by clicking: A powerful interactive feature of the DOM inspector is that, when you have it open and have enabled this functionality by choosing Edit > Select Element by Click (or by clicking the little magnifying glass icon in the upper left portion of the DOM inspector application), you can click anywhere in a loaded Web page or inspected Chrome document. The element you click will be shown in the document pane in the DOM nodes viewer, and its information will be displayed in the object pane.

Searching for nodes in the DOM: Another way to inspect the DOM is to search for particular elements you're interested in by ID, class or attribute. When you select Edit > Find Nodes... or press Ctrl + F, the DOM inspector displays a Find dialogue that lets you find elements in various ways, and that gives you incremental searching by way of the shortcut key.

Updating the DOM dynamically: Another feature worth mentioning is the ability the DOM inspector gives you to dynamically update the information reflected in the DOM about Web pages, the user interface and other elements. When the DOM inspector displays information about a particular node or sub-tree, it presents individual nodes and their values in an active list. You can perform actions on the individual items in this list from the Context menu and the Edit menu, both of which contain menu items that allow you to edit the values of those attributes. This interactivity allows you to shrink and grow the element size, change icons, and do other layout-tweaking updates, all without actually changing the DOM as it is defined in the file on disk.

References
[1] https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model
[2] https://developer.mozilla.org/en/docs/Web/API/Document
By: Anup Allamsetty The author is an active contributor to Mozilla and GNOME. He blogs at https://anup07.wordpress.com/ and you can email him at [email protected].
Experimenting with More Functions in Haskell
We continue our exploration of the open source, advanced and purely functional programming language, Haskell. In the third article in the series, we will focus on more Haskell functions, conditional constructs and their usage.
A function in Haskell has the function name followed by its arguments. An infix operator function has operands on either side of it. A simple infix add operation is shown below:

*Main> 3 + 5
8
If you wish to convert an infix function to a prefix function, it must be enclosed within parentheses:

*Main> (+) 3 5
8

Similarly, if you wish to convert a prefix function into an infix function, you must enclose the function name within backquotes (`). The elem function takes an element and a list, and returns True if the element is a member of the list:

*Main> 3 `elem` [1, 2, 3]
True

Functions can also be partially applied in Haskell. A function that subtracts a given number from ten can be defined as:

diffTen :: Integer -> Integer
diffTen = (10 -)

Loading the file in GHCi and passing three as an argument yields:

*Main> diffTen 3
7
Haskell exhibits polymorphism. A type variable in a function is said to be polymorphic if it can take any type. Consider the last function, which returns the last element of a list. Its type signature is:
*Main> :t last
last :: [a] -> a
The 'a' in the above snippet refers to a type variable, and can represent any type. Thus, the last function can operate on a list of integers or characters (a string):
*Main> last [1, 2, 3, 4, 5]
5
*Main> last "Hello, World"
'd'
You can use a where clause for local definitions inside a function, as shown in the following example, which computes the area of a circle:

areaOfCircle :: Float -> Float
areaOfCircle radius = pi * radius * radius
    where pi = 3.1415
Loading it in GHCi and computing the area for radius 1 gives:

*Main> areaOfCircle 1
3.1415
You can also use the let expression with the in statement to compute the area of a circle:

areaOfCircle :: Float -> Float
areaOfCircle radius = let pi = 3.1415
                      in pi * radius * radius
Executing the above with an input radius of 1 gives:

*Main> areaOfCircle 1
3.1415
Indentation is very important in Haskell, as it helps code readability; the compiler will emit errors otherwise. You must use white space instead of tabs when aligning code. If the let and in constructs in a function span multiple lines, they must be aligned vertically, as shown below:

compute :: Integer -> Integer -> Integer
compute x y = let a = x + 1
                  b = y + 2
              in a * b
Loading the example with GHCi, you get the following output:

*Main> compute 1 2
8

Similarly, the if and else constructs must be neatly aligned. The else clause is mandatory in Haskell. For example:

sign :: Integer -> String
sign x = if x > 0
         then "Positive"
         else if x < 0
              then "Negative"
              else "Zero"
Running the example with GHCi, you get:

*Main> sign 0
"Zero"
*Main> sign 1
"Positive"
*Main> sign (-1)
"Negative"
The case construct can be used for pattern matching against possible expression values. It needs to be combined with the of keyword. The different values need to be aligned, and the resulting action must be specified after the '->' symbol for each case. For example:

sign :: Integer -> String
sign x = case compare x 0 of
           LT -> "Negative"
           GT -> "Positive"
           EQ -> "Zero"
The compare function compares two arguments and returns LT if the first argument is less than the second, GT if the first argument is greater than the second, and EQ if both are equal. Executing the above example, you get:

*Main> sign 2
"Positive"
*Main> sign 0
"Zero"
*Main> sign (-2)
"Negative"
The sign function can also be expressed using guards ('|') for readability. The action for a matching guard must be specified after the '=' sign. You can use a default guard with the otherwise keyword:

sign :: Integer -> String
sign x
    | x > 0 = "Positive"
    | x < 0 = "Negative"
    | otherwise = "Zero"
There are three very important higher-order functions in Haskell: map, filter and fold. The map function takes a function and a list, and applies the function to each and every element of the list. Its type signature is:

*Main> :t map
map :: (a -> b) -> [a] -> [b]

The first function argument accepts an element of type 'a' and returns an element of type 'b'. Adding two to every element in a list can be implemented using map:

*Main> map (+ 2) [1, 2, 3, 4, 5]
[3,4,5,6,7]

The filter function accepts a predicate function for evaluation, and a list, and returns the list of those elements that satisfy the predicate. Its type signature is:

filter :: (a -> Bool) -> [a] -> [a]

The predicate function for filter takes as its first argument an element of type 'a' and returns True or False. For example:

*Main> filter even [1, 2, 3, 4, 5]
[2,4]

The fold function performs a cumulative operation on a list. It takes as arguments a function, an accumulator (starting with an initial value) and a list. It cumulatively aggregates the computation of the function on the accumulator value as well as each member of the list. There are two types of folds: the left fold (foldl) and the right fold (foldr). The left fold can be represented as 'f (f (f a b1) b2) b3', where 'f' is the function, 'a' is the accumulator value, and 'b1', 'b2' and 'b3' are the elements of the list. The parentheses accumulate on the left for a left fold.
With this recursion, the expression is constructed first and evaluated only when it is finally formed. A left fold can thus cause a stack overflow, or never complete when working with infinite lists. The right fold, foldr, can be represented as 'f b1 (f b2 (f b3 a))', where 'f' is the function, 'a' is the accumulator value, and 'b1', 'b2' and 'b3' are the elements of the list.
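As a quick illustration of both folds at the GHCi prompt (a simple session of my own; with an associative function like (+) the two folds give the same result, while (-) shows that the grouping differs):

*Main> foldl (+) 0 [1, 2, 3]
6
*Main> foldr (+) 0 [1, 2, 3]
6
*Main> foldl (-) 9 [1, 2, 3]
3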
There are some statements, like condition checking, where 'f b1' can be computed even without requiring the subsequent arguments; hence, the foldr function can work with infinite lists. There is also a strict version of foldl (foldl') that forces the computation before proceeding with the recursion.

If you want a reference to a matched pattern, you can use the as-pattern syntax. The tail function accepts an input list and returns everything except the head of the list. You can write a tailString function that accepts a string as input and returns the string with the first character removed:

tailString :: String -> String
tailString "" = ""
tailString input@(x:xs) = "Tail of " ++ input ++ " is " ++ xs

The entire matched pattern is represented by input in the above code snippet.

Functions can be chained to create other functions. This is called 'composing' functions. The mathematical definition is as under:

(f o g)(x) = f(g(x))

The dot (.) operator implements composition in Haskell; it has a very high precedence and is right-associative. The function application operator ($) has the lowest precedence and is also right-associative, and can be used to force an evaluation and avoid parentheses. For example:

*Main> (reverse ((++) "yrruC " (unwords ["skoorB", "lleksaH"])))
"Haskell Brooks Curry"

You can rewrite the above using the function application operator:

Prelude> reverse $ (++) "yrruC " $ unwords ["skoorB", "lleksaH"]
"Haskell Brooks Curry"

You can also use the dot notation to make it even more readable; but the final argument needs to be evaluated first, so you need the function application operator for it:

*Main> reverse . (++) "yrruC " . unwords $ ["skoorB", "lleksaH"]
"Haskell Brooks Curry"

By: Shakthi Kannan
The author is a free software enthusiast and blogs at shakthimaan.com.
Introducing AngularJS
AngularJS is an open source Web application framework maintained by Google and the community, which helps to build Single Page Applications (SPA). Let’s get to know it better.
AngularJS can be introduced as 'a front-end framework capable of incorporating the dynamicity of JavaScript with HTML'. The self-proclaimed 'super-heroic' JavaScript MVW (Model View Whatever) framework is maintained by Google and many other developers on GitHub. This open source framework works its magic on Web applications of the Single Page Application (SPA) category. The logic behind an SPA is that an initial page is loaded at the start of an application from the server. When an action is performed, the application fetches the required resources from the server and adds them to the initial page. The key point here is that an SPA makes just one server round trip, providing you with the initial page. This makes your applications very responsive.
Why AngularJS?
AngularJS brings out the beauty in Web development. It is extremely simple to understand and code. If you're familiar with HTML and JavaScript, you can write the 'Hello World' program in minutes. With the help of Angular, the combined power of HTML and JavaScript can be put to maximum use. One of the prominent features of Angular is that it is extremely easy to test, and that makes it very suitable for creating large-scale applications. Also, the Angular community, comprising Google's developers primarily, is very active in the development process. Google Trends gives assuring proof of Angular's future in the field of Web development (Figure 1).
Core features
Before getting into the basics of AngularJS, you need to understand two key terms: templates and models. The HTML page that is rendered out to you is pretty much the template. So basically, your template has HTML, Angular entities (directives, filters, model variables, etc) and CSS (if necessary). The example code given below for data binding is a template. In an SPA, the data and the presentation of data are separated by a model layer that handles data and a view layer that reads from models. This helps an SPA redraw any part of the UI without requiring a server round trip to retrieve HTML. When the data is updated, its view is notified and the altered data is produced in the view.
Figure 1: Report from Google Trends
Data binding
AngularJS provides you with two-way binding between model variables and HTML elements. One-way binding would mean a one-way relation between the two: when the model variables are updated, so are the values in the HTML elements, but not the other way around. Let's understand two-way binding by looking at an example:

<html ng-app>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.0.7/angular.min.js"></script>
</head>
<body ng-init="yourtext = 'hello'">
  Enter your text: <input type="text" ng-model="yourtext">
  You entered: {{yourtext}}
</body>
</html>
The model variable yourtext is bound to the HTML input element. Whenever you change the value in the input box, yourtext gets updated. Also, the value of the HTML input box is initialised to that of the yourtext variable.
Directives
In the above example, many words like ng-app, ng-init and ng-model may have struck you as odd. Well, these are attributes that represent directives: ngApp, ngInit and ngModel, respectively. As described in the official AngularJS developer guide, "Directives are markers on a DOM element (such as an attribute, element name, comment or CSS class) that tell AngularJS's HTML compiler ($compile) to attach a specified behaviour to that DOM element." Let's look into the purpose of some common directives.

ngApp: This directive bootstraps your Angular application and considers the HTML element on which the attribute is specified to be the root element of Angular. In the above example, the entire HTML page becomes an Angular application, since the ng-app attribute is given to the <html> tag. If it were given to the <body> tag, the body alone would become the root element. Or, you could create your own Angular module and let that be the root of your application. An AngularJS module might consist of controllers, services, directives, etc. To create a new module, use the following command:
var moduleName = angular.module('moduleName', []);
// the array is a list of modules our module depends on
Also, remember to initialise your ng-app attribute to moduleName. For instance:

<html ng-app="moduleName">
ngModel: The purpose of this directive is to bind the view to the model. For instance:

<input type="text" ng-model="sometext">
Your text: {{ sometext }}
Here, the model sometext is bound (two-way) to the view. The double curly braces notify Angular to put the value of sometext in their place.

ngClick: This directive functions in a way similar to the onclick event of JavaScript:

<button ng-click="mul = mul * 2">Multiply</button>
After multiplying: {{mul}}
Whenever the button is clicked, ‘mul’ gets multiplied by two.
Filters
A filter helps you modify the output sent to your view. You can subject your expression to any kind of constraint to give the desired output. The format is:

{{ expression | filter }}
You can filter the output of filter1 again with filter2, using the following format:

{{ expression | filter1 | filter2 }}
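For instance, with the built-in uppercase and number filters (a small illustration of my own):

{{ 'hello, world' | uppercase }}   <!-- renders HELLO, WORLD -->
{{ 3.14159 | number:2 }}           <!-- renders 3.14 -->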
The following code filters the members of the people array, using the name as the criterion:

Name: <input type="text" ng-model="name">
<ul>
  <li ng-repeat="person in people | filter: name">
    {{person.name}} - {{person.branch}}
  </li>
</ul>
Advanced features
Controllers: To bring some more action to our app, we need controllers. These are JavaScript functions that add behaviour to our app. Let's make use of the ngController directive to bind a controller to the DOM.
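A minimal sketch of such a controller follows; the module and controller names are illustrative, while disp() is the function referred to in the explanation below:

<div ng-controller="MainCtrl">
  <button ng-click="disp()">Display</button>
  {{ name }}
</div>

<script>
var miniApp = angular.module('miniApp', []);
miniApp.controller('MainCtrl', function($scope) {
    // assign a behaviour to the scope
    $scope.disp = function() {
        // access the model variable 'name' through the scope
        $scope.name = 'Hello from the controller!';
    };
});
</script>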
One term to be explained here is $scope. To quote from the developer guide: "Scope is an object that refers to the application model." With the help of the scope, model variables can be initialised and accessed. In the above example, when the button is clicked, disp() comes into play, i.e., the scope is assigned a behaviour. Inside disp(), the model variable name is accessed using the scope.

Views and routes: In any usual application, we navigate to different pages. In an SPA, instead of pages, we have views. So, you can use views to load different parts of your application. Switching between views is done through routing. For routing, we make use of the ngRoute and ngView directives:

var miniApp = angular.module('miniApp', ['ngRoute']);

miniApp.config(function($routeProvider) {
    $routeProvider.when('/home', { templateUrl: 'partials/home.html' });
    $routeProvider.when('/animal', { templateUrl: 'partials/animals.html' });
    $routeProvider.otherwise({ redirectTo: '/home' });
});

ngRoute enables routing in applications, and $routeProvider is used to configure the routes. home.html and animals.html are examples of 'partials'; these are files that will be loaded into your view, depending on the URL passed. For example, you could have an app that has icons and, whenever an icon is clicked, a link is passed. Depending on the link, the corresponding partial is loaded into the view. This is how you pass such links:

<a href="#/home">Home</a>
<a href="#/animal">Animals</a>

Don't forget to add the ng-view attribute to the HTML component of your choice. That component will act as a placeholder for your views.
Services: According to the official documentation of AngularJS, "Angular services are substitutable objects that are wired together using dependency injection (DI). You can use services to organise and share code across your app." With DI, every component will receive a reference to the service. Angular provides useful services like $http, $window and $location. In order to use these services in controllers, you can add them as dependencies, as in:

var testapp = angular.module('testapp', []);
testapp.controller('testcont', function($window) {
    // body of the controller
});
To define a custom service, write the following:

testapp.factory('serviceName', function() {
    var obj;
    // the returned object will be injected into the component
    // that has called the service
    return obj;
});
Testing
Testing is done to correct your code on the go, and to avoid ending up with a pile of errors on completing your app's development. Testing can get complicated when your app grows in size and APIs start to get tangled up, but Angular has its own well-defined testing schemes. Usually, two kinds of testing are employed: unit testing and end-to-end (E2E) testing. Unit testing is used to test individual API components, while in E2E testing, the working of a set of components is tested.

The usual components of unit testing are describe(), beforeEach() and it(). You have to load the Angular module before testing, and beforeEach() does this. This function also makes use of the injector method to inject dependencies. The test to be conducted is given in it(). The test suite is describe(), and both beforeEach() and it() come inside it. E2E testing makes use of all the above functions. One other function used is expect(). This creates 'expectations', which verify whether a particular piece of the application's state (the value of a variable or a URL) is the same as the expected value. The recommended frameworks for unit testing are Jasmine and Karma; for E2E testing, Protractor is the one to go with.
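A unit test skeleton in Jasmine might look like the following (a sketch, using the module() and inject() helpers from ngMock; the module and controller names follow the earlier examples):

describe('MainCtrl', function() {
    var scope;

    // load the module and build the controller with a fresh scope
    beforeEach(module('miniApp'));
    beforeEach(inject(function($rootScope, $controller) {
        scope = $rootScope.$new();
        $controller('MainCtrl', { $scope: scope });
    }));

    it('sets name when disp() is called', function() {
        scope.disp();
        expect(scope.name).toBeDefined();
    });
});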
Who uses AngularJS?
Some of the corporate giants that use AngularJS are:
• Google
• Sony (YouTube on PS3)
• Virgin America
• Nike
• msnbc.com
You can find a lot of interesting and innovative apps on the 'Built with AngularJS' page.
Competing technologies
Features          Ember.js    AngularJS    Backbone.js
Routing           Yes         Yes          Yes
Views             Yes         Yes          Yes
Two-way binding   Yes         Yes          No
The table above covers only the core features of the three frameworks. Angular is the oldest of the lot and has the biggest community.

References
[1] http://singlepageappbook.com/goal.html
[2] https://github.com/angular/angular.js
[3] https://docs.angularjs.org/guide/
[4] http://karma-runner.github.io/0.12/index.html
[5] http://viralpatel.net/blogs/angularjs-introduction-hello-world-tutorial/
[6] https://builtwith.angularjs.org/
By: Tina Johnson The author is a FOSS enthusiast who has contributed to Mediawiki and Mozilla's Bugzilla. She is also working on a project to build a browser (using AngularJS) for autistic children.
Use Bugzilla to Manage Defects in Software
In the quest for excellence in software products, developers have to go through the process of defect management. The tool of choice for defect containment is Mozilla's Bugzilla. Learn how to install, configure and use it to file a bug report and act on it.
In any project, defect management and various types of testing play key roles in ensuring quality. Defects need to be logged, tracked and closed to ensure the project meets quality expectations. Generating defect trends also helps project managers take informed decisions and make the appropriate course corrections while the project is being executed. Bugzilla is one of the most popular open source defect management tools, and helps project managers track the complete lifecycle of a defect.
Installation and configuration of Bugzilla

Step 1: Getting the source code
Bugzilla is part of the Mozilla foundation, and its latest releases are available from the official website. This article covers the installation of Bugzilla version 4.4.2; the steps mentioned here should apply to later releases as well but, for version-specific details, check the appropriate release notes. Here is the URL for downloading Bugzilla version 4.4.2 on a Linux system: http://www.bugzilla.org/releases/4.4.2/
The prerequisites for Bugzilla include a CGI-enabled Web server (such as the Apache HTTP server), a database engine (MySQL, PostgreSQL, etc) and the latest Perl modules. Ensure all of them are on your Linux system before proceeding with the installation. This specific installation covers MySQL as the backend database.

Step 2: User and database creation
Before proceeding with the installation, the user and database need to be created by following the steps mentioned below. The names used here for the database and the user are specific to this installation, and can change between installations. Start the MySQL service by issuing the following command:

$ /etc/rc.d/init.d/mysql start

Trigger MySQL by issuing the following command (you will be asked for the root password, so ensure you keep it handy):

$ mysql -u root -p

Use the following statements at the MySQL prompt to create a user and a database for Bugzilla:

mysql> CREATE USER 'bugzilla'@'localhost' IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'bugzilla'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> CREATE DATABASE bugzilla_db CHARACTER SET utf8;
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,INDEX,ALTER,CREATE,DROP,REFERENCES ON bugzilla_db.* TO 'bugzilla'@'localhost' IDENTIFIED BY 'cspasswd';
mysql> FLUSH PRIVILEGES;
mysql> QUIT
Figure 1: Configuring Bugzilla by changing the localconfig file
Figure 2: Bugzilla main page
Figure 3: Defect lifecycle
Use the following commands to connect as the new user and select the database:

$ mysql -u bugzilla -p bugzilla_db
mysql> use bugzilla_db
Step 3: Bugzilla installation and configuration
After downloading the Bugzilla archive from the URL mentioned above, untar the package into the /var/www directory. All the configuration-related information can be modified in the localconfig file. To start with, set the variable $webservergroup to 'www', and set the other items as mentioned in Figure 1.
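The database-related entries in localconfig would end up looking something like this (an illustrative excerpt; the variable names are standard localconfig variables, and the values match this installation):

# localconfig (excerpt)
$webservergroup = 'www';
$db_driver = 'mysql';
$db_host   = 'localhost';
$db_name   = 'bugzilla_db';
$db_user   = 'bugzilla';
$db_pass   = 'cspasswd';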
Following the configuration, the installation can be completed by executing the following Perl script. Ensure this script is executed with root privileges:

$ ./checksetup.pl

Step 4: Integrating Bugzilla with Apache
Insert the following lines into the Apache server configuration file (httpd.conf) to integrate Bugzilla with it. Here, the bugzilla directory has been placed inside /var/www:

<Directory /var/www/bugzilla>
    AddHandler cgi-script .cgi
    Options +ExecCGI
    DirectoryIndex index.cgi index.html
    AllowOverride Limit FileInfo Indexes Options
</Directory>
Our setup is now ready. Let's hit the address in the browser to see the home page of our freshly deployed Web application (http://localhost/bugzilla).
Figure 4: New account creation
Defect lifecycle management
The main purpose of Bugzilla is to manage a defect's lifecycle. Defects are logged in various phases of the project (e.g., functional testing); they are created by test engineers and assigned to development engineers for resolution. Along with that, managers and team members need to be aware of any change in the state of a defect, to ensure there is good traceability of defects. When a defect is created, it is given the 'new' state, after which it is assigned to a development engineer for resolution. Subsequently, it will get 'resolved' and eventually be moved to the 'closed' state.
Step 1: User account creation
To start using Bugzilla, various user accounts have to be created. In this example, Bugzilla is deployed on a server named 'hydrogen'. On the home page, click the 'New Account' link available in the header/footer of the pages (refer to Figure 4). You will be asked for your email address; enter it and click the 'Send' button. After your registration is accepted, you should receive an email at the address you provided, confirming your registration.
Figure 5: New defect creation
Figure 6: Defect resolution
Now all you need to do is click the 'Log in' link in the header/footer at the bottom of the page in your browser, enter your email address and the password you just chose into the login form, and click on the 'Log in' button. You will be redirected to the Bugzilla home page for defect interfacing.
Figure 7: Simple search
Step 2: Reporting the new bug
1. Click the 'New' link available in the header/footer of the pages, or the 'File a bug' option displayed on the home page of the Bugzilla installation, as shown in Figure 5.
2. Select the product in which you found a bug. Please note that the administrator will be able to create an appropriate product and corresponding versions from his account, which is not demonstrated here.
3. You now see a form on which you can specify the component, the version of the program you were using, the operating system and platform your program is running on, and the severity of the bug, as shown in Figure 5.
4. If there is any attachment, like a screenshot of the bug, attach it using the 'Add an attachment' option shown at the bottom of the page; else, click on 'Submit Bug'.
Step 3: Defect resolution and closure
Once the bug is filed, the assignees (typically, developers) get an email notification. When the developers fix the bug successfully, they add details like a bug-fixing summary, mark the status as 'resolved' using the status button, and route the defect back to the tester or to the development team leader for further review. This can be easily done by changing the 'assignee' field of the defect and filling it with the appropriate email ID. When the developers complete fixing the defect, it can be marked as shown in Figure 6. When the test engineers receive the resolved defect report, they can verify it and mark the status as 'closed'. At every step, notes from each individual are captured and logged along with a time-stamp. This helps in backtracking the defect in case any clarifications are required.
Figure 8: Simple dashboard of defects
Step 4: Reports and dashboards
Typically, in large-scale projects, there could be thousands of defects logged and fixed by hundreds of development and test engineers. To monitor the project in its various phases, the generation of reports and dashboards becomes very important. Bugzilla offers simple but very powerful search and reporting features, with which all the necessary information can be obtained immediately. By exploring the 'Search' and 'Reports' options, one can easily figure out ways to generate reports. A couple of simple examples are provided in Figure 7 (search) and Figure 8 (reports). Outputs can be exported to formats like CSV for further analysis.

Bugzilla is a simple but powerful open source tool that helps in complete defect management in projects. Along with the information provided above, Bugzilla also exposes its source code, which can be explored for further scripting and programming. This helps make Bugzilla a super-customised defect-tracking tool for effectively managing defects.

By: Satyanarayana Sampangi
Satyanarayana Sampangi is a Member - Embedded Software at Emertxe Information Technologies (http://www.emertxe.com). His area of interest lies in embedded C programming combined with data structures and micro-controllers. He likes to experiment with C programming and open source tools in his spare time to explore new horizons. He can be reached at [email protected]
An Introduction to Device Drivers in the Linux Kernel

In the article 'An Introduction to the Linux Kernel' in the August 2014 issue of OSFY, we wrote and compiled a kernel module. In the second article in this series, we move on to device drivers.
Have you ever wondered how a computer plays audio or shows video? The answer is: by using device drivers. A few years ago, we would always install audio or video drivers after installing MS Windows XP; only then were we able to listen to audio. Let us explore device drivers in this column.

A device driver (often referred to simply as a 'driver') is a piece of software that controls a particular type of device connected to the computer system. It provides a software interface to the hardware device, and enables access to the operating system and other applications. There are various types of drivers present in GNU/Linux, such as character, block, network and USB drivers. In this column, we will explore only character drivers.

Character drivers are the most common drivers. They provide unbuffered, direct access to hardware devices. One can think of a character device as a long sequence of bytes, similar to a regular file, but one that can be accessed only in sequential order. Character drivers support at least the open(), close(), read() and write() operations. The text console, i.e., /dev/console, the serial consoles /dev/stty*, and audio/video drivers fall under this category.

To make a device usable, there must be a driver present for it. So let us understand how an application accesses data from a device with the help of a driver. We will discuss the following four major entities.

User-space application: This can be any simple utility like echo, or any complex application.

Device file: This is a special file that provides an interface to the driver. It is present in the file system as an ordinary file. An application can perform all the supported operations on it, just like for an ordinary file: it can move, copy, delete, rename, read and write a device file.

Device driver: This is the software interface for the device, and resides in the kernel space.
Device: This can be the actual device present at the hardware level, or a pseudo device. Let us take an example where a user-space application sends data to a character device. Instead of an actual device, we are going to use a pseudo device. As the name suggests, this device is not a physical device. In GNU/Linux, /dev/null is the most commonly used pseudo device. It accepts any kind of data (i.e., input) and simply discards it, and it doesn't produce any output. Let us send some data to the /dev/null pseudo device:

[mickey]$ echo -n 'a' > /dev/null
In the above example, echo is a user-space application and null is a special file present in the /dev directory. There is a null driver present in the kernel to control the pseudo device. To send or receive data to or from the device, an application uses the corresponding device file, which is connected to the driver through the Virtual File System (VFS) layer. Whenever an application wants to perform any operation on the actual device, it performs the operation on the device file, and the VFS layer redirects the operation to the appropriate function implemented inside the driver. This means that whenever an application performs the open() operation on a device file, in reality the open() function of the driver is invoked, and the same concept applies to the other functions. The implementation of these operations is device-specific.
Major and minor numbers
We have seen that the echo command directly sends data to the device file. Hence, it is clear that to send or receive data to and from the device, the application uses special device files. But how does communication between the device file and the driver take place? It happens via a pair of numbers referred to as ‘major’ and ‘minor’ numbers. The command below lists the major and minor numbers associated with a character device file:
[bash]$ ls -l /dev/null
crw-rw-rw- 1 root root 1, 3 Jul 11 20:47 /dev/null
In the above output, there are two numbers separated by a comma (1 and 3). Here, '1' is the major number and '3' is the minor number. The major number identifies the driver associated with the device, i.e., which driver is to be used. The minor number is used by the kernel to determine exactly which device is being referred to. For instance, a hard disk may have three partitions; each partition will have a separate minor number but only one major number, because the same storage driver is used for all the partitions. Older kernels used to have a separate major number for each driver, but modern Linux kernels allow multiple drivers to share the same major number. For instance, /dev/full, /dev/null, /dev/random and /dev/zero use the same major number but different minor numbers. The output below illustrates this:
The kernel uses the dev_t type to store major and minor numbers. dev_t type is defined in the header file. Given below is the representation of dev_t type from the header file: #ifndef _LINUX_TYPES_H #define _LINUX_TYPES_H
dev_t;
dev_t is an unsigned 32-bit integer, where 12 bits are used to store the major number and the remaining 20 bits are used to store the minor number. But don't try to extract the major and minor numbers directly. Instead, the kernel provides MAJOR and MINOR macros that can be used to extract the major and minor numbers. The definition of the MAJOR and MINOR macros from the header file is given below:
If you have major and minor numbers and you want to convert them to the dev_t type, the MKDEV macro will do the needful. The definition of the MKDEV macro from the header file is given below: #define MKDEV(ma,mi) (((ma) << MINORBITS) | (mi))
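As a quick sanity check of these macros, a module could print the two halves of a freshly built dev_t (a small sketch of my own):

/* inside a module's init function, for illustration */
dev_t dev = MKDEV(248, 0);
printk(KERN_INFO "major = %u, minor = %u\n", MAJOR(dev), MINOR(dev)); /* prints 248 and 0 */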
We now know what major and minor numbers are and the role they play. Let us see how we can allocate major numbers. Here is the prototype of register_chrdev():

int register_chrdev(unsigned int major, const char *name, struct file_operations *fops);

This function registers a major number for character devices. The arguments of this function are self-explanatory: the major argument is the major number of interest, name is the name of the driver and appears in /proc/devices and, finally, fops is the pointer to the file_operations structure. Certain major numbers are reserved for special drivers; hence, one should exclude those and use dynamically allocated major numbers. To allocate a major number dynamically, provide the value zero for the first argument, i.e., major == 0. The function will then dynamically allocate and return a major number. To deallocate an allocated major number, use the unregister_chrdev() function. The prototype is given below, and the parameters of the function are self-explanatory:

void unregister_chrdev(unsigned int major, const char *name);

The values of the major and name parameters must be the same as those passed to the register_chrdev() function; otherwise, the call will fail.
File operations
So we know how to allocate and deallocate the major number, but we haven't yet connected any of our driver's operations to that major number. To set up the connection, we are going to use the file_operations structure. This structure is defined in the <linux/fs.h> header file. Each field in the structure must point to the function in the driver that implements a specific operation, or be left NULL for unsupported operations. The example given below illustrates this. Without discussing lengthy theory, let us write our first 'null' driver, which mimics the functionality of the /dev/null pseudo device. Open a file using your favourite text editor and save the code as null_driver.c.
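What follows is a minimal sketch of such a driver, reconstructed around the function names and messages that appear in the dmesg output below (error handling is kept brief):

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int major;
static char *name = "null_driver";

static int null_open(struct inode *i, struct file *f)
{
    printk(KERN_INFO "Calling: null_open\n");
    return 0;
}

static int null_release(struct inode *i, struct file *f)
{
    printk(KERN_INFO "Calling: null_release\n");
    return 0;
}

static ssize_t null_read(struct file *f, char __user *buf, size_t len, loff_t *off)
{
    printk(KERN_INFO "Calling: null_read\n");
    return 0;   /* nothing to read: always EOF, like /dev/null */
}

static ssize_t null_write(struct file *f, const char __user *buf, size_t len, loff_t *off)
{
    printk(KERN_INFO "Calling: null_write\n");
    return len; /* discard the data, but report success */
}

static struct file_operations fops = {
    .owner   = THIS_MODULE,
    .open    = null_open,
    .release = null_release,
    .read    = null_read,
    .write   = null_write,
};

static int __init null_init(void)
{
    /* major == 0 requests dynamic allocation of the major number */
    major = register_chrdev(0, name, &fops);
    if (major < 0)
        return major;
    printk(KERN_INFO "Device registered successfully.\n");
    return 0;
}

static void __exit null_exit(void)
{
    unregister_chrdev(major, name);
}

module_init(null_init);
module_exit(null_exit);
MODULE_LICENSE("GPL");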
Our driver code is ready. Let us compile and insert the module; in last month's article, we learnt how to write a Makefile for kernel modules:

[mickey]$ make
[root]# insmod null_driver.ko
We are now going to create a device file for our driver. But for this we need the major number, and we know that our driver's register_chrdev() call allocates the major number dynamically. Let us find this dynamically allocated major number from /proc/devices, which lists the currently registered drivers along with their major numbers:

[root]# grep "null_driver" /proc/devices
248 null_driver
From the above output, we are going to use 248 as the major number for our driver. We are only interested in the major number, and the minor number can be anything within a valid range; I'll use 0 as the minor number. To create the character device file, use the mknod utility. Please note that to create the device file you must have superuser privileges:

[root]# mknod /dev/null_driver c 248 0
Now it's time for the action. Let us send some data to the pseudo device using the echo command, and check the output of the dmesg command:

[root]# echo "Hello" > /dev/null_driver
[root]# dmesg
Device registered successfully.
Calling: null_open
Calling: null_write
Calling: null_release
Yes! We got the expected output. Whenever the open, write and close operations are performed on the device file, the appropriate functions from our driver's code get called. Let us perform the read operation and check the output of the dmesg command:

[root]# cat /dev/null_driver
[root]# dmesg
Calling: null_open
Calling: null_read
Calling: null_release

To make things simple, I have used printk() statements in every function. If we remove these statements, then /dev/null_driver will behave exactly the same as the /dev/null pseudo device. Our code is working as expected. Let us understand the details of our character driver. First, take a look at the driver's functions. Given below are the prototypes of a few functions from the file_operations structure:

int (*open)(struct inode *i, struct file *f);
int (*release)(struct inode *i, struct file *f);
ssize_t (*read)(struct file *f, char __user *buf, size_t len, loff_t *off);
ssize_t (*write)(struct file *f, const char __user *buf, size_t len, loff_t *off);

The prototypes of the open() and release() functions are exactly the same. These functions accept two parameters; the first is the pointer to the inode structure. All file-related information, such as the size, owner, access permissions, file creation timestamps, number of hard links, etc, is represented by the inode structure, and each open file is represented internally by the file structure. The open() function is responsible for opening the device and allocating the required resources. The release() function does exactly the reverse job: it closes the device and deallocates the resources.

As the name suggests, the read() function reads data from the device and sends it to the application. The first parameter of this function is the pointer to the file structure. The second parameter is the user-space buffer. The third parameter is the size, which implies the number of bytes to be transferred to the user-space buffer. And, finally, the fourth parameter is the file offset, which updates the current file position. Whenever the read() operation is performed on a device file, the driver should copy len bytes of data from the device to the user-space buffer buf, and update the file offset off accordingly. This function returns the number of bytes read successfully. Our null driver doesn't read anything; that is why its return value is always zero, i.e., EOF.

The driver's write() function accepts data from the user-space application. The first parameter is the pointer to the file structure. The second parameter is the user-space buffer, which holds the data received from the application. The third parameter, len, is the size of the data. The fourth parameter is the file offset. Whenever the write() operation is performed on a device file, the driver should transfer len bytes of data to the device and update the file offset off accordingly. Our null driver accepts input of any length; hence, the return value is always len, i.e., all bytes are written successfully.

In the next step, we initialised the file_operations structure with the appropriate driver functions. In the initialisation function we did the registration-related job, and we deregister the character device in the cleanup function.

Implementation of the full pseudo driver

Let us implement one more pseudo device, namely, full. Any write operation on this device fails and gives the 'ENOSPC' error. This can be used to test how a program handles disk-full errors. The driver begins like the null driver:

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/errno.h>

static int major;
static char *name = "full_driver";
Creating Dynamic Web Portals Using Joomla and WordPress Joomla and WordPress are popular Web content management systems, which provide authoring, collaboration and administration tools designed to allow amateurs to create and manage websites with ease.
Nowadays, every organisation wishes to have an online presence for maximum visibility as well as reach. Industries across different sectors have their own websites with detailed portfolios, so that marketing as well as broadcasting can be integrated very effectively. Web 2.0 applications are quite popular in the global market. With Web 2.0, the applications developed are fully dynamic, so that the website can provide customised results or output to the client. Traditionally, long-term core coding, using different programming or scripting languages like CGI/Perl, Python, Java, PHP, ASP and many others, has been in vogue. But today, excellent applications can be developed in very little time. The major factor behind the adoption of rapid application development (RAD) frameworks is re-usability. By making changes to existing code or by merely reusing applications, development has now become very fast and easy.
Software frameworks
Software frameworks and content management systems (CMS) are entirely different concepts. In the case of CMSs, the reusable modules, plugins and related components are provided with the source code and all that is required is to only plug in or plug out. The frameworks need to be installed and imported on the host machine and then the functions are called. This means that the framework with different classes and functions needs to
be called by the programmer depending upon the module and feature required in the application. As far as user-friendliness is concerned, the CMSs are very easy to use. CMS products can be used and deployed even by those who do not have very good programming skills. A framework can be considered as a model, a structure or simply a programming template that provides classes, events and methods to develop an application. Generally, the software framework is a real or conceptual structure of software intended to serve as a support or guide to build something that expands the structure into something useful. The software framework can be seen as a layered structure, indicating which kind of programs can or should be built and the way they interrelate.
Content Management Systems (CMSs)
Digital repositories and CMSs have a lot of feature overlap, but both systems are unique in terms of their underlying purposes and the functions they fulfill. A CMS for developing Web applications is an integrated application that is used to create, deploy, manage and store content on Web pages. The Web content includes plain or formatted text, embedded graphics in multiple formats, photos, video, audio, as well as code, which can be third-party APIs for interaction with the user.
PHP-based open source frameworks
• Laravel
• Phalcon
• Symfony
• CodeIgniter
• Prado
• Seagull
• Yii
• CakePHP
Digital repositories
An institutional repository refers to the online archive or library for collecting, preserving and disseminating digital copies of the intellectual output of the institution, particularly in the field of research. For any academic institution like a university, it also includes digital content such as academic journal articles. It covers both pre-prints and post-prints, articles undergoing peer review, as well as digital versions of theses and dissertations. It even includes some other digital assets generated in an institution, such as administrative documents, course notes or learning objectives. Depositing material in an institutional repository is sometimes mandated by some institutions.

PHP-based open source CMSs
• Joomla
• Drupal
• WordPress
• Typo3
• Mambo
Joomla CMS
Figure 1: Joomla extensions
Joomla is an award-winning open source CMS written in PHP. It enables the building of websites and powerful online applications. Many aspects, including its user-friendliness and extensible nature, make Joomla one of the most popular Web-based software development CMSs. Joomla is built on the model–view–controller (MVC) Web application framework, which can be used independent of the CMS. Joomla can store data in a MySQL, MS SQL or PostgreSQL database, and includes features like page caching, RSS feeds, printable versions of pages, news flashes, blogs, polls, search and support for language internationalisation. According to reports by Market Wire, New York, as of February 2014, Joomla had been downloaded over 50 million times. Over 7,700 free and commercial extensions are available from the official Joomla Extension Directory, and more are available from other sources. It is supposedly the second most used CMS on the Internet after WordPress. Many websites provide information on installing and maintaining Joomla sites.

Joomla is used across the globe to power websites of all types and sizes:
• Corporate websites or portals
• Corporate intranets and extranets
• Online magazines, newspapers and publications
• E-commerce and online reservation sites
• Sites offering government applications
• Websites of small businesses and NGOs
• Community-based portals
• School and church websites
• Personal or family home pages

Joomla's user base includes:
• The military - http://www.militaryadvice.org/
• US Army Corps of Engineers - http://www.spl.usace.army.mil/cms/index.php
• MTV Networks Quizilla (social networking) - http://www.quizilla.com
• New Hampshire National Guard - https://www.nh.ngb.army.mil/
• United Nations Regional Information Centre - http://www.unric.org
• IHOP (a restaurant chain) - http://www.ihop.com
• Harvard University - http://gsas.harvard.edu
…and many others

The essential features of Joomla are:
• User management
• Media manager
• Language manager
• Banner management
• Contact management
• Polls
• Search
• Web link management
• Content management
• Syndication and newsfeed management
• Menu manager
• Template management
• Integrated help system
• System features
• Web services
• Powerful extensibility
Joomla extensions
Joomla extensions are used to extend the functionality of Joomla-based Web applications. The Joomla extensions for multiple categories and services can be downloaded from http://extensions.joomla.org.
Figure 2: Creating a MySQL user in a Web hosting panel
Installing and working with Joomla
For Joomla installation on a Web server, whether local or hosted, we need to download the Joomla installation package, which ought to be done from the official website, Joomla.org. If Joomla is downloaded from websites other than the official one, there is a risk of viruses or malicious code in the set-up files. Once you click the Download button for the latest stable Joomla version, the installation package will be saved to the local hard disk. Extract it so that it is ready for deployment. Now upload the extracted files and folders to the Web server. The easiest and safest method to upload the Joomla installation files is via FTP. If Joomla is required to be installed live on a specific domain, upload the extracted files to the public_html folder in the online file manager of the domain. If access to Joomla is needed in a sub-folder of a domain (www.mydomain.com/myjoomla), it should be uploaded to the appropriate sub-directory (public_html/myjoomla/). After this step, create a blank MySQL database and assign a user to it with full permissions. A blank database is created because Joomla will automatically create the tables inside that database. Once you have created your MySQL database and user, save the database name, database user name and password just created because, during Joomla installation, you will be asked for these credentials. After uploading the installation files, open the Web browser and navigate to the main domain (http://www.mysite.com), or to the appropriate sub-domain (http://www.mysite.com/joomla), depending upon the location the Joomla installation package was uploaded to. Once done, the first screen of the Joomla Web Installer will open up. Once you fill in all the required fields, press the Next button to proceed with the installation. On the next screen, you will have to enter the necessary information for your MySQL database.
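The blank database and user mentioned above can be created from the MySQL console along these lines (the database name, user name and password here are placeholders, not values from this article):

mysql> CREATE DATABASE joomla_db;
mysql> CREATE USER 'joomla_user'@'localhost' IDENTIFIED BY 'use-a-strong-password';
mysql> GRANT ALL PRIVILEGES ON joomla_db.* TO 'joomla_user'@'localhost';
mysql> FLUSH PRIVILEGES;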
Figure 3: Database configuration panel for setting up Joomla
After all the necessary information has been filled in at all stages, press the Next button to proceed. You will be forwarded to the last page of the installation process. On this page, specify if you want any sample data installed on your server. The second part of the page will show the pre-installation checks. The Web hosting server will check that all Joomla requirements and prerequisites have been met, and you will see a green check after each line. Finally, click the Install button to start the actual Joomla installation. In a few moments, you will be redirected to the last screen of the Joomla Web Installer. On the last screen of the installation process, press the Remove installation folder button. This is required for security reasons; otherwise, the installation will restart every time. Joomla is now ready to be used.
Creating articles and linking them with the menu
After installation, the administrator panel to control the Joomla website is displayed. Here, different modules, plugins and components, along with the HTML contents, can be added or modified.
WordPress CMS
WordPress is another free and open source blogging CMS tool based on PHP and MySQL. The features of WordPress include a specialised plugin architecture with a template system. WordPress is the most popular blogging system in use on the Web, used by more than 60 million websites. It was initially released in 2003 with the objective of providing an easy-to-use CMS for multiple domains. The installation steps for all CMSs are almost the same. The compressed file is extracted and deployed in the public_html folder of the Web server. In the same way, a blank database is created and the credentials are supplied during the installation steps. According to the official declaration of WordPress, this CMS powers more than 17 per cent of the Web, and the figure is rising every day.
Insight The salient features of WordPress are: • Simplicity • Flexibility • Ease of publishing • Publishing tools • User management • Media management • Full standards compliance Figure 4: Administrator login for Joomla • Easy theme system • Can be extended with plugins • Built-in comments • Search engine optimised • Multi-lingual • Easy installation and upgrades • Importers • Strong community of troubleshooters Worldwide users of WordPress include: • FIU College of Engineering and Computing • MTV Newsroom • Sony Music
Figure 5: WYSIWYG editor for creating articles
• Nicholls State University
• Milwaukee School of Engineering
…and many others

By: Dr Gaurav Kumar
The author is the MD of Magma Research & Consultancy Pvt Ltd, Ambala. He is associated with a number of academic institutes, where he delivers lectures and conducts technical workshops on the latest technologies and tools. He can be contacted at [email protected].
Continued from page 51

static int __init full_init(void)
{
	major = register_chrdev(0, name, &full_ops);
	if (major < 0) {
		printk(KERN_INFO "Failed to register driver.");
		return -1;
	}
	return 0;
}

static void __exit full_exit(void)
{
	unregister_chrdev(major, name);
}

module_init(full_init);
module_exit(full_exit);

MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Full driver");

Let us compile and insert the module:

[mickey]$ make
[root]# insmod ./full_driver.ko
[root]# grep "full_driver" /proc/devices
248 full_driver
[root]# mknod /dev/full_driver c 248 0
[root]# echo "Hello" > /dev/full_driver
-bash: echo: write error: No space left on device
If you want to learn more about GNU/Linux device drivers, the Linux kernel's source code is the best place to look. You can browse the kernel's source code at http://lxr.free-electrons.com/. You can also download the latest source code from https://www.kernel.org/. Additionally, there are a few good books available in the market, like 'Linux Kernel Development' (3rd Edition) by Robert Love, and 'Linux Device Drivers' (3rd Edition), which is a free book; you can download it from http://lwn.net/Kernel/LDD3/. These books also explain kernel debugging tools and techniques.

By: Narendra Kangralkar
The author is a FOSS enthusiast and loves exploring anything related to open source. He can be reached at [email protected]
Compile a GPIO Control Application and Test It On the Raspberry Pi GPIO is the acronym for General Purpose Input/Output. The role played by GPIO drivers is to handle I/O requests to read from or write to groups of GPIO pins. Let's try and compile a GPIO driver.
This article goes deep into what really goes on inside an OS while managing and controlling the hardware. The OS hides all the complexities, carries out all the operations and gives end users their requirements through the UI (user interface). GPIO can be considered as the simplest of all the peripherals to work on any board. A small GPIO driver would be the best medium to explain what goes on under the hood. A good embedded systems engineer should, at the very least, be well versed in the C language. Even if the following demonstration can't be replicated (due to the unavailability of hardware or software resources), a careful read through this article will give readers an idea of the underlying processes.
Prerequisites to perform this experiment
• C language (high priority)
• Raspberry Pi board (any model)
• BCM2835-ARM-peripherals datasheet (just Google for it!)
• Jumper (female-to-female)
• SD card (with bootable Raspbian image)

Here's a quick overview of what device drivers are. As the name suggests, they are pieces of code that drive your device. One can even consider them a part of the OS (in this case, Linux) or a mediator between your hardware and the UI. A basic understanding of how device drivers actually work is required; so do learn more about that in case you need to. Let's move forward to the GPIO driver assuming that one knows the basics of device drivers (like inserting/removing the driver from the kernel, probe functionality, etc). When you insert (insmod) this driver, it will register itself as a platform driver with the OS. The platform device is also registered in the same driver. Contrary to this, registering the platform device in the board file is a good practice. A peripheral can be termed a platform device if it is a part of the SoC (system on chip). Once the driver is inserted, the registration (platform device and platform driver) takes place,
after which the probe function gets called.
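As a rough sketch, the registration just described would look something like the following (the function names are assumptions for illustration; only the "bcm-gpio" name string is taken from the article):

#include <linux/module.h>
#include <linux/platform_device.h>

/* Hypothetical probe; the real driver maps the GPIO registers here. */
static int bcm_gpio_probe(struct platform_device *pdev)
{
	return 0;
}

static struct platform_driver bcm_gpio_driver = {
	.probe = bcm_gpio_probe,
	.driver = {
		.name = "bcm-gpio",
	},
};

static int __init bcm_gpio_init(void)
{
	/* The device is registered in the driver itself, as described above. */
	platform_device_register_simple("bcm-gpio", -1, NULL, 0);
	return platform_driver_register(&bcm_gpio_driver);
}
module_init(bcm_gpio_init);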
Generic information
Probe in the driver gets called whenever a device's (already registered) name matches with the name of your platform driver (here, it is bcm-gpio). The second major functionality is ioctl which acts as a bridge between the application space and your driver. In technical terms, whenever your application invokes this (ioctl) system call, the call will be routed to this function of your driver. Once the call from the application is in your driver, you can process or provide data inside the driver and can respond to the application. The SoC datasheet, i.e., BCM2835-ARM-Peripherals, plays a pivotal role in building up this driver. It consists of all the information pertaining to the peripherals supported by your SoC. It exposes all the registers relevant to a particular peripheral, which is where the key is. Once you know what registers of a peripheral are to be configured, half the job is done. Be cautious about which address has to be used to access these peripherals.
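On the driver side, that bridge has roughly the following shape (a sketch; the command macro and request structure are hypothetical, not the article's actual definitions):

#include <linux/fs.h>
#include <linux/uaccess.h>

/* Hypothetical ioctl command and payload; the real driver defines its own in gpio.h. */
struct gpio_req { int pin; int value; };
#define GPIO_IOC_SET _IOW('g', 1, struct gpio_req)

static long bcm_gpio_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
{
	struct gpio_req req;

	switch (cmd) {
	case GPIO_IOC_SET:
		/* Copy the request from the application into kernel space. */
		if (copy_from_user(&req, (void __user *)arg, sizeof(req)))
			return -EFAULT;
		/* ...write to the GPIO set/clear registers here... */
		return 0;
	}
	return -ENOTTY;
}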
Figure 1: System layout
Types of addressing modes
There are three kinds of addressing modes: virtual addressing, physical addressing and system bus addressing. To learn the details, turn to page 6 of the datasheet. The macro __io_address implemented in the probe function of the driver returns the virtual address of the physical address passed as an argument. For GPIO, the physical address is 0x20200000 (0x20000000 + 0x200000), where 0x20000000 is the base address and 0x200000 is the peripheral offset. Turn to page 5 of the datasheet for more details. Any guesses on which address the macro __io_address would return? The address returned by this macro can then be used for accessing (reading or writing) the relevant peripheral registers. The GPIO control application is analogous to a simple C program with an additional ioctl call. This call is capable of passing data from the application layer to the driver layer with an appropriate command. I have restricted the use of other GPIOs as they are not exposed to headers like the others. So, modify the application as per your requirements. More information on this peripheral is available from page 89 of the datasheet. In this code, I have just added functionality for setting or clearing a GPIO. Another interesting feature is that, by configuring the appropriate registers, you can configure GPIOs as interrupt pins. So whenever a pulse is routed to that pin, the processor, i.e., the ARM, is interrupted and the corresponding handler registered for that interrupt is invoked to handle and process it. This interesting aspect will be taken up in later articles.
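From the application side, the corresponding call is an ordinary open() plus ioctl() (again a sketch reusing the hypothetical GPIO_IOC_SET command from above; the real gpio_app.c parses the -n/-d/-v/-s flags and defines its own commands):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>
#include <unistd.h>

struct gpio_req { int pin; int value; };
#define GPIO_IOC_SET _IOW('g', 1, struct gpio_req)

int main(void)
{
	struct gpio_req req = { .pin = 24, .value = 1 };
	int fd = open("/dev/bcm-gpio", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Ask the driver to drive GPIO 24 high. */
	if (ioctl(fd, GPIO_IOC_SET, &req) < 0)
		perror("ioctl");
	close(fd);
	return 0;
}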
Compilation of the GPIO device driver
There are two ways in which you can compile your driver.

Cross compilation on the host PC
In the first method, one needs to have certain packages downloaded. These are:
• ARM cross-compiler
• Raspbian kernel source (the kernel version must match the one running on your Pi; otherwise, the driver will not load onto the OS due to the version mismatch)

Local compilation on the target board
In the second method, one needs to install certain packages on the Pi. Go to the following link and follow the steps indicated:
http://stackoverflow.com/questions/20167411/how-to-compile-a-kernel-module-for-raspberry-pi
Or, follow the third answer at this link, the starting line of which says, "Here are the steps I used to build the 'Hello World' kernel module on Raspbian." I went ahead with the second method as it was more straightforward.

Testing on your Raspberry Pi
Boot up your Raspberry Pi using minicom and you will see the console that resembles mine (Figure 2).

Figure 2: Console
Figure 3: dmesg output
Run 'sudo dmesg -C'. (This command cleans up all the kernel boot print logs.)
Run 'sudo make'. (This command compiles the GPIO driver. Do this only for the second method.)
Run 'sudo insmod gpio_driver.ko'. (This command inserts the driver into the OS.)
Run 'dmesg'. You can see the prints from the GPIO driver and the major number allocated to it, as shown in Figure 3. (The major number plays a unique role in identifying the specific driver with which a process from the application space wants to communicate, whereas the minor number is used to recognise the hardware.)
Run 'sudo mknod /dev/bcm-gpio c major-num 0'. (The 'mknod' command creates a node in the /dev directory; 'c' stands for character device and '0' is the minor number.)
Run 'sudo gcc gpio_app.c -o gpio_app'. (Compile the GPIO control application.)

Figure 4: R-pi GPIO

Now let's test our GPIO driver and application. To verify whether our driver is indeed communicating with GPIO, short pins 25 and 24 (one can use other available pins like 17, 22 and 23 as well, but make sure that they aren't mixed up with any other peripheral) using the female-to-female jumper (Figure 4). The default values of both the pins will be 0. To confirm the default values, run the following commands:

sudo ./app -n 25 -g 1
This will be the output:

The output value of GPIO 25 = 0

Now run the following command:

sudo ./app -n 24 -g 1
This will again be the output:

The output value of GPIO 24 = 0

That's it. It's verified (see Figure 5). Now, as the GPIO pins are shorted, if we output 1 to 24 then it would be the input value of 25 and vice versa. To test this, run:

sudo ./app -n 24 -d 1 -v 1 -s 1
Figure 5: Output showing GPIO 24=0
Figure 6: Output showing GPIO 25=1
This command will drive the value of GPIO 24 to 1, which in turn will be routed to GPIO 25. To verify the value of GPIO 25, run:

sudo ./app -n 25 -g 1
This will give the output: the output value of GPIO 25 = 1 (see Figure 6). One can also connect any external device or a simple LED (through a resistor) to the GPIO pin and test its output.

Arguments passed to the application through the command line are:
-n : GPIO number
-d : GPIO direction (0 - IN or 1 - OUT)
-v : GPIO value (0 or 1)
-s/g : set/get GPIO

The files are:
gpio_driver.c : GPIO driver file
gpio_app.c : GPIO control application
gpio.h : GPIO header file
Makefile : file to compile the GPIO driver

After conducting this experiment, some curious folk may have questions like: Why does one have to use virtual addresses to access GPIO? How does one determine the virtual address from the physical address? We will discuss the answers to these in later articles.

By: Sumeet Jain
The author works at eInfochips as an embedded systems engineer. You can reach him at [email protected]
Figure 3: Custom web page of Apache Web Server1
Figure 4: Custom web page of Apache Web Server2
Pound server
Installation and configuration of Pound gateway server
Note: The Pound server performs the required load balancing of the Web servers.

Figure 1: Load balancing using the Pound server

[root@poundgateway ~]# yum clean all
Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
Cleaning repos:
Cleaning up Everything
[root@poundgateway ~]#
Figure 2: Default page
[root@poundgateway ~]# yum update all
Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
Setting up Update Process
No Match for argument: all
No package all available.
No Packages marked for Update
[root@poundgateway ~]#
Start the service:

[root@apachewebsever1 ~]# service httpd start
Starting httpd: [ OK ]
[root@apachewebsever1 ~]#

Start the service at boot time:

[root@apachewebsever1 ~]# chkconfig httpd on
[root@apachewebsever1 ~]# chkconfig --list httpd
httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@apachewebsever1 ~]#

The directory location of the Apache HTTP Service is /etc/httpd/. Figure 2 gives the default test page for Apache Web Server on Red Hat Enterprise Linux. Now, let's create a Web page index.html at /var/www/html. Restart the Apache Web Service to bring the changes into effect. The index.html Web page will be displayed (Figure 3). Repeat the above steps for Web Server2, ApacheWebServer2.linuxrocks.org, except for the following:
• Set the IP address to 192.168.10.32
• The contents of the custom Web page 'index.html' should be '…ApacheWebServer2…', as shown in Figure 4.

Then, check the default directory of YUM. By default, the repo file 'rhel-source.repo' is disabled. To enable it, edit the file 'rhel-source.repo' and change the value enabled = 0 to enabled = 1. For now, you can leave this repository disabled.
Now, download the 'epel-release-6-8.noarch.rpm' package and install it.
Important notes on EPEL
1. EPEL stands for Extra Packages for Enterprise Linux.
2. EPEL is not a part of RHEL but provides a lot of open source packages for major Linux distributions.
3. EPEL packages are maintained by the Fedora team and are fully open source, with no core duplicate packages and no compatibility issues. They are to be installed using the YUM utility.

The link to download the EPEL release for RHEL 6 (32-bit) is:
http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
And for 64-bit, it is:
http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Here, epel-release-6-8.noarch.rpm is kept at /opt. Go to the /opt directory and change the permission of the files:
As observed, epel.repo and epel-testing.repo are the newly added repo files. No changes are made in epel.repo and epel-testing.repo. Move the default redhat.repo and rhel-source.repo to the backup location. Now, connect the server to the Internet and, using the yum utility, install Pound:

[root@PoundGateway ~]# yum install Pound*

This will install Pound and Pound-debuginfo, and will also install the required dependencies along with it. To verify Pound's installation, type:

[root@PoundGateway ~]# rpm -qa Pound
Pound-2.6-2.el6.i686
[root@PoundGateway ~]#

The location of the Pound configuration file is /etc/pound.cfg. You can view the default Pound configuration file by using the command given below:

[root@PoundGateway ~]# cat /etc/pound.cfg

Make the changes to the Pound configuration file as shown in the code snippet given below:
• We will comment out the section related to 'ListenHTTPS', as we do not need HTTPS for now.
• Add the IP address 192.168.10.30 under the 'ListenHTTP' section.
• Add the IP addresses 192.168.10.31 and 192.168.10.32 with Port 80 under the 'Service Backend' section, where 192.168.10.30 is for the Pound server, 192.168.10.31 for Web Server1 and 192.168.10.32 for Web Server2.

The edited Pound configuration file is:

[root@PoundGateway ~]# cat /etc/pound.cfg
#
# Default pound.cfg
#
# Pound listens on port 80 for HTTP and port 443 for HTTPS
# and distributes requests to 2 backends running on localhost.
# see pound(8) for configuration directives.
# You can enable/disable backends with poundctl(8).
#
User "pound"
Group "pound"
Control "/var/lib/pound/pound.cfg"

ListenHTTP
	Address 192.168.10.30
	Port 80
End
Service
	BackEnd
		Address 192.168.10.31
		Port 80
	End
	BackEnd
		Address 192.168.10.32
		Port 80
	End
End
[root@PoundGateway ~]#

Now, start the Pound service:

[root@PoundGateway ~]# service pound start
Starting Pound: starting... [OK]
[root@PoundGateway ~]#

To configure the service to be started at boot time, type:

[root@PoundGateway ~]# chkconfig pound on
[root@PoundGateway ~]# chkconfig --list pound
pound 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@PoundGateway ~]#

Observation
Now open a Web browser and access the URL http://192.168.10.30. It displays the Web page from Web Server1 (ApacheWebServer1.linuxrocks.org). Refresh the page, and it will display the Web page from Web Server2 (ApacheWebServer2.linuxrocks.org). Keep refreshing the Web page; it will flip from Web Server1 to Web Server2, back and forth. We have now configured a system where the load on the Web server is being balanced between two physical servers.
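Instead of refreshing a browser, you can also watch the round-robin alternation from a shell on any client (a quick sketch using the custom pages created earlier; the output should alternate between the two server strings):

$ for i in 1 2 3 4; do curl -s http://192.168.10.30/ | grep -o 'ApacheWebServer[12]'; done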
Why We Need to Handle Bounced Emails Bounced emails are the bane of marketing campaigns and mailing lists. In this article, the author explains the nature of bounce messages and describes how to handle them.
Wikipedia defines a bounce email as a system-generated failed delivery status notification (DSN) or a non-delivery report (NDR), which informs the original sender about a delivery problem. When that happens, the original email is said to have bounced. Broadly, bounces are categorised into two types:

A hard/permanent bounce: This indicates that there exists a permanent reason for the email not to get delivered. These are valid bounces, and can be due to the non-existence of the email address, an invalid domain name (DNS lookup failure), or the email provider blacklisting the sender/recipient email address.

A soft/temporary bounce: This can occur due to various reasons at the sender or recipient level. It can arise from a network failure, the recipient mailbox being full (quota exceeded), the recipient having turned on a 'vacation reply', the local Message Transfer Agent (MTA) not responding or being badly configured, and a whole lot of other reasons. Such bounces cannot be used to determine the status of a failing recipient, and therefore need to be sorted out effectively from our bounce processing.

To understand this better, consider a sender alice@example.com, sending an email to bob@somewhere.com. She mistyped the recipient's address as bub@somewhere.com. The email message will have a default envelope sender, set by the local MTA running there (mta.example.com), or by the PHP script, to alice@example.com. Now, mta.example.com looks up the DNS MX records for somewhere.com, chooses a host from that list, gets its IP address and tries to connect to the MTA running on somewhere.com, port 25, via an SMTP connection. Now, the MTA of somewhere.com is in trouble as it can't find the user in its local user table. mta.somewhere.com responds to example.com with an SMTP failure code, stating that the user lookup failed (code: 550). It's time for mta.example.com to generate a bounce email to the address in the return-path email header (the envelope sender), with a message that the email to bub@somewhere.com failed. That's a bounce email. Properly maintained mailing lists will have every email passing through them branded with a generic email ID, say mails@somewhere.com, as the envelope sender, and bounces to that will be wasted if left unhandled.
VERP (Variable Envelope Return-Path)
In the above example, you will have noticed that the delivery failure message was sent back to the address of the Return-Path header in the original email. If there is a key to handling the bounced emails, it comes from the Return-Path header. The idea of VERP is to safely encode the recipient details, too, somehow in the return-path, so that we can parse the received bounce effectively and extract the failing recipient from it. We specifically use the Return-Path header, as that's the only header that is not going to get tampered with by the intervention of a number of MTAs. Typically, an email from Alice to Bob in the above example will have headers like the following:

Return-Path: <alice@example.com>
From: alice@example.com
To: bob@somewhere.com

Now, we create a custom return-path header by encoding the 'To' address as a combination of prefix-delim-hash. The hash can be generated by the PHP hmac functions, so that the new email headers become something like what follows:

Return-Path: <bounces-bob.somewhere.com-{hmac}@example.com>
From: alice@example.com
To: bob@somewhere.com

Now, the bounces will get directed to our new return-path and can be handled to extract the failing recipient.

Generating a VERP address
The task now is to generate a secure return-path, which is not bulky, and cannot be mimicked by an attacker. A very simple VERP address for a mail to [email protected] will be:

[email protected]

Since it can be easily exploited by an attacker, we need to also include a hash generated with a secret key, along with the address. Please note that the secret key is only visible to the sender and in no way to the receiver or an attacker. Therefore, a standard VERP address will be of the form:

bounces-{ prefix }-{ hash(prefix, secretkey) }@sender_domain

PHP has its own hash-generating functions that can make things easier. Since PHP's hmacs cannot be decoded, but only compared, the idea will be to adjust the recipient email ID in the prefix part of the VERP address along with its hash. On receipt, the prefix and the hash can be compared to validate the integrity of the bounce. We will string-replace the '@' in the recipient email ID to attach it along with the hash. You need to edit your email headers to generate the custom return-path, and make sure you pass it as the fifth argument to the php::mail() function to tell your exim MTA to set it as the default envelope sender:

$to = "[email protected]";
$from = "[email protected]";
$subject = "This is the message subject";
$body = 'This is the message body';

/** Altering the return path */
$alteredReturnPath = self::generateVERPAddress( $to );
$headers[ 'Return-Path' ] = $alteredReturnPath;
$envelopeSender = ' -f ' . $alteredReturnPath;

/** We need to produce a return address of the form
 * bounces-{ prefix }-{ hash(prefix) }@sender_domain, where prefix can be
 * string_replaced( to_address )
 */
public function generateVERPAddress( $to ) {
	// The body below is a reconstruction along the lines described above:
	// '@' in the recipient address becomes '.' to form the prefix, which is
	// then signed with the same secret key used by the bounce processor.
	$prefix = str_replace( '@', '.', $to );
	$hash = hash_hmac( $hashAlgorithm, $prefix, $hashSecretKey );
	return 'bounces-' . $prefix . '-' . $hash . '@' . $senderDomain;
}
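To make the scheme concrete, here is a hypothetical call and the shape of the address it produces (the recipient and domain are made-up examples, not values from the article):

$verpAddress = self::generateVERPAddress( 'bob@example.org' );
// Shape of the result: bounces-bob.example.org-{hex hmac}@sender_domain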
Including security features is yet another concern, and can be done effectively by adding the current timestamp value (in UNIX time) to the VERP prefix. This will make it easy for the bounce processor to decode the email delivery time, and adds protection against brute-forcing the hash. Decoding and comparing the value of the timestamp with the current timestamp will also help to understand how old the bounce is. Therefore, a more secure VERP address will look like what follows:

bounces-{ to_address }-{ delivery_timestamp }-{ hash( to_address-delivery_timestamp, secretKey ) }@somewhere.com
The current timestamp can be generated in PHP by:

$current_timestamp = time();
There's still work to do before the email is sent, as the local MTA at example.com may try to set its own custom return-path for messages it transmits. In the example below, we adjust the exim configuration on the MTA to override this behaviour.

$ sudo nano /etc/exim4/exim4.conf

# Do not remove Return Path header
return_path_remove = false
# Remove the field errors_to from the current router configuration.
# This will enable exim to use the fifth param of php::mail(), prefixed
# by -f, as the default envelope sender.

Every email ID will correspond to a user_id field in a standard user database, and this can be used instead of an email ID to generate a tidy and easy-to-look-up VERP hash.

Redirect your bounces to a PHP bounce-handling script
We now have a VERP address being generated on every sent email, and it will have all the necessary information we need securely embedded in it. The remaining part of our task is to capture and validate the bounces, which would require redirecting the bounces to a processing PHP script. By default, every bounce message will reach all the way back to the MTA that sent it, say mx.example.com, as its return-path gets set to [email protected], with or without VERP. The advantage of using VERP is that we will have the encoded failing address, too, somewhere in the bounce. To get that out from the bounce, we can HTTP POST the email via curl to the bounce processing script, say localhost/handleBounce.php, using an exim pipe transport, as follows:

$ sudo nano /etc/exim4/exim4.conf

# suppose you have a recieve_all router that will accept all the emails to your domain.
# this can be the system_alias router too
recieve_all:
	driver = accept
	transport = pipe_transport

# Edit the pipe_transport
pipe_transport:
	driver = pipe
	command = /usr/bin/curl http://localhost/handleBounce.php --data-urlencode "email@-"
	group = nogroup
	# adds Return-Path header for incoming mail
	return_path_add
	# adds the bounce timestamp
	delivery_date_add
	# copies the return path to the To: header of the bounce
	envelope_to_add

The email can be made use of in handleBounce.php by using a simple POST request:

$email = $_POST[ 'email' ];
Decoding the failing recipient from the bounce email
Now that the mail is successfully in the PHP script, our task will be to extract the failing recipient from the encoded email headers. Thanks to exim configurations like envelope_to_add in the pipe transport (above), the VERP address gets pasted to the To header of the bounce email, and that's the place to look for the failing recipient. Some common regex functions to extract the headers are:

function extractHeaders( $email ) {
	$bounceHeaders = array();
	$lineBreaks = explode( "\n", $email );
	foreach ( $lineBreaks as $lineBreak ) {
		if ( preg_match( "/^To: (.*)/", $lineBreak, $toMatch ) ) {
			$bounceHeaders[ 'to' ] = $toMatch[1];
		}
		if ( preg_match( "/^Subject: (.*)/", $lineBreak, $subjectMatch ) ) {
			$bounceHeaders[ 'subject' ] = $subjectMatch[1];
		}
		if ( preg_match( "/^Date: (.*)/", $lineBreak, $dateMatch ) ) {
			$bounceHeaders[ 'date' ] = $dateMatch[1];
		}
		if ( trim( $lineBreak ) == "" ) {
			// Empty line denotes that the header part is finished
			break;
		}
	}
	return $bounceHeaders;
}
After extracting the headers, we need to decode the original failed-recipient email ID from the VERP-hashed $bounceHeaders[ 'to' ], which involves more or less the reverse of what we did earlier. This would help us validate the bounced email too.

/**
 * Considering the received $bounceHeaders[ 'to' ] is of the form
 * bounces-{ to_address }-{ delivery_timestamp }-{ hash( to_address-delivery_timestamp, secretKey ) }@somewhere.com
 */
$hashedTo = $bounceHeaders[ 'to' ]; // This will hold the VERP address
$failedRecipient = self::extractToAddress( $hashedTo );

function extractToAddress( $hashedTo ) {
	$timeNow = time();
	// This will help us get the address part of address@domain
	preg_match( '~(.*?)@~', $hashedTo, $hashedSlice );
	// This will help us cut the address part at the symbol '-'
	$hashedAddressPart = explode( '-', $hashedSlice[1] );
	// Now we have the prefix in $hashedAddressPart[0-2] and the hash in $hashedAddressPart[3]
	$verpPrefix = $hashedAddressPart[0] . '-' . $hashedAddressPart[1] . '-' . $hashedAddressPart[2];

	// Extracting the bounce time.
	$bounceTime = $hashedAddressPart[2];

	// Valid time for a bounce to happen. The values can be subtracted to find out
	// the time in between, and even used to set an accept time, say 3 days.
	if ( $bounceTime < $timeNow ) {
		if ( hash_hmac( $hashAlgorithm, $verpPrefix, $hashSecretKey ) === $hashedAddressPart[3] ) {
			// Bounce is valid, as the comparisons return true.
			$to = str_replace( '.', '@', $hashedAddressPart[1] );
			return $to;
		}
	}
}
Taking action on the failing recipient
Now that you have got the failing recipient, the task would be to record the bounce history and take relevant action. A recommended approach would be to maintain a bounce records table in the database, which would store the failed recipient, bounce timestamp and failure reason. This can be inserted into the database on every bounce processed, and can be as simple as:

/** extractHeaders is defined above */
$bounceHeaders = self::extractHeaders( $email );
$failureReason = $bounceHeaders[ 'subject' ];
$bounceTimestamp = $bounceHeaders[ 'date' ];
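A minimal sketch of such an insert, assuming a PDO connection in $db and a bounce_records(email, bounce_timestamp, failure_reason) table (both names are ours, for illustration only):

$stmt = $db->prepare(
	'INSERT INTO bounce_records (email, bounce_timestamp, failure_reason)
	 VALUES (:email, :ts, :reason)'
);
$stmt->execute( array(
	':email'  => $failedRecipient,
	':ts'     => $bounceTimestamp,
	':reason' => $failureReason,
) );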
Simple tests to differentiate between a permanent and temporary bounce
One of the greatest challenges while writing a bounce processor is to make sure it handles only the right bounces, or the permanent ones. A bounce processing script that reacts to every single bounce can lead to mass unsubscription of users from the mailing list and a lot of havoc. Exim helps us here in a great way by including an additional 'X-Failed-Recipients:' header in a permanent bounce email. This key can be checked for in the regex function we wrote earlier, and action can be taken only if it exists.

/**
 * Check if the bounce corresponds to a permanent failure;
 * can be added to the extractHeaders() function above
 */
function isPermanentFailure( $email ) {
	$lineBreaks = explode( "\n", $email );
	foreach ( $lineBreaks as $lineBreak ) {
		if ( preg_match( "/^X-Failed-Recipients: (.*)/", $lineBreak, $permanentFailMatch ) ) {
			$bounceHeaders[ 'x-failed-recipients' ] = $permanentFailMatch;
			return true;
		}
	}
	// No X-Failed-Recipients header found; treat it as a temporary bounce.
	return false;
}
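Putting the pieces together, the processing script can then gate all action on this check (a sketch; unsubscribeUser() is a hypothetical helper standing in for whatever action your list software takes):

if ( self::isPermanentFailure( $email ) ) {
	$bounceHeaders = self::extractHeaders( $email );
	$failedRecipient = self::extractToAddress( $bounceHeaders[ 'to' ] );
	if ( $failedRecipient ) {
		// Act only on validated, permanent failures.
		self::unsubscribeUser( $failedRecipient );
	}
}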
Even today, we have a number of large organisations that send more than 100 emails every minute and still have all bounces directed to /dev/null. This results in far too many emails being sent to undeliverable addresses and eventually leads to frequent blacklisting of the organisations’ mail server by popular providers like Gmail, Hotmail, etc. If bounces are directed to an IMAP maildir, the regex functions won't be necessary, as the PHP IMAP library can parse the headers readily for you. By: Tony Thomas The author is currently doing his Google SoC project for Wikimedia on handling email bounces effectively. You can contact the author at [email protected]. Github: github.com/tonythomas01
Boost the Performance of CloudStack with Varnish In this article, the author demonstrates how the performance of CloudStack can dramatically improve by using Varnish. He does so by drawing upon his practical experience with administering SaaS servers at his own firm.
The current cloud inventory for one of the SaaS applications at our firm is as follows:
• Web server: CentOS 6.4 + NGINX + MySQL + PHP + Drupal
• Mail server: CentOS 6.4 + Postfix + Dovecot + SquirrelMail

A quick test on Pingdom showed a load time of 3.92 seconds for a page size of 2.9MB with 105 requests. Tests using Apache Bench (ab -c1 -n500 http://www.bookingwire.co.uk/) yielded almost the same figures—a mean response time of 2.52 seconds. We wanted to improve the page load times by caching the content upstream, scaling the site to handle much greater HTTP workloads, and implementing a failsafe mechanism.

The first step was to handle all incoming HTTP requests from anonymous users that were loading our Web server. Since anonymous users are served content that seldom changes, we wanted to prevent these requests from reaching the Web server so that its resources would be available to handle the requests from authenticated users. Varnish was our first choice to handle this. Our next concern was to find a mechanism to handle the SSL requests, mainly on the sign-up pages, where we had interfaces to PayPal. Our aim was to include a second Web server that handled a portion of the load, and we wanted to configure Varnish to distribute HTTP traffic using a round-robin mechanism between these two servers. Subsequently, we planned on configuring Varnish in such a way that even if the Web servers were down, the system would continue to serve pages. During the course of this exercise we documented our experiences, and that's what you're reading about here.
A word about Varnish
Varnish is a Web application accelerator or ‘reverse proxy’. It’s installed in front of the Web server to handle HTTP requests. This way, it speeds up the site and improves the performance significantly. In some cases, it can improve the performance of a site by 300 to 1000 times. It does this by caching the Web pages and when visitors come to the site, Varnish serves the cached pages rather than requesting the Web server for it. Thus the load on the Web server reduces. This method improves the site’s performance and scalability. It can also act as a failsafe method if the Web server goes down because Varnish will continue to serve the cached pages in the absence of the Web server. With that said, let’s begin by installing Varnish on a VPS, and then connect it to a single NGINX Web server. Then let’s add another NGINX Web server so that we can implement a failsafe mechanism. This will accomplish the performance goals that we stated. So let’s get started. For the rest of the article, let’s assume that you are using the Centos 6.4 OS. However, we have provided information for Ubuntu users wherever we felt it was necessary.
Enable the required repositories
First enable the appropriate repositories. For CentOS, Varnish is available from the EPEL repository. Add this repository to your repos list, but before you do so, you'll need to import the GPG keys. So open a terminal and enter the following commands:

[root@bookingwire sridhar]# wget https://fedoraproject.org/static/0608B895.txt
[root@bookingwire sridhar]# mv 0608B895.txt /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[root@bookingwire sridhar]# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[root@bookingwire sridhar]# rpm -qa gpg*
gpg-pubkey-c105b9de-4e0fd3a3
Figure 1: Pingdom result

After importing the GPG keys, you can enable the repository. To verify if the new repositories have been added to the repo list, run the following command and check the output to see if the repository has been added:

[root@bookingwire sridhar]# yum repolist

If you happen to use an Ubuntu VPS, then you should use the following commands to enable the repositories:

[root@bookingwire sridhar]# wget http://repo.varnish-cache.org/debian/GPG-key.txt
[root@bookingwire sridhar]# apt-key add GPG-key.txt
[root@bookingwire sridhar]# echo "deb http://repo.varnish-cache.org/ubuntu/ precise varnish-3.0" | sudo tee -a /etc/apt/sources.list
[root@bookingwire sridhar]# sudo apt-get update

Installing Varnish
Once the repositories are enabled, we can install Varnish:

[root@bookingwire sridhar]# yum -y install varnish

On Ubuntu, you should run the following command:

[root@bookingwire sridhar]# sudo apt-get install varnish

After a few seconds, Varnish will be installed. Let's verify the installation before we go further. In the terminal, enter the following command—the output should contain the lines that follow the input command (we have reproduced only a few lines for the sake of clarity):

[root@bookingwire sridhar]# yum info varnish
Installed Packages
Name    : varnish
Arch    : i686
Version : 3.0.5
Release : 1.el6
Size    : 1.1 M
Repo    : installed

That looks good; so we can be sure that Varnish is installed. Now, let's configure Varnish to start up on boot. In case you have to restart your VPS, Varnish will be started automatically.

[root@bookingwire sridhar]# chkconfig --level 345 varnish on

Figure 2: Apache Bench result

Having done that, let's now start Varnish:

[root@bookingwire sridhar]# service varnish start

We have now installed Varnish and it's up and running. Let's configure it to cache the pages from our NGINX server.
Basic Varnish configuration
The Varnish configuration file is located in /etc/sysconfig/varnish for CentOS and /etc/default/varnish for Ubuntu. Open the file in your terminal using the nano or vim text editors. Varnish provides us three ways of configuring it. We prefer Option 3. So for our 2GB server, the configuration steps are as shown below (the lines with comments have been stripped off for the sake of clarity; the options below the -a flag follow the stock sysconfig layout):

NFILES=131072
MEMLOCK=82000
RELOAD_VCL=1
VARNISH_VCL_CONF=/etc/varnish/default.vcl
VARNISH_LISTEN_PORT=80,:443
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_MIN_THREADS=50
VARNISH_MAX_THREADS=1000
VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
VARNISH_STORAGE_SIZE=1G
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"
VARNISH_TTL=120
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
	-f ${VARNISH_VCL_CONF} \
	-T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
	-t ${VARNISH_TTL} \
	-p thread_pool_min=${VARNISH_MIN_THREADS} \
	-p thread_pool_max=${VARNISH_MAX_THREADS} \
	-u varnish -g varnish \
	-S ${VARNISH_SECRET_FILE} \
	-s ${VARNISH_STORAGE}"
The first line, when substituted with the variables, will read -a :80,:443 and instructs Varnish to serve all requests made on ports 80 and 443. We want Varnish to serve all HTTP and HTTPS requests. To set the thread pools, first determine the number of CPU cores that your VPS uses and then update the directives:

[root@bookingwire sridhar]# grep processor /proc/cpuinfo
processor : 0
processor : 1
This means you have two cores. The formula to use is:

-p thread_pools=<Number of CPU cores> \
-p thread_pool_min=<800 / Number of CPU cores> \
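For the two-core VPS detected above, for instance, the formula works out to the following values (800 divided across two pools):

-p thread_pools=2 \
-p thread_pool_min=400 \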
The -s ${VARNISH_STORAGE} translates to -s malloc,1G after variable substitution, and is the most important directive. This allocates 1GB of RAM for exclusive use by Varnish. You could also specify -s file,/var/lib/varnish/varnish_storage.bin,10G, which tells Varnish to use the file caching mechanism on the disk and that 10GB has been allocated to it. Our suggestion is that you should use the RAM.
Configure the default.vcl file
The default.vcl file is where you will have to make most of the configuration changes in order to tell Varnish about your Web servers, assets that shouldn't be cached, etc. Open the default.vcl file in your favourite editor:

[root@bookingwire sridhar]# nano /etc/varnish/default.vcl
Since we expect to have two NGINX servers running our application, we want Varnish to distribute the HTTP requests between these two servers. If, for any reason, one of the servers fails, then all requests should be routed to the healthy server. To do this, add the following to your default.vcl file:

backend bw1 {
	.host = "146.185.129.131";
	.probe = {
		.url = "/google0ccdbf1e9571f6ef.html";
		.interval = 5s;
		.timeout = 1s;
		.window = 5;
		.threshold = 3;
	}
}
backend bw2 {
	.host = "37.139.24.12";
	.probe = {
		.url = "/google0ccdbf1e9571f6ef.html";
		.interval = 5s;
		.timeout = 1s;
		.window = 5;
		.threshold = 3;
	}
}
backend bw1ssl {
	.host = "146.185.129.131";
	.port = "443";
	.probe = {
		.url = "/google0ccdbf1e9571f6ef.html";
		.interval = 5s;
		.timeout = 1s;
		.window = 5;
		.threshold = 3;
	}
}
backend bw2ssl {
	.host = "37.139.24.12";
	.port = "443";
	.probe = {
		.url = "/google0ccdbf1e9571f6ef.html";
		.interval = 5s;
		.timeout = 1s;
		.window = 5;
		.threshold = 3;
	}
}

director default_director round-robin {
	{ .backend = bw1; }
	{ .backend = bw2; }
}
director ssl_director round-robin {
	{ .backend = bw1ssl; }
	{ .backend = bw2ssl; }
}

sub vcl_recv {
	if (server.port == 443) {
		set req.backend = ssl_director;
	} else {
		set req.backend = default_director;
	}
}
You might have noticed that we have used public IP addresses, since we had not enabled private networking within our servers. You should define the 'backends', one each for the type of traffic you want to handle. Hence, we have one set to handle HTTP requests and another to handle the HTTPS requests. It's a good practice to perform a health check to see if the NGINX Web servers are up. In our case, we kept it simple by checking if the Google webmaster file was present in the document root. If it isn't present, then Varnish will not include the Web server in the round-robin league and won't redirect traffic to it:

.probe = {
	.url = "/google0ccdbf1e9571f6ef.html";
The above command checks the existence of this file at each backend. You can use this to take an NGINX server out intentionally, either to update the version of the application or to run scheduled maintenance checks. All you have to do is rename this file so that the check fails! In spite of our best efforts to keep our servers sterile, there are a number of reasons that can cause a server to go down. Two weeks back, we had one of our servers go down, taking more than a dozen sites with it, because the master boot record of CentOS was corrupted. In such cases, Varnish can handle the incoming requests even if your Web server is down. The NGINX Web server sets an expires header (HTTP 1.0) and the max-age (HTTP 1.1) for each page that it serves. If set, the max-age takes precedence over the expires header. Varnish is designed to request the backend Web servers for new content every time the content in its cache goes stale. However, in a scenario like the one we faced, it's impossible for Varnish to obtain fresh content. In this case, setting the 'grace' in the configuration file allows Varnish to serve (stale) content even if the Web server is down. To have Varnish serve the (stale) content, add the following to your default.vcl:

sub vcl_recv {
	set req.grace = 6h;
	if (!req.backend.healthy) {
		unset req.http.Cookie;
	}
}
sub vcl_fetch {
	set beresp.grace = 6h;
}
The last segment tells Varnish to strip all cookies for an authenticated user and serve an anonymous version of the page if all the NGINX backends are down. Most browsers support encoding but report it differently. NGINX sets the encoding as Vary: Cookie, Accept-Encoding.
If you don't handle this, Varnish will cache the same page once for each type of encoding, thus wasting server resources. In our case, it would gobble up memory. So add the following commands to vcl_recv to have Varnish cache the content only once:

if (req.http.Accept-Encoding) {
	if (req.http.Accept-Encoding ~ "gzip") {
		# If the browser supports it, we'll use gzip.
		set req.http.Accept-Encoding = "gzip";
	} else if (req.http.Accept-Encoding ~ "deflate") {
		# Next, try deflate if it is supported.
		set req.http.Accept-Encoding = "deflate";
	} else {
		# Unknown algorithm. Remove it and send unencoded.
		unset req.http.Accept-Encoding;
	}
}

Now, restart Varnish:

[root@bookingwire sridhar]# service varnish restart
Additional configuration for content management systems, especially Drupal
A CMS like Drupal throws up additional challenges when configuring the VCL file. We'll need to include additional directives to handle the various quirks. You can modify the directives below to suit the CMS that you are using. When using a CMS like Drupal, if there are files that you don't want cached for some reason, add the following to your default.vcl file, in the vcl_recv section:

if (req.url ~ "^/status\.php$" ||
    req.url ~ "^/update\.php$" ||
    req.url ~ "^/ooyala/ping$" ||
    req.url ~ "^/admin/build/features" ||
    req.url ~ "^/info/.*$" ||
    req.url ~ "^/flag/.*$" ||
    req.url ~ "^.*/ajax/.*$" ||
    req.url ~ "^.*/ahah/.*$") {
	return (pass);
}
Varnish sends the length of the content (see the Varnish log output above) so that browsers can display the progress bar. However, in some cases when Varnish is unable to tell the browser the specified content-length (like streaming audio) you will have to pass the request directly to the Web server. To do this, add the following command to your default.vcl:
if (req.url ~ "^/content/music/$") {
	return (pipe);
}
Drupal has certain files that shouldn't be accessible to the outside world, e.g., cron.php or install.php. However, you should be able to access these files from a set of IPs that your development team uses. At the top of default.vcl include the following, replacing the IP address block with your own:

acl internal {
	"192.168.1.38"/46;
}
Now, to prevent the outside world from accessing these pages, we'll throw an error. So inside the vcl_recv function include the following:

if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
	error 404 "Page not found.";
}

If you prefer to redirect to an error page, then use this instead:

if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
	set req.url = "/404";
}

Our approach is to cache all assets like images, JavaScript and CSS for both anonymous and authenticated users. So include this snippet inside vcl_recv to unset the cookie set by Drupal for these assets:

if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js|html|htm)(\?[a-z0-9]+)?$") {
	unset req.http.Cookie;
}

Drupal throws up a challenge especially when you have enabled several contributed modules. These modules set cookies, thus preventing Varnish from caching assets. Google Analytics, a very popular module, sets a cookie. To remove this, include the following in your default.vcl:

set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");

If there are other modules that set JavaScript cookies, then Varnish will cease to cache those pages; in which case, you should track down the cookie and update the regex above to strip it. Once you have done that, head to /admin/config/development/performance, enable the Page Cache setting and set a non-zero time for 'Expiration of cached pages'. Then update settings.php with the following snippet, replacing the IP address with that of your machine running Varnish:

$conf['reverse_proxy'] = TRUE;
$conf['reverse_proxy_addresses'] = array('37.139.8.42');
$conf['page_cache_invoke_hooks'] = FALSE;
$conf['cache'] = 1;
$conf['cache_lifetime'] = 0;
$conf['page_cache_maximum_age'] = 21600;

You can install the Drupal varnish module (http://www.drupal.org/project/varnish), which provides better integration with Varnish, and include the following lines in your settings.php (as documented by the module for Drupal 7):

$conf['cache_backends'] = array('sites/all/modules/varnish/varnish.cache.inc');
$conf['cache_class_cache_page'] = 'VarnishCache';

Checking if Varnish is running and serving requests
Instead of logging to a normal log file, Varnish logs to a shared memory segment. Run varnishlog from the command line, access your IP address/URL from the browser, and view the Varnish messages. It is not uncommon to see a '503 service unavailable' message; this means that Varnish is unable to connect to NGINX, in which case you will see an error line in the log (only the relevant portion of the log is reproduced, for clarity):

[root@bookingwire sridhar]# varnishlog
12 StatSess    c 122.164.232.107 34869 0 1 0 0 0 0 0 0
12 SessionOpen c 122.164.232.107 34870 :80
12 ReqStart    c 122.164.232.107 34870 1343640981
12 RxRequest   c GET
12 RxURL       c /
12 RxProtocol  c HTTP/1.1
12 RxHeader    c Host: 37.139.8.42
12 RxHeader    c User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:27.0) Gecko/20100101 Firefox/27.0
12 RxHeader    c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
12 RxHeader    c Accept-Language: en-US,en;q=0.5
12 RxHeader    c Accept-Encoding: gzip, deflate
12 RxHeader    c Referer: http://37.139.8.42/
12 RxHeader    c Cookie: __zlcmid=OAdeVVXMB32GuW
12 RxHeader    c Connection: keep-alive
12 FetchError  c no backend connection
Resolve the error and you should have Varnish running. But that isn't enough; we should check whether it is caching pages. Fortunately, the folks at the following URL have made that simple for us.

Check if Varnish is serving pages
Visit http://www.isvarnishworking.com/, provide your URL/ IP address and you should see your Gold Star! (See Figure 3.) If you don’t, but instead see other messages, it means that Varnish is running but not caching. Then you should look at your code and ensure that it sends the appropriate headers. If you are using a content management system, particularly Drupal, you can check the additional parameters in the VCL file and set them correctly. You have to enable caching in the performance page.
Running the tests
Running Pingdom tests showed an improved response time of 2.14 seconds. Note that the response time improved even though the page payload increased from 2.9 MB to 4.1 MB; if you are wondering why it increased, remember that we switched the site to a new theme. Apache Bench reported an even better figure of 744.722 ms (Figure 5).

Figure 5: Apache Bench result after configuring Varnish
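For reference, an Apache Bench run of the kind quoted above looks like this (the request count and concurrency here are assumptions, not the article's exact parameters):

ab -n 1000 -c 50 http://37.139.8.42/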
Configuring client IP forwarding
Figure 3: Varnish status result
Figure 4: Pingdom test result after configuring Varnish (the site tested faster than 68 per cent of all tested websites)
Check the IP address for each request in the access logs of your Web servers. For NGINX, the access logs are available at /var/log/nginx; for Apache, they are at /var/log/httpd or /var/log/apache2, depending on whether you are running CentOS or Ubuntu. It is not surprising to see the same IP address (that of the Varnish machine) for every request; such a configuration throws all Web analytics out of gear. However, there is a way out. If you run NGINX, try the following procedure. First, determine the NGINX configuration that you currently run by executing:

[root@bookingwire sridhar]# nginx -V
Look for --with-http_realip_module. If it is available, add the following to the http section of your NGINX configuration file. Remember to replace the IP address with that of your Varnish machine; if Varnish and NGINX run on the same machine, do not make any changes.

set_real_ip_from 127.0.0.1;
real_ip_header X-Forwarded-For;
Restart NGINX and check the logs once again; you will now see the client IP addresses. If you are using Drupal, also include the following line in settings.php:

$conf['reverse_proxy_header'] = 'HTTP_X_FORWARDED_FOR';
Other Varnish tools
Varnish includes several tools to help you as an administrator.
varnishstat -1 -f n_lru_nuked: Shows the number of objects nuked (evicted) from the cache.
varnishtop: Reads the logs and displays the most frequently accessed URLs; with a number of optional flags, it can display a lot more information.
varnishhist: Reads the shared memory logs and displays a histogram showing the distribution of the last N requests on the basis of their processing.
varnishadm: A command line administration utility for Varnish.
varnishstat: Displays the statistics.
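For a quick cache-effectiveness check, you can pull just the hit and miss counters (a convenience one-liner, not from the original article):

varnishstat -1 | grep -E "cache_hit|cache_miss"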
Dealing with SSL: SSL-offloader, SSL-accelerator and SSL-terminator
SSL termination is probably the most misunderstood term in the whole mix. It is employed in situations where Web traffic is heavy: administrators usually place a proxy that handles SSL requests before they hit Varnish. The SSL requests are decrypted there and the unencrypted requests are passed on, reducing the load on the Web servers by moving decryption and other cryptographic processing upstream. Since Varnish by itself does not process or understand SSL, administrators employ additional mechanisms to terminate SSL requests before they reach Varnish. Pound (http://www.apsis.ch/pound) and Stud (https://github.com/bumptech/stud) are reverse proxies that handle SSL termination, and Stunnel (https://www.stunnel.org/) is a wrapper program that can be deployed in front of Varnish. Alternatively, you could use another NGINX instance in front of Varnish to terminate SSL. In our case, since only the sign-in pages required SSL, we let SSL requests bypass Varnish and go straight to the backend Web server.
Additional repositories

There are other repositories from which you can get the latest release of Varnish:

wget repo.varnish-cache.org/redhat/varnish-3.0/el6/noarch/varnish-release/varnish-release-3.0-1.el6.noarch.rpm
rpm --nosignature -i varnish-release-3.0-1.el6.noarch.rpm

If you have the EPEL or Varnish cache repositories enabled, install Varnish by specifying the desired repository:

yum install varnish --enablerepo=epel
yum install varnish --enablerepo=varnish-3.0
Our experience has been that Varnish reduces the number of requests sent to the NGINX server by caching assets, thus improving page response times. It also acts as a failover mechanism if the Web server fails.

We had over 55 JavaScript files in Drupal (two as part of the theme and the others as part of the modules), and we aggregated JavaScript by setting the flag on the Performance page. We found a 50 per cent drop in the number of requests; however, some of the JavaScript files were not loaded on a few pages and we had to disable the aggregation. This is something we are still investigating. Our recommendation is not to choose JavaScript aggregation in your Drupal CMS. Instead, use the Varnish module (https://drupal.org/project/varnish). The module allows you to set long object lifetimes (Drupal doesn't set them beyond 24 hours), and uses Drupal's existing cache expiration logic to dynamically purge Varnish when things change.

You can scale this architecture to handle higher loads either vertically or horizontally. For vertical scaling, resize your VPS to include additional memory and make it available to Varnish using the -s directive. To scale horizontally, i.e., to distribute requests between several machines, add more Web servers and update the round robin directives in the VCL file. You can take it a bit further by placing HAProxy upstream, and have it route requests to Varnish, which then serves the content or passes it downstream to NGINX.

To remove a Web server from the round robin pool, you can improve upon the health-check example mentioned earlier by writing a small PHP snippet that automatically fails the check, via exit(), if some sanity tests do not pass; a sketch follows.
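This is one way such a health check could look (a sketch, not from the original article; the database credentials and the disk threshold are placeholders, and the Varnish probe's .url would point at this file):

<?php
// status.php: returns 200 OK while checks pass; any failure returns 503,
// so the Varnish probe marks this backend sick and drops it from rotation.
header('Content-Type: text/plain');
$ok = true;

// Check 1: the database answers (placeholder DSN and credentials).
try {
    new PDO('mysql:host=127.0.0.1;dbname=drupal', 'user', 'pass');
} catch (PDOException $e) {
    $ok = false;
}

// Check 2: some free disk space remains (the threshold is an assumption).
if (disk_free_space('/') < 100 * 1024 * 1024) {
    $ok = false;
}

if (!$ok) {
    header('HTTP/1.1 503 Service Unavailable');
    exit('FAIL');
}
echo 'OK';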
By: Sridhar Pandurangiah
The author is the co-founder and director of Sastra Technologies, a start-up engaged in providing EDI solutions on the cloud. He can be contacted at [email protected] / [email protected]. He maintains a technical blog at sridharpandu.wordpress.com.
Use Wireshark to Detect ARP Spoofing

The first two articles in the series on Wireshark, which appeared in the July and August 2014 issues of OSFY, covered a few simple protocols and various methods to capture traffic in a 'switched' environment. This article describes an attack called ARP spoofing and explains how you could use Wireshark to capture it.
Imagine an old Hindi movie where the villain and his subordinate are conversing over the telephone, and the hero intercepts this call to listen in on their conversation: a perfect 'man in the middle' (MITM) scenario. Now extend this to the network, where an attacker intercepts communication between two computers. Here are two possibilities with respect to what an attacker can do to intercepted traffic:
1. Passive attacks (also called eavesdropping, or only listening to the traffic): These can reveal sensitive information such as clear text (unencrypted) login IDs and passwords.
2. Active attacks: These modify the traffic and can be used for various types of attacks such as replay, spoofing, etc.
An MITM attack can be launched against cryptographic systems, networks, etc. In this article, we will limit our discussion to MITM attacks that use ARP spoofing.
ARP spoofing
Joseph Goebbels, Nazi Germany’s minister for propaganda, famously said, “If you tell a lie big enough and keep repeating it, people will eventually come to believe it. The lie can be maintained only for such time as the state can shield the people from the political, economic and/or military
consequences of the lie. It thus becomes vitally important for the state to use all of its powers to repress dissent, for the truth is the mortal enemy of the lie, and thus by extension, the truth is the greatest enemy of the state." So let us interpret this quote by a leader of the infamous Nazi regime from the perspective of the ARP protocol: if you repeatedly tell a device who a particular MAC address belongs to, the device will eventually believe you, even if this is not true. Further, the device will remember this MAC address only as long as you keep telling the device about it. Thus, not securing an ARP cache is dangerous to network security.

Note: From the network security professional's view, it becomes absolutely necessary to monitor ARP traffic continuously and limit it to below a threshold. Many managed switches and routers can be configured to monitor and control ARP traffic below a threshold.

An MITM attack is easy to understand in this context. Attackers trying to listen to traffic between any two devices, say a victim's computer and a router, will launch an ARP spoofing attack by sending unsolicited ARP reply packets (that is, ARP replies sent out without an ARP request having been received) with the following source addresses:
Towards the victim's computer: the router's IP address and the attacker's PC's MAC address.
Towards the router: the victim's IP address and the attacker's PC's MAC address.
After receiving such packets continuously, due to ARP protocol characteristics, the ARP caches of the router and the victim's PC will be poisoned as follows:
Router: the MAC address of the attacker's PC registered against the IP address of the victim.
Victim's PC: the MAC address of the attacker's PC registered against the IP address of the router.

The Ettercap tool

ARP spoofing is the most common type of MITM attack, and can be launched using the Ettercap tool available under Linux (http://ettercap.github.io/ettercap/downloads.html). A few sites claim to have Windows executables; I have never tested these, though. You may install the tool on any Linux distro, or use distros such as Kali Linux, which bundle it. The tool has command line options, but its GUI is easier, and can be started by using:

ettercap -G

Figure 1: Ettercap menus
Figure 2: Successful ARP poisoning
Figure 3: Wireshark capture on the attacker's PC: ARP packets
Figure 4: Wireshark capture on the attacker's PC: packets sniffed between the victim's PC and the router
Launch the MITM ARP spoofing attack by using the Ettercap menus (Figure 1) in the following sequence (words in italics indicate Ettercap menus):
1. Sniff → Unified sniffing: select the interface to be sniffed (for example, eth0 for a wired network).
2. Hosts → Scan for hosts: scans for all active IP addresses in the eth0 network.
3. Hosts → Hosts list: displays the list of scanned hosts. Add the required hosts to Target1 and Target2; the ARP spoofing attack will read traffic between all hosts selected under Target1 and Target2.
4. Targets → Current targets: verify that the correct targets are selected.
5. MITM → ARP poisoning → 'Sniff remote connections': starts the attack.
The success of the attack can be confirmed as follows:
In the router, check the ARP cache (on a Cisco router, the command is show ip arp).
On the victim's PC, use the arp -a command. Figure 2 gives the output of this command before and after a successful ARP spoofing attack.
On the attacker's PC, capture traffic using Wireshark to check for unsolicited ARP replies. Once the attack is successful, the traffic between the two targets will also be captured. Be careful: if traffic from the victim's PC contains clear text authentication packets, the credentials could be revealed. Note that Wireshark gives information such as 'Duplicate
use of IP is detected' under the 'Info' column once the attack is successful. Here is how a packet actually travels, and is captured, after a successful ARP poisoning attack:
When a packet from the victim's PC starts out for the router, at Layer 2 the poisoned MAC address of the attacker (instead of the router's real MAC) is inserted as the target MAC, so the packet reaches the attacker's PC. The attacker sees this packet and forwards it to the router with the correct MAC address.
The reply from the router is likewise sent towards the spoofed destination MAC address of the attacker's system (rather than the victim's PC); it is captured and forwarded by the attacker to the victim's PC.
In between, the sniffer software (Wireshark) running on the attacker's PC reads this traffic.
Here are various ways to prevent ARP spoofing attacks:
Monitor 'arpwatch' logs on Linux.
Use static ARP entries on Windows and Ubuntu, as follows:
• Windows: arp -s DeviceIP DeviceMAC
• Ubuntu: arp -i eth0 -s DeviceIP DeviceMAC
Control ARP packets on managed switches.
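As an illustration of the first method (a sketch; the package source and interface name are assumptions for a Debian-type system):

sudo apt-get install arpwatch
sudo arpwatch -i eth0
# changed and flip-flop IP/MAC pairings are reported via syslog
sudo tail -f /var/log/syslog | grep arpwatch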
Can MITM ARP spoofing be put to fruitful use? Definitely! Consider capturing packets from a system suspected of malware (virus) infection in a switched environment. There are two ways to do this—use a wiretap or MITM ARP spoofing. Sometimes, you may not have a wiretap handy or may not want the system to go offline even for the time required to connect the wiretap. Here, MITM ARP spoofing will definitely serve the purpose.
Note: This attack specifically targets OSI Layer 2, the data link layer; thus, it can be executed only from within your network. Be assured, this attack cannot be used from outside the local network to sniff packets between your computer and your bank's Web server; the attacker must be within the local network.
Packets captured using the test scenarios described in this series of articles are capable of revealing sensitive information such as login names and passwords. Using ARP spoofing, in particular, will disturb the network temporarily. Make sure to use these techniques only in a test environment; if at all you wish to use them in a live environment, do not forget to obtain explicit written permission before doing so.

Before we conclude, let us understand an important Wireshark feature called capture filters. We went through the basics of display filters in the previous article. But in a busy network, capturing all traffic and using display filters to see only the desired traffic may require a lot of effort; Wireshark's capture filters provide a way out. Before selecting the interface, you can click on Capture Options and use capture filters to capture only the desired traffic. Click on the Capture filter button to see the various filters, such as ARP, No ARP, IP only, TCP only, UDP only, traffic from specific IP addresses, and so on (Figure 5). Select the desired filter and Wireshark will capture only the defined traffic. For example, MITM ARP spoofing can be captured using the ARP filter from Capture filters, instead of 'display filtering' the entire captured traffic, as shown below.

Figure 5: Wireshark's capture filter
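The same capture filter also works with tshark, Wireshark's command line companion (a quick sketch, not from the original article; the interface name is an assumption):

sudo tshark -i eth0 -f "arp"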
Keep a watch on this column for exciting Wireshark features!

By: Rajesh Deodhar
The author has been an IS auditor and network security consultant-trainer for the last two decades. He is a BE in Industrial Electronics, and holds CISA, CISSP, CCNA and DCL certifications. Please feel free to contact him at [email protected]
Make Your Own PBX with Asterisk
This article, the first of a multi-part series, familiarises readers with Asterisk, which is a software implementation of a private branch exchange (PBX).
Asterisk is a revolutionary open source platform, started by Mark Spencer, that has shaken up the telecom world. This series is meant to familiarise you with it, and educate you enough to be a part of it and enjoy its many benefits. If you are a technology freak, you will be able to make your own PBX for your office or home after going through this series. As a middle level manager, you will be able to guide a techie to do the job, while senior level managers, with a good appreciation of the technology and the minimal costs involved, would be in a position to direct somebody to set up an Asterisk PBX. If you are an entrepreneur, you can adopt one of the many business models built around Asterisk. As you will see, it is worthwhile to at least evaluate the option.
History
In 1999, Mark Spencer of Digium fame started a Linux technical support company with US$ 4000. Initially, he had to be very frugal; so buying one of those expensive PBXs was unthinkable. Instead, he started programming a PBX for
his requirements. Later, he published the software as open source and a lot of others joined the community to further develop the software. The rest is history.
The statistics
Today, Asterisk claims 2 million downloads every year, runs on over 1 million servers, and has 1.3 million new endpoints created annually. A 2012 statistic by Eastern Management claims that 18 per cent of all PBX lines in North America are open source based, the majority of them on Asterisk. Indian companies, too, have been adopting Asterisk for a few years now. The initial thrust came from international call centres; a large majority of the smaller (50-100 seat) call centres use Vicidial, another open source application based on Asterisk. IP PBX penetration in the Indian market is not very high, due to certain regulatory misinterpretations. This unclear environment is gradually gaining clarity, and very soon we will see astronomical growth of Asterisk in the Indian market.
The call centre boom also led to the development of the Asterisk ecosystem, comprising Asterisk-based product companies, software supporters, hardware resellers, etc, across India. This presents a huge opportunity for entrepreneurs.
Some terminology
Mark Spencer, founder of Asterisk

Before starting, I would like to introduce some basic terms for the benefit of readers who are new to this field. Let us start with the PBX, or private branch exchange, which is the heart of all corporate communication. All the telephones seen in an office environment are connected to the PBX, which in turn connects you to the outside world. The internal telephones are called subscribers and the external lines are called trunk lines.

The trunk lines connect the PBX to the outside world, or the PSTN (Public Switched Telephony Network). Analogue trunks (FXO: Foreign eXchange Office) are based on very old analogue technology, which is still in use in our homes and in some companies. Digital trunk technology, or ISDN (Integrated Services Digital Network), evolved in the '80s with mainly two types of connections: BRI (Basic Rate Interface) for SOHO (small office/home office) use, and PRI (Primary Rate Interface) for corporate use. In India, analogue trunks are used for SOHO trunking, but BRI is no longer used at all; PRI, however, is quite popular among companies. IP/SIP (Internet Protocol/Session Initiation Protocol) trunking has been used by international call centres for quite some time, and now many private providers like Tata Telecom have started offering SIP trunking for domestic calls as well. GSM trunking, through a GSM gateway using SIM cards, is also quite popular, due to the flexibility it offers in costs, prepaid options and network availability.

The users connected to the PBX are called subscribers. Analogue telephones (FXS: Foreign eXchange Subscriber) are still very commonly used and are the cheapest; as Asterisk is an IP PBX, we need a VoIP FXS gateway to convert the IP signals to analogue signals. Asterisk supports IP telephones, mainly using SIP. Nowadays, Wi-Fi clients are available even for smartphones, which enable the latter to work like extensions. These clients bring a revolutionary transformation to the telephony landscape, analogous to paperless offices and telephone-less desks: the same smartphone used to make calls over GSM networks becomes a dual-purpose phone, also working like a desk extension. Just for a minute, consider the limitless possibilities enabled by this newly transformed extension phone.

Extension roaming: Employees can roam about
anywhere in the office (participate in a conference, visit a colleague; doctors can visit their in-patients) and yet receive calls as if they were seated at their desks.
External extensions: Employees could be at home, at a friend's house, or even out making a purchase, and still receive the same calls, as if at their desks.
Increased call accountability: Calls can be recorded and monitored at the PBX for quality or security purposes.
Lower telephone costs: The volume of calls passing through the PBX makes it possible to negotiate better rates with the service provider.
The advantages that a roaming extension brings are many, and we will explore them in more detail in subsequent editions.

Let us look into the basics of Asterisk. "Asterisk is like a box of Lego blocks for people who want to create communications applications. It includes all the building blocks needed to create a PBX, an IVR system, a conference bridge and virtually any other communications app you can imagine," says an excerpt from asterisk.org.

Asterisk is actually a piece of software. In very simple and generic terms, these are the steps required to create an application based on it:
1. Procure standard hardware.
2. Install Linux.
3. Download the Asterisk software.
4. Install Asterisk.
5. Configure it.
6. Procure hardware interfaces for the trunk lines and configure them.
7. Procure hardware for subscribers and configure them.
8. You're then ready to make your calls.

Procure standard desktop or server hardware, based on a Pentium, Xeon, i3, etc. RAM is an important factor, and could be 2GB, 4GB or 8GB; these two factors decide the number of concurrent calls. A hard disk capacity of 500GB or 1TB is mainly for space to store voice files for VoiceMail or a voice logger; the hard disk's speed also influences the number of concurrent calls.

The next step is to choose a suitable OS: Fedora, Debian, CentOS and Ubuntu are all well suited for this purpose. After this, the Asterisk software may be downloaded from www.asterisk.org/downloads/, either the newest LTS (Long Term Support) release or the latest standard version. LTS versions are released once in four years; they are more stable, but have fewer features than the standard version, which is released once a year. Once the software is downloaded, the installation may be carried out as per the instructions provided; we'll go into the details of the installation in later sessions, but a typical build is sketched below.

The download page also offers the option of AsteriskNow, an ISO image bundling Linux, Asterisk and the FreePBX GUI. If you prefer a very quick and simple installation without much flexibility, you may choose this variant.
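A typical source build follows the usual pattern (a sketch under assumptions; the version shown is illustrative and the build prerequisites vary by distro):

wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-11-current.tar.gz
tar -xzf asterisk-11-current.tar.gz
cd asterisk-11.*      # enter the extracted source directory
./configure
make
sudo make install
sudo make samples     # install sample configuration files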
Insight After the installation, one needs to create the trunks, users and set up some more features to be able to start using the system. The administrators can make these configurations directly into the dial plan, or there are GUIs like FreePBX, which enable easy administration. Depending on the type of trunk chosen, we need to procure hardware. If we are connecting a normal analogue line, an FXO card with one port needs to be procured, in PCI or PCIe format, depending on the slots available on the server. After inserting the card, it has to be configured. Similarly, if you have to connect analogue phones, you need to procure FXS gateways. IP phones can be directly connected to the system over the LAN. Exploring the PBX further, you will be astonished by the power of Asterisk. It comes with a built in voice logger, which can be customised to record either all calls or those from selective people. In most proprietary PBXs, this would have been an additional component. Asterisk not only provides a voice mail box, but also has the option to convert the voice mail to an attachment that can be sent to you as an email. The Asterisk IVR is very powerful; it has multiple levels, digit collection, database and Web-service integration, and speech recognition.
There are also lots of applications based on Asterisk, like Vicidial, a call-centre suite for inbound and outbound dialling. For the latter, one can configure campaigns with lists of numbers, dial these numbers in predictive dialling mode and connect the answered calls to agents. Similarly, inbound dialling can be configured with multiple agents, with calls routed on multiple criteria like region, skills, etc. Asterisk also integrates easily with multiple enterprise applications (like CRM and ERP) over CTI (computer telephony interfaces) like TAPI (Telephony API), or by using simple URL integration.

O'Reilly has a book titled 'Asterisk: The Future of Telephony', which can be downloaded. I would like to take you through the power of Asterisk in subsequent issues, so that you and your network can benefit from this remarkable product, which is expected to change the telephony landscape of the future.

By: Devasia Kurian
The author is the founder and CEO of *astTECS.
Please share your feedback/ thoughts/ views via email at [email protected]
How to Make Your USB Boot with Multiple ISOs
This DIY article is for systems admins and software hobbyists, and teaches them how to create a bootable USB that is loaded with multiple ISOs.
Systems administrators and other Linux enthusiasts use multiple CDs or DVDs to boot and install operating systems on their PCs. But it is somewhat difficult and costly to maintain one CD or DVD for each OS (ISO image file), and to carry all these optical disks around; so let's look at the alternative: a multi-boot USB drive. The Internet offers many ways (in Windows and in Linux) to convert a USB drive into a bootable USB. Typically, though, such a bootable USB contains a single OS, so if you want to change the OS (ISO image), you have to format the USB drive. To avoid formatting the drive each time the ISO is changed, use Easy2Boot. In my case, the RMPrepUSB website saved me from unnecessarily formatting the USB drive by introducing the Easy2Boot option. Easy2Boot is open source: it consists of plain text batch files and open source grub4dos utilities, with no proprietary software.
Making the USB drive bootable

To make your USB drive bootable, just connect it to your Linux system. Open the disk utility or the gparted tool and format the drive as Fat32 (0x0c). You can choose the ext2/ext3 file systems too, but they will not load some OSs, so Fat32 is the best choice for most ISOs. Now download grub4dos-0.4.5c (not grub4dos-0.4.6a) from https://code.google.com/p/grub4dos-chenall/downloads/list and extract it on the desktop. Next, install grub4dos on the MBR of your USB stick with a zero-second time-out, by typing the following command at the terminal:
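Based on the grub4dos/RMPrepUSB instructions, this is done with the bootlace tool bundled in the grub4dos archive (a sketch; the extraction path is an assumption, and sdb must be replaced with your device):

cd ~/Desktop/grub4dos-0.4.5c
sudo ./bootlace.com --time-out=0 /dev/sdb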
Note: You can change the path to point at your grub4dos folder. sdb is your USB drive, which you can identify with the df command in a terminal, or by using the gparted or disk utility tools.

Copying the Easy2Boot files to the USB drive

Your pen drive is now ready to boot, but we still need the menu files, which are necessary to detect the .ISO files on your USB drive.
The menu (.mnu) files and other boot-related files can be downloaded from the Easy2Boot website. Extract the Easy2Boot archive to your USB drive, and you will see different folders for different operating systems and applications (Figure 1). Now just place each .ISO file in the corresponding folder: all Linux-related .ISO files go in the Linux folder, backup-related ISOs in their folder, utilities in the utilities folder, and so on. Your USB drive is now ready to be loaded with almost any Linux image file, backup utility and some Windows-related .ISOs, without formatting it.

Figure 1: Folders for different OSs

Note: Each .ISO file in a folder is loaded based only on the .mnu file for that .ISO. So, by creating your own .mnu files you can add your own flavour to the USB menu list. For further details and help regarding .mnu file creation, visit http://www.rmprepusb.com/tutorials.

After placing your required image files, either installation ISOs or live ISOs, you need to defragment the folders on the USB drive. To do this, download the defragfs-1.1.1.gz file from http://defragfs.sourceforge.net/download.html and extract it to the desktop. Now run the following commands at the terminal (sdb1 is the partition on my USB drive that holds the Easy2Boot files):

sudo umount /dev/sdb1
sudo mkdir ~/Desktop/usb && sudo mount /dev/sdb1 ~/Desktop/usb
sudo perl ~/Desktop/defragfs ~/Desktop/usb -f

That's it: your USB drive is ready, with a number of ISO files, to boot on any system. Just run the defragfs command every time you modify (add or remove) the ISO files on the USB drive, to make all the files on the drive contiguous again.

Using the QEMU emulator for testing

After completing the final stage, test how well your USB drive boots with lots of .ISOs loaded on it, using the QEMU tool. Alternatively, you can choose any virtualisation tool, such as VirtualBox or VMware. We used QEMU (it is easy, but somewhat slow) on our Linux machine, by typing the following command at the terminal:

sudo qemu -m 512M /dev/sdb

Your USB drive will boot and the Easy2Boot OS selection menu will appear (Figure 2). Choose the OS you want, from its corresponding folder (Figure 3). You can use your USB drive in real time, adding or removing .ISOs in the corresponding folders simply by copy-pasting. You can also use the same USB drive for copying documents and other files, by keeping all the files that belong to Easy2Boot contiguous.

Figure 2: Easy2Boot OS selection menu
Figure 3: Ubuntu boot menu

References
[1] http://www.rmprepusb.com/tutorials
[2] https://code.google.com/p/grub4dos-chenall/downloads/list
[3] http://www.easy2boot.com/download/
[4] http://defragfs.sourceforge.net/download.html
By: Gaali Mahesh and Nagaram Suresh Kumar The authors are assistant professors at VNITSW (Vignan’s Nirula Institute of Technology and Science for Women, Andhra Pradesh). They blog at surkur.blogspot.in, where they share some tech tricks and their practical experiences with open source. You can reach them at [email protected] and [email protected].
How to Cross Compile the Linux Kernel with Device Tree Support

This article is intended for those who would like to experiment with the many embedded boards in the market but do not have access to them for one reason or another. With the QEMU emulator, DIY enthusiasts can experiment to their heart's content.
You may have heard of the many embedded target boards available today, like the BeagleBoard, Raspberry Pi, BeagleBone, PandaBoard, Cubieboard and Wandboard. But once you decide to start development for them, the right hardware with all the peripherals may not be available. The solution is to start embedded Linux development for ARM by emulating the hardware with QEMU, which can be done easily without any hardware and with no risks involved.

QEMU is an open source emulator that can emulate the execution of a whole machine with a full-fledged OS running. QEMU supports various architectures, CPUs and target boards. To start with, let's emulate the Versatile Express board as a reference, since it is simple and well supported by recent kernel versions; this board comes with a Cortex-A9 (ARMv7) based CPU. In this article, I cover the process of cross compiling the Linux kernel for the ARM architecture with device tree support, from boot loader to file system, with SD card support. As this process is almost similar to working with most target boards, you can apply these techniques to other boards too.
Device tree
Flattened Device Tree (FDT) is a data structure that describes the hardware; the concept originates from Open Firmware. With the device tree approach, the kernel no longer contains the hardware description; it is located in a separate binary called the device tree blob (dtb) file. One compiled kernel can thus support various hardware configurations within a wider architecture family. For example, the same kernel built for the OMAP family can work with various targets like the BeagleBoard, BeagleBone and PandaBoard, just with different dtb files. The boot loader must be customised to support this, as two binaries, the kernel image and the dtb file, have to be loaded into memory; the boot loader passes the hardware description to the kernel in the form of the dtb file. Recent kernel versions come with a built-in device tree compiler, which can generate all the dtb files related to the selected architecture family from device tree source (dts) files. Using the device tree on ARM has become mandatory for all new SoCs, with support from recent kernel versions.
Building QEMU from sources
You may obtain pre-built QEMU binaries from your distro repositories, or build QEMU from sources as follows. Download a recent stable version of QEMU, say qemu-2.0.tar.bz2, then extract and build it:

tar -jxvf qemu-2.0.tar.bz2
cd qemu-2.0
./configure --target-list=arm-softmmu,arm-linux-user --prefix=/opt/qemu-arm
make
make install
Figure 1: Kernel configuration–main menu
You will find commands like qemu-arm, qemu-system-arm and qemu-img under /opt/qemu-arm/bin. Among these, qemu-system-arm is the one used to emulate a whole system with OS support.
Building mkimage
The mkimage command is used to create images for use with the u-boot boot loader; here, we'll use it to transform the kernel image for u-boot. Since this tool ships with u-boot, we need a quick build of the boot loader to generate mkimage. Download a recent stable version of u-boot (tested with u-boot-2014.04.tar.bz2) from ftp.denx.de/pub/u-boot:

tar -jxvf u-boot-2014.04.tar.bz2
cd u-boot-2014.04
make tools-only

Now, copy mkimage from the tools directory to a directory on the standard path (like /usr/local/bin) as the superuser, or set the path to the tools directory each time before the kernel build.

Preparing an image for the SD card

QEMU can emulate an image file as storage media in the form of an SD card, flash memory, hard disk or CD drive. Let's create an image file with qemu-img in raw format and create a FAT file system on it, as follows. This image file acts like a physical SD card would on an actual target board:

qemu-img create -f raw sdcard.img 128M
# optionally you may create a partition table in this image
# using tools like sfdisk or parted
mkfs.vfat sdcard.img
# mount this image under some directory, to copy in the required files
mkdir /mnt/sdcard
mount -o loop,rw,sync sdcard.img /mnt/sdcard

Setting up the toolchain

We need a toolchain, a collection of cross development tools, to build components for the target platform. Getting a toolchain to match your kernel is always tricky, so until you are comfortable with the process, please use tested versions only. I have tested with the pre-built toolchains from the Linaro organisation, which can be fetched from http://releases.linaro.org/14.0.4/components/toolchain/binaries/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux.tar.xz, or any later stable version. Next, extract it and set the path for the cross tools, as follows:

tar -xvf gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux.tar.xz -C /opt
export PATH=/opt/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux/bin:$PATH

You will notice various tools like gcc, ld, etc, under /opt/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux/bin, all with the prefix arm-linux-gnueabihf-.
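A quick sanity check that the cross toolchain is on your PATH (a simple check, not from the original article):

arm-linux-gnueabihf-gcc --version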
Building the Linux kernel
Download the most recent stable version of the kernel source from kernel.org (tested with linux-3.14.10.tar.xz):

tar -xvf linux-3.14.10.tar.xz
cd linux-3.14.10
make mrproper                      # clean all built files and configuration files
make ARCH=arm vexpress_defconfig   # default configuration for the given board
make ARCH=arm menuconfig           # customise the configuration
Then, to customise the kernel configuration (Figure 1), follow the steps listed below:
1) Set a personalised string, say '-osfy-fdt', as the local version of the kernel, under General setup.
2) Ensure that ARM EABI and old ABI compatibility are enabled under Kernel features.
3) Under Device Drivers → Block devices, enable RAM disk support (for initrd usage) as a static module, and increase the default size to 65536 (64MB).
You can use the arrow keys to navigate between the various options
and space bar to select among various states (blank, m or *) 4) Make sure devtmpfs is enabled under the Device Drivers and Generic Driver options. Now, let’s go ahead with building the kernel, as follows: #generate kernel image as zImage and necessary dtb files make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage dtbs #transform zImage to use with u-boot make ARCH=arm CROSS_COMPILE=arm-linuxgnueabihf- uImage \ LOADADDR=0x60008000 #copy necessary files to sdcard cp arch/arm/boot/zImage /mnt/sdcard cp arch/arm/boot/uImage /mnt/sdcard cp arch/arm/boot/dts/*.dtb /mnt/sdcard #Build dynamic modules and copy to suitable destination make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihfmodules_install \ INSTALL_ MODPATH=
You may skip the last two steps for the moment, as the given configuration steps avoid dynamic modules. All the necessary modules are configured as static.
Getting rootfs
We require a file system to work with the kernel we've built. Download a pre-built rootfs image to test with QEMU from http://downloads.yoctoproject.org/releases/yocto/yocto-1.5.2/machines/qemu/qemuarm/core-image-minimal-qemuarm.ext3 and copy it to the SD card (/mnt/sdcard), renaming it rootfs.img for easy usage. You may also obtain a rootfs image from some other repository, or build one from sources using Busybox.
In case the sdcard/image file holds a valid partition table, we need to refer to the individual partitions like /dev/mmcblk0p1, /dev/ mmcblk0p2, etc. Since the current image file is not partitioned, we can refer to it by the device file name /dev/mmcblk0.
Building u-boot
Switch back to the u-boot directory (u-boot-2014.04), build u-boot as follows and copy it to the SD card:

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- vexpress_ca9x4_config
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-
cp u-boot /mnt/sdcard

# you can go for a quick test of the generated u-boot as follows
qemu-system-arm -M vexpress-a9 -kernel /mnt/sdcard/u-boot -serial stdio
Let’s ignore errors such as ‘u-boot couldn't locate kernel image’ or any other suitable files.
Your first try
Let's boot the kernel image (zImage) we built directly, without u-boot, as follows:

export PATH=/opt/qemu-arm/bin:$PATH
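A typical invocation for this board is shown below (a sketch: the dtb name and the -append arguments are assumptions based on the Versatile Express target, not necessarily the exact command used in the original article):

qemu-system-arm -M vexpress-a9 -m 1024 -serial stdio \
    -kernel arch/arm/boot/zImage \
    -dtb arch/arm/boot/dts/vexpress-v2p-ca9.dtb \
    -initrd rootfs.img \
    -append "root=/dev/ram console=ttyAMA0"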
In the above command, we are treating rootfs as an 'initrd image', which is fine when rootfs is of a small size. You can connect larger file systems in the form of a hard disk or SD card; let's try out rootfs through the SD card next.

The final steps
Let's boot the system with u-boot from the SD card image, making sure the QEMU PATH is not disturbed. Unmount the SD card image and then boot using QEMU:

umount /mnt/sdcard
qemu-system-arm -M vexpress-a9 -sd sdcard.img -m 1024 -serial stdio -kernel u-boot
Figure 3: U-boot loading
You can stop autoboot by hitting any key within the time limit, and then enter the following commands at the u-boot prompt, to load rootfs.img, uImage and the dtb file from the SD card to suitable memory locations without overlapping. Also, set the kernel boot parameters using setenv, as shown below (here, 0x82000000 stands for the location of the loaded rootfs image and 8388608 is the size of the rootfs image).

Note: The following commands are internal to u-boot and must be entered at the u-boot prompt.
fatls mmc 0:0                           # list out partition contents
fatload mmc 0:0 0x82000000 rootfs.img   # note down the size of the image being loaded
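The listing breaks off after the rootfs load; a plausible continuation (the memory addresses follow the article's LOADADDR and rootfs location, but the remaining addresses and bootargs are assumptions):

fatload mmc 0:0 0x60008000 uImage
fatload mmc 0:0 0x81000000 vexpress-v2p-ca9.dtb
setenv bootargs 'root=/dev/ram console=ttyAMA0'
bootm 0x60008000 0x82000000:8388608 0x81000000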
Ensure a space before and after the '-' symbol in the above commands. Log in using 'root' as the username and a blank password to play around with the system. I hope this article proves useful for bootstrapping with embedded Linux, and for teaching the concepts when there is no hardware available.

Figure 4: Loading of kernel with FDT support

Acknowledgements
I thank Babu Krishnamurthy, a freelance trainer for his valuable inputs on embedded Linux and omap hardware during the course of my embedded journey. I am also grateful to C-DAC for the good support I’ve received.
References
[1] elinux.org/Qemu
[2] Device Tree for Dummies, by Thomas Petazzoni (free-electrons.com)
[3] A few inputs taken from en.wikipedia.org/wiki/Device_tree
[4] The mkimage man page from the u-boot documentation
By: Rajesh Sola
The author is a faculty member of C-DAC's Advanced Computing Training School, Pune, in the embedded systems domain. You can reach him at [email protected].
Contiki OS: Connecting Microcontrollers to the Internet of Things
As the Internet of Things becomes more of a reality, Contiki, an open source OS, allows DIY enthusiasts to experiment with connecting tiny, low-cost, low-power microcontrollers to the Internet.

Contiki is an open source operating system for connecting tiny, low-cost, low-power microcontrollers to the Internet. It is preferred because it supports various Internet standards, rapid development and a selection of hardware, has an active community to help, and offers commercial support bundled with an open source licence. Contiki is designed for tiny devices, so its memory footprint is far smaller than that of other systems. It supports full TCP with IPv6, and the device's power management is handled by the OS. All the modules of Contiki are loaded and unloaded at run time; it implements protothreads, uses a lightweight file system, and supports various hardware platforms with 'sleepy' routers (routers which sleep between message relays). One important feature of Contiki is its use of the Cooja simulator for emulation, in case any of the hardware devices are not available.

Installation of Contiki

Contiki can be downloaded as 'Instant Contiki', which is available as a single download that contains an entire Contiki development environment. It is an Ubuntu Linux virtual machine that runs in VMware Player, with Contiki and all the development tools, compilers and simulators used in Contiki development already installed. Most users prefer Instant Contiki over the source code binaries. The current version of Contiki (at the time of writing) is 2.7.
Step 1: Install VMware Player (which is free for academic and personal use).
Step 2: Download the Instant Contiki virtual image, approximately 2.5 GB (http://sourceforge.net/projects/contiki/files/Instant%20Contiki/), and unzip it.
Step 3: Open the virtual machine and open the Contiki OS; then wait till the login screen appears.
Step 4: Input the password 'user'; this shows the desktop of Ubuntu (Contiki).

Running the simulation

To run a simulation, Contiki comes with many prebuilt modules that can readily run on the Cooja simulator or on a real hardware platform. There are two methods of opening the Cooja simulator window.
Method 1: On the desktop, as shown in Figure 1, double click the Cooja icon. It will compile the binaries for the first time and open the simulation windows.
Method 2: Open the terminal and go to the Cooja directory:

pradeep@localhost$] cd contiki/tools/cooja
pradeep@localhost$] ant run

You can see the simulation window as shown in Figure 2.
Creating a new simulation
To create a simulation in Contiki, go to File menu → New Simulation and name it, as shown in Figure 3. Select any one radio medium, in this case Unit Disk Graph Medium (UDGM): Distance Loss, and click 'Create'. Figure 4 shows the simulation window, which has the following sub-windows:
Network window: shows all the motes in the simulated network.
Timeline window: shows all the events over time.
Mote output window: all serial port outputs are shown here.
Figure 1: Contiki OS desktop
Figure 3: New simulation
Figure 2: Cooja compilation
Notes window: user notes can be put here.
Simulation control window: users can start, stop and pause the simulation from here.
Adding the sensor motes
Figure 4: Simulation window
Once the simulation window is open, motes can be added to the simulation using the menu: Motes → Add Motes. Since we are adding motes for the first time, the type of mote has to be specified. Contiki supports more than ten types of motes; here are some of them:
MicaZ, Sky, Trxeb1120, Trxeb2520, cc430, ESB, eth11, Exp2420, Exp1101, Exp1120, WisMote and Z1.
Contiki will generate object code for these motes to run on the real hardware, and also to run on the simulator if the hardware platform is not available.
Step 1: To add a mote, go to Add Motes → Select any of the motes given above → MicaZ mote. You will get the screen shown in Figure 5.
Step 2: Cooja opens the Create Mote Type dialogue box, which gives the name of the mote type as well as the Contiki application that the mote type will run. For this example, click the button on the right hand side to choose
the Contiki application, and select /home/user/contiki/examples/hello-world/hello-world.c. Then click Compile.
Step 3: Once it compiles without errors, click Create (Figure 5).
Step 4: The screen then asks for the number of motes to be created and their positions (random, ellipse, linear or manual). In this example, 10 motes are created.
Click the Start button in the Simulation Control window and enable the motes' log output (printf() statements) in the View menu of the Network window. The Network window shows the output 'Hello World' at the sensors; Figure 6 illustrates this. This is a simple output in the Network window; if real MicaZ motes were connected, 'Hello World' would be displayed on the LCD panel of the sensor motes. The overall output is shown in Figure 7.
The above Hello World application can also be compiled and run using the terminal. To compile and test the program, go into the hello-world directory:

pradeep@localhost$] cd /home/user/contiki/examples/hello-world
pradeep@localhost$] make
This will compile the Hello World program in the native target, which causes the entire Contiki operating system and the Hello World application to be compiled into a single program that can be run by typing the following command (depicted in Figure 8):
Figure 5: Mote creation and compilation in Contiki
Figure 7: Simulation window of Contiki
Figure 8: Compilation using the terminal
Here is the C source code for the above Hello World application.
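The listing below is the stock example that ships with Contiki in examples/hello-world/hello-world.c (reproduced from the Contiki sources, since it is what the steps above compile):

/* hello-world.c: the canonical Contiki example process */
#include "contiki.h"
#include <stdio.h>

/* Declare the process and have Contiki start it automatically at boot. */
PROCESS(hello_world_process, "Hello world process");
AUTOSTART_PROCESSES(&hello_world_process);

/* The process body: a protothread that prints a message and exits. */
PROCESS_THREAD(hello_world_process, ev, data)
{
  PROCESS_BEGIN();

  printf("Hello, world\n");

  PROCESS_END();
}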
pradeep@localhost$] ./hello-world.native

This will print out the following text:

Contiki initiated, now starting process scheduling
Hello, world

Figure 6: Log output in motes
The program will then appear to hang, and must be stopped by pressing Control + C.
Developing new modules
Contiki comes with numerous pre-built modules like IPv6, IPV6 UDP, hello world, sensor nets, EEPROM, IRC, Ping, Ping-IPv6, etc. These modules can run with all the sensors irrespective of their make. Also, there are modules that run only on specific sensors. For example, the energy of a sky mote can be used only on Sky Motes and gives errors if run with other motes like Z1 or MicaZ. Developers can build new modules for various sensor motes that can be used with different sensor BSPs using conventional C programming, and then be deployed in the corresponding sensors.
The Internet of Things is an emerging technology that leads to concepts like smart cities, smart homes, etc. Implementing the IoT is a real challenge but the Contiki OS can be of great help here. It can be very useful for deploying applications like automatic lighting systems in buildings, smart refrigerators, wearable computing systems, domestic power management for homes and offices, etc. References [1] http://www.contiki-os.org/
By: T S Pradeep Kumar The author is a professor at VIT University, Chennai. He has two websites http://www.nsnam.com and http://www.pradeepkumar. org. He can be contacted at [email protected].
This article introduces the reader to Nix, a reliable, multi-user, multi-version, portable, reproducible and purely functional package manager. Software enthusiasts will find it a powerful package manager for Linux and UNIX systems.
Linux is versatile and full of choices; every other day you wake up to hear about a new distro, most of them based on a more famous distro and using its package manager. There are many package managers: Zypper and Yum for Red Hat based systems; Aptitude and apt-get for Debian based systems; and others like Pacman and Emerge. Yet no matter how many package managers you have, you may still run into dependency hell, or be unable to install multiple versions of the same package, especially for tinkering and testing. If you frequently mess up your system, you should try Nix, which is more than "just another package manager."

Nix is a purely functional package manager. According to its site, "Nix is a powerful package manager for Linux and other UNIX systems that makes package management reliable and reproducible. It provides atomic upgrades and roll-backs, side-by-side installation of multiple versions of a package, multi-user package management and easy set-up of build environments." Here are some reasons for which the site recommends you try Nix.
Reliable: Nix's purely functional approach ensures that installing or upgrading one package cannot break other packages.
Reproducible: Nix builds packages in isolation from each other. This ensures that they are reproducible and do not have undeclared dependencies; so if a package works on one machine, it will also work on another.
It's great for developers: Nix makes it simple to set up and share build environments for your projects, regardless of what programming languages and tools you're using.
Multi-user, multi-version: Nix supports multi-user package management. Multiple users can share a common Nix store securely, without needing root privileges to install software, and can install and use different versions of a package.
Source/binary model: Conceptually, Nix builds packages from source, but it can transparently use binaries from a binary cache, if available.
Portable: Nix runs on Linux, Mac OS X, FreeBSD and
other systems. Nixpkgs, the Nix packages collection, contains thousands of packages, many pre-compiled.
Installation
Installation is pretty straightforward for Linux and Macs; everything is handled magically for you by a script, but there are some pre-requisites like sudo, curl and bash, so make sure you have them installed before moving on. Type the following command at a terminal: bash <(curl https://nixos.org/nix/install)
It will ask for sudo access to create a directory named /nix. You may see something similar to what's shown in Figure 1. There are binary packages available for Nix, but since we are looking for a new package manager, using another package manager to install it seems bad form (though you can, if you want to). If you are running a distro with no binary packages, or running Darwin or OpenBSD, you have the option of installing it from source. To set the environment variables right, source the profile script:

. ~/.nix-profile/etc/profile.d/nix.sh
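A quick check that the install worked (not from the original article):

nix-env --version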
Usage
Now that we have Nix installed, let’s use it for further testing. To see a list of installable packages, run the following: nix-env -qa
This will list the installable packages. To search for a specific package, pipe the output of the previous command to Grep with the name of the target package as the argument. Let’s search for Ruby, with the following command: nix-env -qa | grep ruby
It informs us that there are three versions of Ruby available.
Let's install Ruby 2.0. There are two ways to install a package, since packages can be referred to by two identifiers: the first is the name of the package, which might not be unique, and the second is the attribute set value. As our search for the various Ruby versions showed that the package name for Ruby 2.0 is ruby-2.0.0-p353, let's try to install it, as follows:

nix-env -i ruby-2.0.0-p353
It gives the following error as the output:
Figure 1: Nix installation
error: unable to fork: Cannot allocate memory
nix-env: src/libutil/util.cc:766: int nix::Pid::wait(bool): Assertion `pid != -1' failed.
Aborted (core dumped)
[Figure 2: Nix search result]
[Figure 3: Package and attribute usage]

As per the Nix wiki, the name of a package might not be unique and may yield an error with some packages, so we could try things out with the attribute set value instead. For Ruby 2.0, the attribute set value is nixpkgs.ruby2, and it can be used with the following command:
nix-env -iA nixpkgs.ruby2
This worked. Notice the use of the -iA flag when using the attribute set value. I talked to Nix developer Domen Kožar about this and he said, "Multiple packages may share the same name and version; that's why using attribute sets is a better idea, since it guarantees uniqueness. This is some kind of a downside of Nix, but this is how it functions :)" To see the attribute name along with the package name, use the following command:

nix-env -qaP | grep package_name

In the case of Ruby, replacing package_name with ruby2 yields:

nixpkgs.ruby2    ruby-2.0.0-p353

To update a specific package and all its dependencies, use:

nix-env -uA nixpkgs.package_attribute_name

To update all the installed packages, use:

nix-env -u

To uninstall a package, use:

nix-env -e package_name

In my case, to uninstall Ruby 2.0, I replaced package_name with ruby-2.0.0-p353, which is the package name and not the attribute name. Well, that's just the tip of the iceberg. To learn more, refer to the Nix manual at http://nixos.org/nix/manual. There is also a distro named NixOS, which uses Nix for both system configuration and package management.

References
[1] https://www.domenkozar.com/2014/01/02/getting-started-with-nix-package-manager/
[2] http://nixos.org/nix/manual/
[3] http://nixer.ghost.io/why/ - To convince yourself to use Nix

By: Jatin Dhankhar
The author is a C++ lover and a Rubyist. His areas of interest include robotics, programming and Web development. He can be reached at [email protected].
Solve Engineering Problems with Laplace Transforms

Laplace transforms are integral transforms widely used in physics and engineering. In this 21st article in the series on mathematics in open source, the author demonstrates Laplace transforms through Maxima.
In higher mathematics, transforms play an important role. A transform is a mathematical operation that converts an expression into another, typically from one domain to another. Laplace and Fourier transforms are two very common examples, transforming from the time domain to the frequency domain. In general, such transforms have corresponding inverse transforms, and this combination of direct and inverse transforms is very powerful in solving many real-life engineering problems. The focus of this article is the Laplace transform and its inverse, along with some problem-solving insights.
The Laplace transform

Mathematically, the Laplace transform F(s) of a function f(t) is defined as follows:

F(s) = ∫₀^∞ f(t) e^(-s*t) dt

…where 't' represents time and 's' represents complex angular frequency. To demonstrate it, let's take a simple example of f(t) = 1. Substituting and integrating, we get F(s) = 1/s. Maxima has the function laplace() to do the same. In fact, with that, we can choose to let our variables 't' and 's' be anything else as well, but as per the usual mathematical notation, preserving them as 't' and 's' is most appropriate. Let's start with some basic Laplace transforms. (Note that string() has been used just to flatten the expression.)

$ maxima -q
(%i1) string(laplace(1, t, s));
(%o1) 1/s
(%i2) string(laplace(t, t, s));
(%o2) 1/s^2
(%i3) string(laplace(t^2, t, s));
(%o3) 2/s^3
(%i4) string(laplace(t+1, t, s));
(%o4) 1/s+1/s^2
(%i5) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero? p; /* Our input */
(%o5) gamma(n+1)*s^(-n-1)
(%i6) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero? n; /* Our input */
(%o6) gamma_incomplete(n+1,0)*s^(-n-1)
(%i7) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero? z; /* Our input, making it non-solvable */
(%o7) 'laplace(t^n,t,s)

In the above examples, the expression is preserved as is, in the case of non-solvability.
laplace() is designed to understand various symbolic functions, such as sin(), cos(), sinh(), cosh(), log(), exp(), delta(), erf(). delta() is the Dirac delta function, and erf() is the error function—others being the usual mathematical functions.
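For a flavour of these, here are a few one-liners. The outputs below are written out from the standard transform tables; an actual Maxima session should agree, though the formatting may differ:

(%i1) string(laplace(sin(t), t, s));
(%o1) 1/(s^2+1)
(%i2) string(laplace(%e^(-a*t), t, s));
(%o2) 1/(s+a)
(%i3) string(laplace(delta(t), t, s));
(%o3) 1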
laplace() also understands derivative()/diff(), integrate(), sum() and ilt(), the inverse Laplace transform. Here are some interesting transforms showing the same:

(%i7) string(laplace(sum(t^i, i, 0, 5), t, s));
(%o7) 120/s^6+24/s^5+6/s^4+2/s^3+1/s^2+1/s
(%i8) string(ilt(laplace(sum(t^i, i, 0, 5), t, s), s, t));
(%o8) t^5+t^4+t^3+t^2+t+1

Note the usage of ilt(), the inverse Laplace transform, in %i8 of the above example. Calling laplace() and ilt() one after the other cancels their effect; that is what is meant by inverse.

A Laplace transform is typically a fractional expression consisting of a numerator and a denominator. Solving the denominator, by equating it to zero, gives the various complex frequencies associated with the original function. These are called the poles of the function. For example, the Laplace transform of sin(w * t) is w/(s^2 + w^2), where the denominator is s^2 + w^2. Equating that to zero and solving it gives the complex frequencies s = +iw, -iw, indicating that the frequency of the original expression sin(w * t) is 'w', which indeed it is. Here are a few demonstrations of the same:

$ maxima -q
(%i1) string(laplace(sin(w*t), t, s));
(%o1) w/(w^2+s^2)
(%i2) string(denom(laplace(sin(w*t), t, s))); /* The Denominator */
(%o2) w^2+s^2
(%i3) string(solve(denom(laplace(sin(w*t), t, s)), s)); /* The Poles */
(%o3) [s = -%i*w,s = %i*w]

Inverse Laplace transforms

Let's look into some common inverse Laplace transforms, starting with the simplest:

$ maxima -q
(%i1) string(ilt(1/s, s, t));
(%o1) 1

Observe that if we take the Laplace transform of such an %o output, it gives back the expression that was input to ilt() in the corresponding %i.

Solving differential and integral equations

Now, with these insights, we can easily solve many interesting and otherwise complex problems. One of them is solving differential equations. Let's explore a simple example of solving f'(t) + f(t) = e^t, where f(0) = 0. First, take the Laplace transform of the equation. Then substitute the value for f(0), and simplify to obtain the Laplace transform of f(t), i.e., F(s). Finally, compute the inverse Laplace transform of F(s) to get the solution for f(t). That gives us f(t) = (e^t - e^-t) / 2, i.e., sinh(t), which definitely satisfies the given differential equation: f'(t) + f(t) = cosh(t) + sinh(t) = e^t.

Similarly, we can solve equations with integrals; and not just integrals, but also equations with both differentials and integrals. Such equations come up very often when solving problems linked to electrical circuits with resistors, capacitors and inductors. Let's look at a simple example that demonstrates this. Assume we have a 1 ohm resistor, a 1 farad capacitor and a 1 henry inductor in series, being powered by a sinusoidal voltage source of frequency 'w'. What would be the current in the circuit, assuming it to be zero at t = 0? It yields the following equation: R * i(t) + 1/C * ∫ i(t) dt + L * di(t)/dt = sin(w*t), where R = 1, C = 1, L = 1. So, the equation can be simplified to i(t) + ∫ i(t) dt + di(t)/dt = sin(w*t). Now, following the procedure described above, taking the Laplace transform, substituting i(0) as 0 and simplifying, we get laplace(i(t), t, s) = w/((w^2+s^2)*(s+1/s+1)). Solving that by the inverse Laplace transform very easily gives the complex expression for i(t), as follows:

(%i2) string(ilt(w/((w^2+s^2)*(s+1/s+1)), s, t));
Is w zero or nonzero? n; /* Our input: Non-zero frequency */
(%o2) w^2*sin(t*w)/(w^4-w^2+1)-(w^3-w)*cos(t*w)/(w^4-w^2+1)+%e^(-t/2)*(sin(sqrt(3)*t/2)*(-(w^3-w)/(w^4-w^2+1)-2*w/(w^4-w^2+1))/sqrt(3)+cos(sqrt(3)*t/2)*(w^3-w)/(w^4-w^2+1))
(%i3) quit();
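For completeness, here is a sketch of how the three steps for f'(t) + f(t) = e^t, with f(0) = 0, might be carried out in Maxima. The noun form 'laplace(f(t), t, s) is treated as the unknown F(s) to solve for; the helper names eq and Fs are arbitrary, and the exact output formatting may vary across versions:

(%i1) eq: subst(f(0) = 0, laplace(diff(f(t), t) + f(t), t, s)) = laplace(%e^t, t, s)$
(%i2) Fs: rhs(first(solve(eq, 'laplace(f(t), t, s))))$  /* F(s) = 1/((s-1)*(s+1)) */
(%i3) string(ilt(Fs, s, t));
(%o3) %e^t/2-%e^-t/2  /* i.e., sinh(t) */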
By: Anil Kumar Pugalia
The author is a gold medallist from NIT Warangal and IISc Bangalore, and is also a hobbyist in open source hardware and software, with a passion for mathematics. Learn more about him and his experiments at http://sysplay.in. He can be reached at [email protected].
Your Shell with Zsh and Oh-My-Zsh

Discover the Z shell, a powerful scripting language which is designed for interactive use.
Z shell (zsh) is a powerful interactive login shell and command interpreter for shell scripting. A big improvement over older shells, it has a lot of new features, and the support of the Oh-My-Zsh framework makes using the terminal fun. Released in 1990, the zsh shell is fairly new compared to its older counterpart, the bash shell. Although more than two decades have passed since its release, it is still very popular among programmers and developers who use the command-line interface on a daily basis.
Why zsh is better than the rest
Most of what is mentioned below can probably be implemented or configured in the bash shell as well; however, it is much more powerful in the zsh shell.
Advanced tab completion
Tab completion in zsh can complete command line options as well as commands (Figures 1 and 2). Pressing the tab key twice enables the auto-complete mode, and you can cycle through the options using the tab key. You can also move through the files in a directory with the tab key; zsh has tab completion for directory and file paths on the command line, too. Another great feature is that you can switch paths by using 1 to switch to the previous path, 2 to switch to the 'previous, previous' path, and so on (Figure 3).

[Figure 1: Tab completion for command options]
[Figure 2: Tab completion for files]
[Figure 3: View previous paths]
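A minimal sketch of that numbered switching, assuming the AUTO_PUSHD option is set so that zsh pushes every directory you visit onto its stack:

setopt AUTO_PUSHD    # remember every directory you cd into
cd /etc; cd /var/log; cd ~
dirs -v              # lists the stack: 0 ~, 1 /var/log, 2 /etc
cd -2                # jumps back to entry 2, i.e., /etc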
Real time highlighting and themeable prompts
To include real time highlighting, clone the zsh-syntax-highlighting repository from GitHub (https://github.com/zsh-users/zsh-syntax-highlighting). This makes the command line look stunning. In some terminals, existing commands are highlighted in green and incorrectly typed ones in red, while quoted text is highlighted in yellow. All this can be configured further according to your needs. Prompts in zsh can be customised to be right-aligned, left-aligned or multi-lined.
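For example, one way to wire up the highlighting plugin after cloning it (the clone path here is only an illustration; the script name comes from the project's README):

git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ~/zsh-syntax-highlighting
echo 'source ~/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh' >> ~/.zshrc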
Globbing
Wikipedia defines globbing as follows: "In computer programming, in particular in a UNIX-like environment, the term globbing is sometimes used to refer to pattern matching based on wildcard characters." Shells before zsh also offered globbing; however, zsh offers extended globbing, whose extra features can be enabled by setting the EXTENDEDGLOB option.

Here are some examples of the extended globbing offered by zsh. The ^ character is used to negate any pattern following it.

setopt EXTENDEDGLOB   # Enables extended globbing in zsh.
ls *(.)      # Displays all regular files.
ls -d ^*.c   # Displays all directories and files that are not .c files.
ls -d ^*.*   # Displays directories and files that have no extension.
ls -d ^file  # Displays everything in the directory except the file called file.
ls -d *.^c   # Displays files with extensions, except .c files.

An expression of the form <x-y> matches a range of integers. Also, files can be grouped in the search pattern:

% ls (foo|bar).*
bar.o  foo.c  foo.o
% ls *.(c|o|pro)
bar.o  file.pro  foo.c  foo.o  main.o  q.c

To exclude a certain file from the search, the '~' character can be used:

% ls *.c
bar.c  foo.c  foob.c
% ls *.c~bar.c
foo.c  foob.c
% ls *.c~f*
bar.c

These and several more extended globbing features can help immensely while working through large directories.

Case insensitive matching

Zsh supports pattern matching that is independent of whether the letters of the alphabet are upper or lower case. Zsh first surfs through the directory to find an exact match and, if one does not exist, it carries out a case insensitive search for the file or directory.

Sharing of command history among running shells

Running shells share command history, thereby eradicating the difficulty of having to remember the commands you typed earlier in another shell.

Aliases

Aliases are used to abbreviate commands and command options that are used very often, or to stand for a combination of commands. Most other shells have aliases, but zsh supports global aliases. These are aliases that are substituted anywhere in the line. Global aliases can be used to abbreviate frequently-typed usernames, hostnames, etc. Here are some examples of aliases:

alias -g mr='rm'
alias -g TL='| tail -10'
alias -g NUL="> /dev/null 2>&1"

[Figure 5: Setting aliases in ~/.zshrc file]

Installing zsh

To install zsh in Ubuntu or Debian-based distros, type the following:

sudo apt-get update && sudo apt-get install zsh   # install zsh
chsh -s /bin/zsh   # to make zsh your default shell

To install it on SUSE-based distros, type:

sudo zypper install zsh

To check whether zsh is now the default shell for a given user (here, the user 'yoda'), you can use finger:

finger yoda | grep zsh

Configuring zsh

The ~/.zshrc file looks something like what is shown in Figure 4. Add your own aliases for the commands you use frequently.

[Figure 4: ~/.zshrc file]

Customising zsh with Oh-My-Zsh

Oh-My-Zsh is an open source, community-driven framework for managing your zsh configuration. Although zsh is powerful on its own, Oh-My-Zsh's main attraction is the themes, plugins and other features that come with it. To install Oh-My-Zsh, you need to clone the Oh-My-Zsh repository from GitHub (https://github.com/robbyrussell/oh-my-zsh). A wide range of themes is available, so there is something for everybody. To clone the repository from GitHub, use the following command. This installs Oh-My-Zsh in ~/.oh-my-zsh (a hidden directory in your home directory). The default path can be changed by setting the environment variable for zsh, using export ZSH=/your/path.

git clone https://github.com/robbyrussell/oh-my-zsh.git

To install Oh-My-Zsh via curl, type:

curl -L http://install.ohmyz.sh | sh
To install it via wget, type:

wget --no-check-certificate http://install.ohmyz.sh -O - | sh
To customise zsh, create a new zsh configuration, i.e., a ~/.zshrc file, by copying any of the existing templates provided:

cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc
Restart your zsh terminal to view the changes.
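Alternatively, instead of opening a new terminal, the running shell can be replaced in place (a standard shell idiom, not specific to Oh-My-Zsh):

exec zsh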
Plugins
To check out the numerous plugins offered with Oh-My-Zsh, look in the plugins directory in ~/.oh-my-zsh. To enable any of them, add them to the ~/.zshrc file and then source it:

cd ~/.oh-my-zsh
vim ~/.zshrc
source ~/.zshrc
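The plugins themselves are listed in a simple array in ~/.zshrc; for instance (these particular plugin names ship with Oh-My-Zsh):

plugins=(git ruby pip)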
If you want to install some plugin that is not present in the plugins directory, you can clone the plugin from Github or install it using wget or curl and then source the plugin.
Themes
To view the themes in zsh, go to the themes/ directory. To change your theme, set ZSH_THEME in ~/.zshrc to the theme you desire and then source Oh-My-Zsh. If you do not want any theme enabled, set ZSH_THEME="". If you can't decide on a theme, you can set ZSH_THEME="random". This will change the theme every time you open a shell, so you can decide upon the one you find most suitable for your needs. To make your own theme, copy any one of the existing themes from the themes/ directory to a new file with a ".zsh-theme" extension and make your changes to that. A customised theme is shown in Figure 6. Here, the user name, represented by %n, has been set to the colour green, and the computer name, represented by %m, has been set to the colour cyan. This is followed by the path, represented by %d. The prompt variable then looks like this:

PROMPT='$fg[green]%n $fg[red]at $fg[cyan]%m-->$fg[yellow]%d: '
The prompt can be changed to incorporate spacing, Git status, battery charge, etc, by declaring functions that do the same. For example, instead of printing the entire path including /home/darshana, we can define a function such that, if PWD contains $HOME, it replaces it with '~':

function get_pwd() {
  echo "${PWD/$HOME/~}"
}
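To call such a function from the prompt, zsh must be told to expand command substitutions at display time. A sketch, assuming the get_pwd() function above is defined in ~/.zshrc (note the single quotes, which defer the expansion):

setopt PROMPT_SUBST
PROMPT='$fg[green]%n $fg[red]at $fg[cyan]%m-->$fg[yellow]$(get_pwd): '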
To view the status of the current Git repository, the following code can be used: function git_prompt_info() {
Panasonic Looks to Engage with Developers in India! Panasonic entered the Indian smartphone market last year. In just one year, the company has assessed the potential of the market and has found that it could make India the headquarters for its smartphone division. But this cannot happen without that bit of extra effort from the company. While Panasonic is banking big on India’s favourite operating system, Android, it is also leaving no stone unturned to provide a unique user experience on its devices. Diksha P Gupta from Open Source For You spoke to Pankaj Rana, head, smartphones and tablets, Panasonic India Pvt Ltd, to get a clearer picture of the company’s growth plans. Excerpts…
Panasonic is all set to launch 15 smartphones and eight feature phones in India this year. While the company will keep its focus on the smartphone segment, it has no plans of losing its feature phone lovers, as Panasonic believes that there is still scope for the latter in the Indian market. That said, Panasonic will invest more energy in grabbing what it hopes will be a 5 per cent share in the Indian smartphone market. And that will happen with the help of Android. Speaking
about the strategy, Pankaj Rana, head, smartphones and tablets, Panasonic India Pvt Ltd, says, “We are banking on Android purely because it provides the choice of customisation. Based on this ability of Android, we have created a very different user experience for Panasonic smartphones.” What Rana is referring to here is the single fit-home UI launched by Panasonic. He explains, “While we have provided the standard Android UI in the feature phones, the highly-efficient fit-home UI is available on Panasonic smartphones. When working on the standard Android UI, users need to use both hands to perform any task. However, the fit-home UI allows single-hand operations, making it easy for the user to function.” Yet another feature of the UI is that it can be operated in the landscape mode. Rana claims that many phones do not allow the use of various functions like settings, et al, in the landscape mode. He says, “We have kept the comfort of the users as our top priority and, hence, designed the UI in such a way that it offers a tablet-like experience as well. The Panasonic Eluga is a 12.7cm (5-inch) phone. This kind of a UI will be a great advantage on big screen devices. For users of feature phones who are migrating to smartphones now, this kind of UI makes the transition easier.”
Coming soon: An exclusive Panasonic app store
Well, if you thought the unique user experience was the end of the show, hold on. There's more coming… The company plans to leave no stone unturned when it comes to making its Android experience complete for the Indian region. Rana reveals, "We are planning to come up with a Panasonic exclusive app store, which should come into existence in the next 3-4 months."
When it comes to the development for this app store, Panasonic will look at hiring in-house developers, as well as associating with third-party developers. Rana says, "We will look at all possible ways to make our app ecosystem an enriched one. Just for the record, this UI has been built within the company, with engineers from various facilities including India, Japan and Vietnam. For the exclusive app store that we are planning to build, we will have some third-party developers. But besides that, we plan to develop our in-house team as well. Right now, we have about 25 software engineers working with us in India, who are from Japan. We also have some Vietnamese resources working for us." The company plans to do the hiring for the in-house team within the next six months. The team may comprise about 100 people. Rana clarifies that the developers hired in India are going to be based in Gurgaon, Bengaluru and Hyderabad. He says, "We already have about 20 developers in Bengaluru, who are on third party rolls. We are in the process of switching them to the company's rolls over the next couple of months. Similarly, we have about 10 developers in Gurgaon. In addition, our R&D team in Vietnam has 70 members. We are also planning to shift the Vietnam operations to India, making the country our smartphone headquarters." To take the idea of the Panasonic-exclusive app store further, the company is planning some developer engagement activities this November and December.
The consumer is the king!
While Rana asserts that Panasonic can make one of the best offerings in the smartphone world, he recognises that consumers are looking for something different every time, when it comes to these fancy devices. He says, "Right now, companies are working on the UI level to offer that newness in the experience. But six months down the line, things will not remain the same. The situation is bound to change and, to survive in this business, developers need to predict the tastes of the consumers. But for now, it is about providing an easy experience, so that the feature phone users who are looking to migrate to smartphones find it convenient enough."
TIPS & TRICKS
Convert images to PDF
Often, your scanned copies will be in an image format that you would like to convert to the PDF format. In Linux, there is an easy-to-use tool called convert that can convert any image to PDF. The following example shows you how: $convert scan1.jpg scan1.pdf
To convert multiple images into one PDF file, use the following command: $convert scan*.jpg scanned_docs.pdf
The 'convert' tool comes with the ImageMagick package. If you do not find the convert command on your system, you will need to install imagemagick (on Debian or Ubuntu, for instance, with sudo apt-get install imagemagick).
—Madhusudana Y N, [email protected]
Your own notepad
Here is a simple and fast method to create a notepad-like application that works in your Web browser. All you need is a browser that supports HTML 5 and the code mentioned below. Open your HTML 5 supported Web browser and paste the following in the address bar:

data:text/html, <html contenteditable>

To give the page a title, you can use the following instead:

data:text/html, <title>Text Editor</title><html contenteditable>

Your Web browser-based notepad is ready.
—Chintan Umarani, [email protected]
How to find a swap partition or file in Linux
Swap space can be a dedicated swap partition, a swap file, or a combination of swap partitions and swap files. To find a swap partition or file in Linux, use the following command: swapon -s
Or… cat /proc/swaps
…and the output will be something like what’s shown below: Filename /dev/sda5
Type
Size
Used
partition 2110460
Priority 0 -1
Here, the swap is a partition and not a file.
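Overall swap usage (totals only, without showing whether the space is a partition or a file) can also be checked with the free command:

free -m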