ROUNDUP: PHOTO MANAGERS
65 pages of tutorials and features: Discover KDE Plasma 5, Control your partitions, Buy the best Linux laptop, Coding Academy: Perl 6 and Chess in Python
Get into Linux today!
HACK IT! Power your home with Raspberry Pi Six fun projects to try today Automate your lighting Monitor your heating Secure your home
Exclusive! OggCamp 2015 "OggCamp is all about diversity and it is my highlight of the year" Martin Wimpress on Ubuntu Pi p44
Easy encryption ZuluCrypt: Secure your drive data with this encryption system
Ultimate gaming Gnome games: Take the new Linux games manager for a spin
Raspberry Pi Zero Full in-depth review Eben Upton explains all
Welcome Get into Linux today!
What we do We support the open source community by providing a resource of information, and a forum for debate. We help all readers get more from Linux with our tutorials section – we’ve something for everyone! We license all the source code we print in our tutorials section under the GNU GPLv3. We give you the most accurate, unbiased and up-to-date information on all things Linux.
Who we are This issue we asked our experts: Linux and its developers can be pretty smart, but what could be smarter in the world of Linux?
Jonni Bidwell Douglas Adams once said that he was “rarely happier than when spending an entire day programming my computer to perform automatically a task that would otherwise take me a good ten seconds to do by hand.” It’s a sentiment I share entirely and it’s my general excuse whenever my copy is late for Linux Format.
Neil Bothwick Most of my life is driven by shell and Python scripts. My ultimate goal in home automation is to reach the level achieved by Wallace in The Wrong Trousers. Although I may have to use Windows to control it as one lesson from that film is that letting anything penguin-powered near such a setup can have disastrous consequences.
Matthew Hanson I love tinkering and building PCs at home, which inevitably involves a certain amount of troubleshooting to find out what’s gone wrong when things break. This is really frustrating, so I’d like to build a robot that would do that for me. Of course, if I needed to troubleshoot that robot, I’d need to build another… And another… what have I done?
Les Pounder While I enjoy walking my dog, all that cold wintry weather is awful, even if my beard keeps me cosy. A GPS-controlled dog walking robot would be great. I could plot the route and schedule a time from my phone, and receive Twitter updates during the walk. Oh and it could also get some bread on the way home.
Nick Peers I’d like to automate our home’s lighting system. First off, it’s good for the environment, but it’s also very good for my wallet. It would also help mitigate the effects of my family’s apparent allergy to switching off lights when they leave a room. When things get really tough, I could always initiate ‘disco mode’ for a family dance off.
The internet of Linux things
Smart homes, smart TVs, smart watches, smart phones, smart fridges: is there anything that isn't smart these days? Smart humans might be a useful start. The startling thing is that behind that huge list of smart things is Linux (and perhaps a bit of BSD). The open source nature of Linux, its lightweight footprint and robust security (though nothing is foolproof) make it perfect for use in tiny, deployable, internet-connected smart things. Linux has enabled an entire generation of new gadgets, but alongside the software you do need hardware. That's where the Raspberry Pi plays its part. We now all have access to the software smarts and handy hardware to build our own tiny internet-connected devices that can transform a home, school or office into an automated paradise.
This issue we're looking at how you can hack your home with Linux and the Pi. We're basing a lot of these projects on the Pi, but you could adapt them to any tiny PC or even Arduino boards. It's the ubiquity of the Pi that makes it our go-to [no gotos! – Ed] device for home hacks (see p34). We hope you find it inspiring, as what you can do is only limited by your imagination!
With no Pi User section this issue, we've bolstered our features and tutorials for those more interested in using Linux on their PCs. Importantly, we're finally covering KDE Plasma 5 (see p59), or strictly speaking KDE Plasma and KDE Frameworks. It's a stunning desktop, community project and interface ecosystem that's worth your attention. We also delve into getting a Linux laptop: it's no easy task and you'll have to get your hands dirty, but it's possible and worth it. With tutorials covering encryption, partitioning, network enhancement and drive encryption, plus coding chess in Python and the cool new Perl 6, there's surely something for everyone? Enjoy your hacking.
Neil Mohr Editor
[email protected]
Subscribe & save!
On digital and print, see p32 www.techradar.com/pro
Contents
“If you don’t want a generation of robots, fund the arts!” – Cath Crowley, Graffiti Moon
Reviews Motorola Moto 360 2.0 ..... 17
The technology world asks: "Why aren't you all wearing smart watches?" And an uncaring world stares blankly back. Perhaps the new Moto 360 can make it to our wrists?
There’s no doubt some smart watches do indeed look very smart.
Google Nexus 6P ..............18
HACK IT! Transform your home into a smart one with Pi-powered projects and Linux tools p34
Roundup: Photo managers p26
The latest all-powerful Android smartphone released under the Nexus range is from Huawei – will it make it into your pocket?
Google Nexus 5X ..............19
Finally the best Android phone of all time, the Nexus 5, gets an upgrade. Is this a worthy replacement for that stalwart device?
Raspberry Pi Zero ............ 20 No one expects the Raspberry Pi Foundation! Here it goes again with an all-new Pi, the smallest and cheapest one yet!
You thought PCs couldn’t get any smaller – meet the Pi Zero.
Fedora 23 Workstation ... 22
Keep your hat on – this could be the ultimate backup, image and repair tool.
OpenSUSE Leap 42.1 ...... 23 Should you take the Leap or avoid the gecko entirely? Our verdict on the live-free distro.
Ooznest Prusa i3 .............. 24 It’s the greatest self-build 3D printer to date, the ideal buy for makers and designers.
OggCamp 2015 The unconference at OggCamp brought out the most interesting talks. We report from the liveliest camp in the world p44
On your FREE DVD Fedora 23 64-bit, Ubuntu 15.10 64-bit, Tails 1.7 32- and 64-bit
Only the best distros every month PLUS: Hotpicks and Photo managers
p96
Subscribe & save! p32
In-depth...
Features Buy the best Linux laptop ...........50
Discover KDE Plasma 5 ................59
Raspberry Pi Zero ................ 20
Explore the world of the Linux laptops, how you can buy one, how you can install your own and how you can build one from open hardware.
So is it KDE Plasma Desktop, or is it KDE 5 and what about KDE Framework 5? Actually it’s all of them, let’s go discover the amazing new desktop…
We get the inside scoop on making the new Raspberry Pi Zero by chatting with Eben Upton and an in-depth review of the new Pi Zero.
Linux laptops do exist, just not in the shops.
The KDE Plasma 5 desktop looks amazing!
From zero to hero, the Pi is back!

Coding Academy
Perl 6 ...................................... 84
14 years on and finally Mihalis Tsoukalos can get his hands on Perl 6. Many say "Haha, use Python" but it's an essential upgrade to the long-standing programming language.
Chess in Python ................... 88
Jonni Bidwell dreams of becoming a chess master, straddling the globe like a gaming colossus, but instead he's sat here creating chess programs in Python.

Tutorials
Linux games Gnome Games ...................72
Matthew Hanson loves to do a bit of gaming, well, a lot of gaming. So he's over the moon to use the new Gnome games.
Image deployment Fog.......................................74
Mayank Sharma isn't lost in the fog, he's using it to deploy multiple images over the cloud to his many servant systems.
Regulars at a glance

News............................. 6
Help defend the GPL and get it back in the no.1 spot, 32-bit is getting so old Google drops support from Chrome, LibreOffice gets 1,000 developers and Gimp is 20 years old.

Mailserver................... 11
We love hearing from you dear reader, so please keep on writing in! Discussions blaze in this issue.

User groups................15
Les Pounder is recovering from OggCamp 2015 at a LUG near you.

Roundup ....................26
Get all your Christmas photos organised with these managers.

Subscriptions ...........32
Subscribe and never miss your monthly serving of lovely hot FLOSS pie and we all love pie, right? Our subscriptions team is waiting for your call.

Sysadmin...................54
Mr. Brown is hunting ELK, not to dine on, but log in on with. Add another TLA to your armoury and improve your security too!

HotPicks ....................64
Alexander Tolstoy isn't planning on breaching Turkey's airspace, he's too busy flying high with sweet FLOSS like: MATE, GNU LibreJS, Nuntius, N1, Double Commander, Eiskaltdcpp, Deadbeef, Lincity-ng, Powermanga, CUDABI, Arista.

Back issues ............... 70
Ubuntu 15.10 landed and we covered it in huge detail, grab your celebratory issue in LXF205.

Next month ...............98
It's time to rid the world of Windows 10, get Linux installed on all your PCs and all your friends' PCs too!

RAID Manage your drives......... 76
Neil Bothwick covers the essentials on creating and managing your RAIDs.

GParted Create and manage ......... 78
Nick Peers gets to grips with GParted and explains how to manage your partitions.

ZuluCrypt Drive encryption ............. 82
Mayank Sharma finds his way out of the fog just in time to encrypt all his drives.

I can see clearly now the Fog is here.
Newsdesk
[The story text on these news pages is illegible in the source; only the headings below could be recovered.]

LICENSING NEWS
"Over … of programs that were on GitHub had no licence at all"

SOFTWARE DEV NEWS

Newsbytes

SOFTWARE NEWS

Newsdesk Comment

MAKULU LINUX

ORACLE LINUX
Write to us at Linux Format, Future Publishing, Quay House, The Ambury, Bath BA1 1UA or
[email protected].
ICBMs? Thank you for an interesting magazine, which I have read for several years. I would like to see a future project/article on how to adjust colours on a printer in Linux. I have an HP laser printer and run Ubuntu on my PC. The colours on my monitor are good, but when I print a picture the colours become very dark. I understand there are ICM profiles, but I have not found one for my printer. Can I create one myself without investing a lot of money in equipment or software? Mats Werf, via email
Neil says Thanks for your kind words, they always make us blush. As for your problem and suggestion, that's a really good idea. Both sides of colour calibration (printers and monitors) are either often overlooked or ignored, but on Linux the device-agnostic system is the
International Color Consortium (ICC) profiles; the Windows system uses ICM profiles, but they're based on similar systems. So the problem is that displays use a mixture of Red, Green and Blue to generate colour, which is one specific colour space with its own gamut. While printers use Cyan, Magenta, Yellow and Black (or a variation based on those base colours) to create their colours, and again this has its own colour space (a subset of every possible colour), which is different to the one your monitor uses. In a perfect world your graphics driver contains the ICC for your monitor and the printer driver has the ICC for your printer. One colour space is converted to the other et voila, printouts look like they do on your screen, unless you've not calibrated your monitor in which case this is all a waste of time! So in a too long, didn't read fashion: great idea for a feature!
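If you fancy experimenting before such a feature appears, one rough way to try this from Python is Pillow's ImageCms module, which wraps LittleCMS. The sketch below assumes you already have an ICC file for your printer (the path is made up for illustration); the profile itself still has to come from the vendor or from calibration tools.

from PIL import Image, ImageCms

# Open the photo and make sure it's plain RGB before converting
img = Image.open("photo.jpg").convert("RGB")

# Source colour space: a standard sRGB profile generated by Pillow itself
srgb = ImageCms.createProfile("sRGB")

# Destination: the printer's ICC profile (hypothetical path, use one
# supplied by the vendor or built with calibration tools)
printer = ImageCms.getOpenProfile("/usr/share/color/icc/my-laser.icc")

# Convert into the printer's CMYK space with the perceptual intent,
# which compresses out-of-gamut colours instead of clipping them
cmyk = ImageCms.profileToProfile(
    img, srgb, printer,
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,
    outputMode="CMYK",
)
cmyk.save("photo_for_print.tif")

If you try something like this, turn off any colour correction in the printer driver itself, otherwise the conversion ends up being applied twice.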
Colour Space - the final frontier, these are the voyages of the starship CMYK.
Poor performance
In LXF200 on page 20 you state of Sabayon “Sabayon 15.06 Gnome is incredibly fast, especially when compared with its more famous peers.” Despite this it gets a 7/10 performance rating, while its ‘more famous peer’ on the previous page, Fedora, gets a 9/10. Could you please explain what objective
criteria were used to create these ratings numbers since it would seem that the numbers directly contradict the text. Evan Langlois, via email Neil says There’s a few things going on here. The first is that the ratings aren’t actually testing what you’re suggesting. Performance is a broad term that encompasses everything from boot times,
Letter of the month
Windows fan
I can’t believe I’m typing this (I have not used windows since Ubuntu 05) but I bought a cheap laptop which had Windows 7 installed, with the intention of installing Linux. As an experiment I downloaded the 10 upgrade and to my amazement have found it rather good (even taking into consideration the pain in finding and downloading new software and inevitable slowing down of the system). Battery life is exceptional and it is a good looking OS and very well integrated. As Microsoft has no doubt spent £millions on it (and Linux distros are often maintained by a person working from home – I think) it is not a fair comparison. I was so unnerved by this that I have taken out a Linux Format subscription and hope
that I find a Linux distro to compare ( I have hopes of Solus, for example). Kevin Garner, Shropshire Neil says: Shocking! you’re right, it certainly looks like Microsoft learnt its lesson with Windows 8 and booted Balmer out of the company, which is now far more focused and delivering products and services consumers and businesses want, which includes integrating functionality with Linux on the cloud side. Frankly though nothing has changed for Linux on the desktop, until box shifters pre-install Linux, there won’t be any major swing to using it by consumers. The one big exception here is with Chromebooks, but many Linux users wouldn’t consider this a traditional GNU/ Linux distro for good reasons. Personally, even in just the two years I’ve
been on Linux Format, I think we’ve seen huge shifts for many desktop Linux distros in stability, ease of use and performance. So when people now get to use a Linux distro they’re more likely to stick with it.
Microsoft has sorted itself out and is threatening to engulf Linux once more.
general responsiveness, to a more aspirational how it performs as "a distro". The out of context quote you pulled is actually comparing it to other Gentoo distros in terms of speed of install, not specifically Fedora. You also ask for objective criteria, where there are none, we're weighting this on subjective long-standing experience, and ultimately all reviews are opinion.
Shashank says Sabayon is fast, but the performance rating refers to more than just speed. It's also a measure of reliability. The installation repeatedly crashed on my test machine when I chose LVM partitioning. Even if this was a local issue I had to dock points, because other distros don't have this problem.

Sabayon is a fine distro and believe it or not 7 out of 10 is a good score.

I disagree
I disagree with your choice of the top distro, in my opinion Linux Mint is still the king of the distros. I started using Mint after I got frustrated with Windows 8 that came preinstalled with my laptop (ASUS SC400). I have been using Mint for over a year now and I have no regrets. The reason I would not use your top two suggested distros as my everyday distro is because of the package manager they use. I have had a couple of issues, such as my earphone socket stopped working recently, and I am quite sure it's not the socket. Plus I could not update to version 5 of LibreOffice, until I came across a helpful guide. Other than that, it's been a stable OS and a joy to use with little to no annoyances. My battery life has also improved and the familiar Gnome window manager is a welcomed familiarity which I hope the developers will keep forever! It's not broken, so why fix it?
Mocheche Mabuza, via email
Neil says Everyone disagrees, that's the beautiful thing about the GNU/Linux world! I like to think these features spark debate, keep people's awareness of the distro landscape fresh and perhaps encourage people to try something new. I'm biased as I use Mageia for my day-to-day desktop, but I do find your main objection to the package manager really odd, as they're not used that much on desktops. Where I will agree is that Mint remains an exemplary distro for desktop users. It could be Ubuntu 16.04 and Unity 8 finally brings Ubuntu up to date to compete with modern desktops, but it's the backend, server and cloud aspects that get all the attention for Ubuntu.

We're looking forward to the release of Mint 17.3 and Mint 18 in 2016!

Eternal vigilance
The article in LXF203 by Bradley Kuhn on the Future of Freedom bounced me out of my comfort zone of believing that the world might be accepting software freedom – in particular Microsoft. I have never trusted Microsoft and their grudging acceptance of the existence of Linux and other free software. Not only are they inveigling their way into the software freedom arena by joining various umbrella organisations, but their attempts to lock people into their proprietary software prison seem even more devious than before. As an analogy, Microsoft, post Ballmer, seemed to take on board a more environment-friendly attitude to the conservation of the software habitat. Admittedly, they began by trying to re-arrange the habitat to their idea of an ecosystem, by eliminating "weeds" through the good old method of "harvest and burn". But that failed to stop new "weeds" popping up, so they have begun the more insidious and potentially better method of planting brambleberry bushes offering easily accessible free fruit while the mother plant quietly began, through its rapid growth and painful thorns, to cover, strangle and obliterate the diversity of other species that makes a habitat not only resilient but capable of evolving to deal with natural threats. I'm surprised Microsoft hasn't attempted to buy the rights and all DVD copies of the "Little Shop of Horrors" because the similarity between Microsoft and "Audrey II" is frighteningly apparent. The article reminded me that we must not get too complacent about the burgeoning success of free software and software freedom; we are attempting a cultural revolution against powerful organisations whose idea of freedom is, switching analogies, being able to walk the exercise yard of a high-security prison.
Dr Colin R. Lloyd, via email
Neil says You can't actually blame Microsoft, it's just a product of the corporate world. You can blame legislators and governments for making its life easier, but the real problem Kuhn points to is a new generation of developers that don't understand or perhaps care about software freedom. Open Source has become a means to an end, not a tool to free software. The price of freedom is eternal vigilance, so new generations that grew up with computers shipping with Windows, rather than developing things from scratch, need to be educated on why software freedom is essential, if we all want to continue enjoying free access to computers and the internet.
To the core
LXF199 included articles on how to build a Linux PC. My main PC which I built some years ago has an Intel i5-750 CPU @ 2.67 GHz, with four cores and no hyperthreading, running Xubuntu 14.04, mainly for commercial use with Thunderbird, Firefox, OpenOffice, GIMP, Inkscape and numerous utilities. I notice that the system monitor seldom shows CPU load over 25% - indicating that most of the time the computer is only using a single core running as fast as it can. This indicates to me that a cheap i3, or even dual core Pentium will run Linux as fast as an expensive i7 with hyperthreading giving 8 - 16 threads. Could you please consider running an article that explains how multiple cores are used in Linux, and what applications can use them to advantage.
Also, about 3 generations ago, Intel upgraded the GPU built into the CPU chip to allow faster ripping and rendering of media files using the GPU cores. Which Linux programs have been enhanced to use these hardware facilities. Basil Orr, New Zealand Neil says Interesting insights. It’s true day-to-day desktop use will not push even five-year old processors much at all. I have the exact same CPU in my desktop at work and the newer Intel Core i5 2500K at home and they never feel slow for desktop use. However, if you’re running video processing or modern games then it’s a different story. But yes, it can get a bit confusing as to what truly takes advantage of multiple-cores. One thing I can say is that HyperThreading is an automatic Intel technology that maximises hardware pipelines within the processor by creating virtual threads. As for the GPU technology you might be thinking of Intel Quick Sync, which I think is Windows-
only. We’re looking at video encoding next issue in LXF207 with Handbrake but I don’t think it supports Quick Sync.
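If you want to see that single-core ceiling for yourself, a minimal and entirely generic Python sketch is to time the same CPU-bound job run serially and then through a process pool, which spreads the work across every core the kernel will give it:

import multiprocessing as mp
import time

def burn(n):
    # A purely CPU-bound task that keeps one core busy for a while
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2000000] * 8

    start = time.time()
    serial = [burn(n) for n in jobs]      # one core, jobs run one after another
    print("single process: %.2fs" % (time.time() - start))

    start = time.time()
    with mp.Pool() as pool:               # one worker process per core by default
        parallel = pool.map(burn, jobs)
    print("process pool:   %.2fs" % (time.time() - start))

On a four-core i5 the pooled run should come out roughly four times quicker, while the serial loop never pushes the system monitor much past that 25% mark, which is exactly the behaviour Basil describes.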
Wide websites I decided to learn responsive Web design. Something I could not understand was the way that sites were very often designed so the maximum size was 960 pixels and for larger screens than 960 pixel design is pushed into the middle of the screen. I rejected this and design sites where the content would fill larger screens. When I tested it on my friends smart phone the gadget for viewing the website as a desktop did not work however what I wanted was a success with content filling the screen. Can I conclude that the 960 pixel design is all about making this fairly unimportant gadget work while sacrificing design on the majority of desktop screens. I’m grateful for the generosity of people who leave responsive HTML 5, CSS three and JavaScript on the web and the open source way in
Find out more about encoding and if it can be hardware accelerated.
We’re not sure we’d describe the LXF website as responsive…
which it is done so that things like the above can so easily be changed. Chris Shelton, via email Jonni says First of all no one’s stopping you from making a webpage that will render wider than 960px. But if you do that, without also catering to horizontally-pixelly-challenged then you’re categorically not doing responsive design. If you believe certain metrics then most web browsing nowadays is actually done on mobile, so alienating those users with huge layouts doesn’t really make any sense in the general case. Also just because we have high res screens doesn’t mean we have to fill them–reading long lines is not terribly easy, and most people find better use of screen real estate is to have their web browser only use one half of the display. The 960px limit is a little arbitrary, it originally came from targeting desktops at 1024x768, you need to chop a bit off to allow for scrollbars and wotnot, but 960 also has some nice numbertheoretic properties due to being divisible by 2,3,4,5 and 6 [base12 FTW! – Ed], so it lends itself to equi-column layouts. LXF
Write to us Do you have a burning Linux-related issue you want to discuss? Want to let us know what issue made you throw your gaming laptop out the window or just want to suggest future content? Write to us at Linux Format, Future Publishing, Quay House, The Ambury, Bath, BA1 1UA or
[email protected].
Linux user groups
United Linux!
The intrepid Les Pounder brings you the latest community and LUG news.
Find and join a LUG Blackpool Makerspace Blackpool Makerspace, 64 Tyldesley Road, 10am every Saturday. https://blackpoolmakerspace.wordpress.com Bristol and Bath LUG Meet on the fourth Saturday of each month at the Knights Templar (near Temple Meads Station) at 12:30pm until 4pm. www.bristol.lug.org.uk
Coding Evening Help teachers learn computing and have a beer. Various locations across the UK. www.codingevening.org
Egham Raspberry Jam Gartner UK HQ, Quarterly. Next one January 2016. Date to be finalised http://winkleink.blogspot.com Lincoln LUG Meet on the third Wednesday of the month at 7:00pm, Lincoln Bowl, Washingborough Road, Lincoln, LN4 1EF. www.lincoln.lug.org.uk
Liverpool LUG Meet on the first Wednesday of the month from 7pm onwards at DoES Liverpool, Gostins Building, Hanover Street. http://liv.lug.org.uk/wiki Manchester Hackspace Open night every Wednesday at their space at 42 Edge St, in the Northern Quarter of Manchester. http://hacman.org.uk
Surrey & Hampshire Hackspace Meet weekly each Thursday from 6:30pm at Games Galaxy in Farnborough. www.sh-hackspace.org.uk
Linux is people We reflect on a good year for open source.
With the new year soon upon us we naturally look back over the year and celebrate the events that entertained, informed and expanded the community. Events are the lifeblood that keep all of the many communities active. Speaking face to face with your peers and working together enables projects to improve and innovate; the old adage of 'many hands makes light work' is especially true for many projects.
At the start of 2015 we had the Raspberry Pi's third birthday celebration in Cambridge. Taking place over two days this event celebrated the success story that is the Raspberry Pi and its impact on many communities. The Raspberry Pi has now sold 7 million units, and this exposes Linux to a large percentage of those users.
In June we saw the return of the popular Opentech conference. Taking place at the ULU building in London, this one-day unconference investigated the many facets of open source and open data. The event used data to create music and art, the history of the Open Rights Group and why it was founded, and showcased open source tech in the medical community.
In October we had the seventh OggCamp [see page 44], again hosted in Liverpool. The event continues to draw new and interesting projects, groups and individuals for a fun weekend of tech, talks and socialising.
But what can we look forward to in 2016? Of course we have events such as Fosdem, SCALE14X and there will be many more Raspberry Jam events. Whatever 2016 may bring the many Linux communities will continue to inspire, inform and help others to learn more about Linux. LXF

The Raspberry Pi birthday party attracted fans from across the globe and had a great vibe.
Community events news FLOSS UK Spring Conference FLOSS UK is the new name for UKUUG (which stands for UK Unix User Group and sounds like a UKIP voter falling off a cliff). This is the longest running system/network administrator conference in the UK. The conference has a long tradition for insightful workshops and talks and excellent lightning talks. The event takes place at Mary Ward House in London on March 15 -17. More details here: http://bit.ly/FLOSSUK2016
Internet of Things Conference Not a day goes by without some new piece of IoT (Internet of Things) hardware claiming to solve all of our problems and it’s a constant struggle to understand which platform will deliver the most benefit. But where can you learn more about the subject and its impact on our future? The answer is Munich, Germany between 14-17 March 2016. This IoT conference features over 90 talks and workshops for hackers of all
abilities. It’s an expensive but interesting conference for those that want to hack their own IoT and industrial automation projects. More information can be found on the website. https://iotcon.de Huddersfield Raspberry Jam Yorkshire has its own monthly Jam which takes place in the Sound and Vision Library at Huddersfield Library. At the Jam you can bring along your projects, work with like-minded
hackers and help others to learn more about the Pi. The event is open to both children and adults, but children under 16 must be accompanied by an adult. You can find out more on the website. http://bit.ly/HuddRaspJam
The home of technology techradar.com
All the latest software and hardware reviewed and rated by our experts
Moto 360 2nd gen Lily Prasuethsut thinks Moto’s latest smartwatch is a thing of beauty, but is it more than just a pretty face? Specs... Display: 42mm: 1.37-inch (35mm), 263ppi (360x325) 46mm: 1.56-inch (40mm), 233ppi (360x330) Case: 46x11.4mm, 42x11.4mm CPU: Qualcomm Snapdragon 400, 1.2 GHz quad-core GPU: Adreno 305 450MHz Sensors: Accelerometer, Ambient Light Sensor, Gyroscope, Vibration/Haptics engine Battery: 42mm: 300mAh, 46mm: 400mAh RAM: 512MB Storage: 4GB Comms: Bluetooth 4.0,Wi-Fi 802.11 b/g, microphone
The first Moto 360 has been one of the most popular – if not the most popular – smartwatches on the market. It's the top Android Wear watch that can go head to head with the Apple Watch, and then some. It definitely has annoyances – like its flat tire-looking display and middling battery life – but in terms of design, comfort and overall functionality, the watch has done well. The new Moto 360 screen has a higher pixel density than last year's version, which extends to all the new size variations. The 46mm model is 1.56-inches, has a 360x330 resolution and 233ppi. The 42mm version is 1.37-inches, has a resolution of 360x325 and a ppi of 263. These come with either a 22mm or 20mm band respectively and are available in a range of designs – and without doubt it's the most comfortable smartwatch that we've worn. Where the first Moto was a bit slow and had some performance issues, the Moto 360 2 has surpassed it. The biggest change comes with the processor. It's now a Qualcomm Snapdragon 400 chip with 1.2GHz quad-core CPU and an Adreno 305 450MHz GPU. It's clearly faster and more responsive, comes with 512MB of RAM, 4GB of internal storage and is waterproof with an IP67 rating, or up to one meter submerged for 30 minutes. The operating system is Android Wear 5.1.1 out of the box, and notification and information cards pop up vertically from the bottom of the screen. You can flick through the available cards, and swiping from left to right will remove a
card from the list. Moving your finger in the opposite direction will take you to more options. With the new OS, you can even flick your wrist to scroll through without having to lift a finger. Swiping left over the watch face’s edge will take you to the apps drawer. Your most recent app will be at the top of the scrolling list. Swiping left again brings you to a contacts page from which you can send and read messages. Another swipe in that direction takes you to Google-specific commands, like taking notes or vocalising reminders and setting alarms. You can also draw out emojis to save or send out via text or email.
Smart and stylish
There's Wi-Fi connectivity from the get-go, emoji messages, calls from the watch. Though, unlike Apple Watch, which can accept calls right on the watch, Moto 360 directs calls made and received to your phone. All-important battery life is at least a day. Using it both with an iPhone and Samsung Galaxy it managed a day and a half's use, extending to two days with more sparing use. Charge time takes about 35 minutes on the Qi stand, or a bit closer to 45 if it's completely dead as a doornail. The new Moto 360 certainly wins as the best-looking smartwatch that came out in 2015. It's also the most comfortable we've slapped on our wrists since our days of wearing analog watches. The compatibility is a sore point for iPhone owners since they won't be able to fully use the Moto 360's features; the watch is still primarily a notification machine for
The black ledge remains but it still looks beautiful to us.
iPhone users, but who cares about those guys, right? On Android phones, the 360 does a decent job as a secondary tech gadget and the new Moto 360 is as stylish as you can get for a wearable. LXF
Verdict Moto 360 2nd Gen Developer: Motorola Web: http://bit.ly/Moto360_2 Price: From £220
Features Performance Ease of use Value
7/10 8/10 9/10 7/10
If you loved the first Moto 360, you’ll love the new model’s major design improvements though there are minor added features.
Rating 8/10
Reviews Smartphone
Google Nexus 6P Matt Swider takes the Google phablet for a run and consider if there’s a better Android device in the world? Specs... OS: Android 6.0 Display: Amoled 5.7-inch, 1,440x2,560, Corning Gorilla Glass 4 CPU: Qualcomm Snapdragon 810 v2.1, 2.0GHz octacore 64-bit GPU: Adreno 430 RAM: 3GB Storage: 32GB, 64/128GB options Camera: 12.3 MP2; 1.55 μm; f/2.0, front 8MP Battery: 3,450mAh Comms: Wi-Fi 802.11a/b/g/n/ac dual-band,A-GPS, Bluetooth 4.2 LE, NFC, USB Type-C Sensors: Fingerprint, accelerometer, gyro, proximity, compass, barometer Size: 159.3x77.8 x7.3mm Weight: 178g
The Nexus 6P is Google's flagship Android phablet for 2015/16, but with a 5.7-inch display and cheaper price it won't stretch your hand or your wallet quite as far as the 2014 Nexus 6. The 'P' in the Nexus 6P's name stands for 'Premium', thanks to its all-metal unibody design that's meant to rival the aluminium iPhone 6S Plus and glass-and-metal infused Samsung Galaxy Note 5. It's the bigger and more sophisticated-looking version of the Nexus 5X. Huawei built the Nexus 6P to be different to any other Google-commissioned phone. Although relatively flat around the back with barely tapered edges, it feels comfortable in one hand, yet it still takes two hands to operate it properly. Clearly, it was hard to fit everything in as the 12.3MP camera creates an unsightly rear bulge with a black strip, but this eyesore is a fair trade-off given the better low light photos. The Nexus 6P challenges the Samsung Galaxy Note 5 with a 5.7-inch display and quad HD resolution, keeping pace with its fellow Android juggernauts. The screen has a 2,560x1,440 resolution with a dense 518 pixels per inch, and, all around, it looks brighter and more colourful than the 2014 Nexus 6. Google's Nexus Imprint Sensor is introduced in the Nexus 6P and Nexus 5X, and it works a lot like other biometric fingerprint sensors with a key difference: registering a new fingerprint
takes no longer than eight seconds, whereas Apple and Samsung's methods require too many long presses and pauses. The Nexus 6P makes the jump to charging and transferring data via USB-C. The advantage is clear: USB-C offers faster charging times, and the connector is reversible. Google at least made the transition easier; the Nexus 6P comes with a USB-C-to-USB cable. It harnesses the power of the Snapdragon 810 v2.1, which doesn't run as slow or hot under pressure as the Snapdragon 810 when it debuted in the LG G Flex 2. Qualcomm's 64-bit, octa-core processor also combines a faster 2.0GHz quad-core chip and a slower, but more energy efficient, 1.55GHz quad-core one. The results finally strike the right balance. Saving even more power, the Nexus 6P includes what Google calls the Android Sensor Hub, a dedicated motion chip that drives all sensors on the phone. This leaves the core processing unit more bandwidth (and thus power) to run the OS.

Features at a glance
Marshmallows, mmm: Comes with Android 6 which is faster and has improved power features like Doze and App standby.
Fingerprints: No, not all over it, but on a better biometric sensor, so it's much easier to unlock and authorise yourself.
Faster, better model
There’s an Adreno 430 GPU embedded into this System on a Chip, or SoC, too and, more importantly, 3GB of RAM. The hardware is fit for multitasking through a whole bunch of apps without much slowdown. Running Geekbench 3 sees a score of 4,073, which is much faster than the HTC One M9 (3,595) and LG G4 (3,499), but trails the iPhone 6S Plus (4,418), Samsung Galaxy S6 (4,975) and Note 5 (4,849). It’s exactly what we hoped for, given the souped-up specs, but bargain price compared to top-tier phones from Apple and Samsung. The camera, along with the Nexus 5X, is the best of any Nexus phone. What’s different here is that the 12.3MP Nexus 6P rear camera captures 1.55-micron pixels, which is larger than the normal 1.4 microns. Translation? Bigger pixels and more light captured. At 3,450mAh, the battery is bigger than most other phones we’ve reviewed. Google’s phablet lasts slightly longer than one day with heavy use.
The Nexus 6P is return of the brand to being affordable yet very capable.
What helps, if you’re not constantly turning on the display, are Google’s new software tricks: Doze mode and App Standby. They essentially put the phone into a semi-sleep mode. The Nexus 6P is a luxury phone without the premium to match. Behind its aluminium finish are powerful phone specs that nearly keep up with Apple and Samsung’s flagship phablets. It’s not as fast as the Samsung Galaxy Note 5 and Galaxy S6 Edge+, but this is the best phablet for the price, hands down, and returns the Nexus brand to its more affordable and usable roots. LXF
Verdict Google Nexus 6P Developer: Huawei Web: www.google.co.uk/nexus/6p Price: £449 (32GB)
Features Performance Ease of use Value
10/10 9/10 9/10 8/10
It’s easier to hold and easier on your wallet. New features, like a better camera, are great selling points.
Rating 9/10
Smartphone Reviews
Google Nexus 5X
Is this finally a replacement for the awesome Nexus 5? Matt Swider takes a look at the inexpensive and updated Android phone. Specs OS: Android 6.0 Display: IPS LCD 5.2-inch, 1,080x1,920, Corning Gorilla Glass 3 CPU: Qualcomm Snapdragon 808, dual-core 1.82GHz A57, quad-core 1.44GHz A53 GPU: Adreno 418 RAM: 2GB Storage: 16GB (32GB option) Camera: 12.3 MP2; 1.55 μm; f/2.0, front 5MP Battery: 2,700mAh Comms: Wi-Fi 802.11a/b/g/n/ac dual-band, Bluetooth 4.2, A-GPS, NFC, USB Type-C Sensors: Fingerprint, accelerometer, gyro, proximity, compass, barometer Size: 147x72.6x 7.9mm Weight: 136g
What's the sound of one hand clapping? It's a Nexus 5X owner giving praise to Google and LG for remaking a palm-friendly Android phone while effortlessly holding it in the other hand. The Nexus 5X looks and feels like the Nexus 5 adapted for modern times. It's lightweight and, with a 5.2-inch display, my fingers can just reach all the way across the screen. Little else has changed here. It uses the same IPS LCD screen technology and 1,920 x 1,080, and the resolution is now 432 pixels per inch. You will, however, notice five apps now fit across the screen instead of just four. The Nexus 5X inherits the Ambient Display setting of the Nexus 6. It wakes up the phone with a grayscale notification screen whenever the device is picked up or a notification arrives. The Nexus 5X and Nexus 6P (reviewed p18) introduce Google's first fingerprint sensor, or what it calls the Nexus Imprint. Don't let the fancy name fool you. It works like other phone-based biometric fingerprint sensors out there, except it's on the back of the device right below the camera, not around front acting as the home button. The good news here is that the Nexus Imprint fingerprint sensor is fast, accurate and easy to set up. It took me a few seconds to register a finger and half a second for our phone to unlock. Juicing up the phone via the included Type C 15W (5V/3A) charger for just 10 minutes makes it last four
hours. We could also charge via other USB-C devices, like the Nexus 6. But it's a pain because your computer likely uses USB, and now you have yet another cable type lying around, until more devices appear later in 2016. The 5X uses a Snapdragon 808 processor with a 64-bit hexa-core CPU that's a combined 1.44GHz quad-core chip and 1.82GHz dual-core chip. A matching Adreno 418 GPU is also integrated into this processor.

Features at a glance
Camera: Extra large 1.55-micron pixels do well in low light, so even with 12.3MP you get the most from them all.
Colours: Google is offering a bland selection of ice-cream like colours, but that's what you get with plastic.
Future-proof?
The main disappointment with the phone is the 2GB of memory; the same as the old Nexus 5 model. We think 3GB would have been more on spec for a phone of 2015 not 2013. It also continues to lack an SD slot. Worryingly, Geekbench scores were all over the place: it began well at 3,504, matching the LG G4. Run back to back and speeds dropped to 3,025, then 2,439. In contrast, the LG G4 always stayed steady at around 3,500. This sputtering score carried through to real-world performance, such as a slow loading camera app or when running multiple apps. But the Android Sensor Hub, combined with Android Marshmallow's battery-saving Doze software tricks, is a bigger benefit down the line than the CPU drawbacks. Google proclaimed that the Nexus 5X (as well as the Nexus 6P) has the best camera it has ever put into a Nexus. That's not saying much, given the very average Nexus 5 and Nexus 6 photos. Google says 80% of photos are taken in low-light and has, therefore, selected a sensor to suit with larger than normal pixels. The end result was that we found down the pub snaps turned out better than its competition. It's also capable of 4K video at 30fps. The Nexus also contains a 2,700mAh battery, giving it a nice boost considering the 2,300mAh capacity of the Nexus 5 from two years ago. In real-world wear-down tests, we found that the Nexus 5X is able to go a full day because of Google's software tricks like Doze mode and App standby. As long as you're not expecting a multimedia powerhouse, it'll perform just fine.
The Nexus 5X looks and feels like a fresh take on the Nexus 5 with its larger display and better camera.
No, the Nexus 5X isn't the best phone you can get, or even the best Nexus anymore due to the Nexus 6P being the bigger and faster of the two. It's more like the perfect fit for one hand and the closest thing to a five-finger discount given the specs. LXF
Verdict Google Nexus 5X Developer: LG Web: http://bit.ly/Nexus_5x Price: £339 (16GB)
Features Performance Ease of use Value
8/10 6/10 9/10 8/10
Its fingerprint sensor and USB-C port take getting used to, but it’s a futureproofed phone aside from the memory.
Rating 8/10
Reviews Raspberry Pi
Raspberry Pi Zero
Les Pounder delves into another helping of Raspberry Pi. This time it has zero calories but does it still taste as sweet? Specs CPU: Broadcom 1GHz BCM2835 RAM: 512MB Storage: Micro SD card slot Ports: Mini HDMI (1080p 60), Micro USB for data and power Other features: Unpopulated 40 pin GPIO header, unpopulated composite video header Size: 65mm x 30mm x 5mm
This is the second Raspberry Pi released in 2015 and this time the Foundation shift their focus from power to price. Previous Raspberry Pi models have sold for around the $30 mark and improved the specifications with each release. But after a meeting with Google's Eric Schmidt, Eben Upton changed the focus of the next Raspberry Pi to being cheap rather than all powerful. The Raspberry Pi Zero is a $5 computer. That's not a typo, we can now buy a computer for the same price as lunch. The Pi Zero is closer in specification to the original Raspberry Pi using the original BCM2835 System on a Chip (SoC) with an ARM11 CPU clocked at 1GHz, and it offers 40% more power than the Pi 1. Micro SD card storage is present on the Pi Zero, but the usual push-click locking mechanism has been removed. Ports around the board are sparse with only micro USB for power and peripherals and a mini HDMI port for audio/video. The micro USB peripheral port requires the use of an adaptor as does the mini HDMI port, but as always, a number of retailers have already filled that gap. On Pi Zero you'll find no DSI or CSI connectors, which means no compatibility with the official Pi touchscreen or camera. These connectors were taken off to reduce the cost of the Zero. Pi Zero features the, now standard, 40-pin GPIO (General
Benchmarks

Test              Pi 2     B+      Zero
SunSpider (ms)    2,476    9,477   10,507
3D                499      1,657   1,672
Access            190      482     1,258
Crypto            194      647     837
Math              141      431     872
String            930      4,281   2,968

Sysbench          Pi 2     B+      Zero
Prime avg (ms)    29       50      35
Prime min (ms)    29       50      35
Prime max (ms)    54       85      103
The Raspberry Pi Zero is a small board but retains the 40-pin GPIO header for compatibility with the massive number of Raspberry Pi add-ons.
Purpose Input Output) but the header pins aren’t present, offering an opportunity to try out your soldering skills. We tested the GPIO with Python 3 and Scratch and can report that it worked exactly as expected. We also tested a typical add-on board, in this case the Unicorn HAT from Pimoroni, and that too worked after installation. So the Pi Zero is compatible with a large number of the add-on boards.
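Nothing about driving the header is Zero-specific either: once pins are soldered on, the same RPi.GPIO code used on any other model runs unchanged. As a minimal sketch (the pin is simply our choice), blinking an LED wired to BCM pin 18 looks like this:

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)      # use Broadcom (BCM) pin numbering
GPIO.setup(18, GPIO.OUT)    # LED plus resistor wired to GPIO18

try:
    while True:
        GPIO.output(18, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(18, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()          # always release the pins on exit

On Raspbian you'll typically need to run it with sudo, and add-on boards such as the Unicorn HAT talk to the very same header.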
An IoT thang
We tested the Pi Zero with the latest version of Raspbian Jessie, updated just before the Pi Zero was released, and boot times were slower clocking in at 52 seconds from power on to desktop – this is comparable to the original Pi. So who are the target market for Pi Zero? The makers are one group who will benefit from a low-cost platform with an expansive user base. The Pi Zero is an embeddable platform that will fit well into an IoT (Internet of Things) project or any other permanent installation. While the Pi Zero doesn’t come with any Wi-Fi connectivity, it can be added relatively easily. In fact, there is already a hack to add a Wi-Fi dongle to the unused USB headers under the Zero. Another group to benefit from the Pi Zero are those who cannot afford a computer. With Pi Zero we reduce the cost to the bare minimum and offer a low point of entry for families to enjoy learning together. So why should you buy the Raspberry Pi Zero? If you love robotics,
weather projects and hardware hacking then the Pi Zero is an ideal platform for low cost experimentation. Embedding the Zero into a project is now just as cost effective as using boards, such as the ESP8266 and many of the Arduino clone boards. By removing some of the components and leaving a distilled Pi experience, we have a cheap, embeddable platform that can easily integrate into the Raspberry Pi add-on ecosystem. Being compatible with addon boards and using the same OS also enables access to the vast library of Raspberry Pi centric resources. The Raspberry Pi Zero now joins the family of boards and offers an exceptionally cost-effective first step into the world of computing, coding and electronics. LXF
Verdict Raspberry Pi Zero Developer: Raspberry Pi Foundation Web: www.raspberrypi.org Price: £4/$5
Features 7/10 Performance 5/10 Ease of use 8/10 Value 10/10 The Raspberry Pi Foundation has once again released a platform that will excite and invigorate a generation of coders. Thanks largely to a low price and massive community interest.
Rating 9/10
Raspberry Pi Reviews
Divide by zero
Les Pounder interviews Eben Upton about the Pi Zero’s journey and some good advice that helped pave the way for a $5 computer.
With the surprise release of the new Raspberry Pi Zero, we sat down with Eben Upton, the head of Raspberry Pi Trading, to talk about its genesis and the journey that it has taken along the way.
Linux Format: Why does the Pi Zero exist and how did it come to market?
Eben Upton: There's an element of 'because we can' and if you can build it, why wouldn't you? The reason we made Pi Zero is because the Raspberry Pi is still too expensive, not for a lot of people, but there is a subset of people who are wondering if coding is for them and this is their 'first toe in the water'. The idea was to provide something of a stepping stone to general computing at a low cost. If people like it then they will move on to Raspberry Pi 2 and use their Pi Zero for another project. When we first thought about making a low-cost device we first thought shall we make a Pi or make something with the Pi name but using a microcontroller? We very quickly backed away from that idea because we felt that it had to be a bona fide Raspberry Pi that runs Raspbian and has the GPIO which also enables users to be part of the community – and the community is the big thing about the Raspberry Pi. I now find that if I Google generic Linux questions I get answers related to the Pi, which is nice. We do see the Raspberry Pi as bringing Linux to a whole new generation and different type of person who wouldn't have imagined using Linux.
LXF: Initially your idea for the Pi Zero was to create a more powerful Pi?
EU: Yeah, we started thinking about what was to become Pi 2 in 2013, and this was to be a more powerful board but after a chat with Google's Eric Schmidt, where he said that "was a stupid idea and that you should try and make things which are less powerful and cheaper", which was great advice. So we scrapped work on what would have been a Pi 2 in late 2013/early 2014. We chose to take a longer route to produce Pi 2 for the same price as the original Pi. This enabled us to work on another strand
EBEN UPTON ON LINUX
“We do see the Raspberry Pi as bringing Linux to a whole new generation.”
LXF: Who is the Pi Zero aimed at?
EU: Our primary target is those who want a low-cost introduction to computing. But who else will buy it? Well, enthusiasts such as myself, who will buy it for IoT, robotics and embeddable home projects where you need something small and power efficient.
LXF: There seems to be a bit of interest in ‘cheap’ computers, with CHIP, the $9 computer, which is still
The Pi Zero offers a £4 computer to all.
some way off, whereas the Pi Zero is here already.
EU: The Pi Zero isn’t a mean-spirited attempt to knock over other people’s business models; this is the best Pi that we could make at the lowest price. One of the things we are proudest of is that before Pi there just weren’t machines like this for less than $100. Now we have this enormous world of cheap Linux computers and we are never upset when another one is introduced by another business.
LXF: The Pi Zero sports a 40-pin GPIO, so does that mean it’s compatible with existing add-ons?
EU: 100%. If it fits on a previous model then it will work with Zero. We haven’t released a formal specification for future Zero add-on boards, but third-party suppliers are already releasing boards with a pretty close specification.
LXF: Now that we have a $5/£4 computer, do you think that users will think about purchasing add-ons that cost more than the computer itself?
EU: Suppliers are already doing great things with smaller add-on boards, and because you can fit less stuff on them they work out cheaper, but I suspect that add-ons priced around £10 to £15 will have a great market, as people will want to buy accessories for these devices. LXF
Credit photo: Raspberry Pi Foundation
Reviews Distribution
Fedora 23 Workstation Cursing the Delhi weather and his misshapen head for denying opportunities to don his Fedora, Shashank Sharma tests the latest Fedora release instead. In brief... The latest release of the Red Hat sponsored but community driven distro is available in three editions. The Gnome-powered Workstation release is aimed at regular desktops. Each release features cutting-edge technologies, many updates and new features to improve the desktop experience. The distro is aimed at advanced and new users alike. See also: Korora, OpenSUSE, Mageia.
Taking a cue from the past several releases, Fedora 23 missed its originally scheduled release date owing to errant bugs. The team of developers worked around the clock and managed to ship the distribution with only a week’s delay. And just like its previous releases, the final product is a mix of essential new features, useful updates to the look and feel, and plenty to appease old Fedora users and appeal to new ones. In addition to the three editions which debuted after a major rejig, the usual array of spins (official releases favouring alternative desktops or aimed at special use-cases, such as the Games and Electronic Lab spins) is also available.
Plenty to like
With the latest release, the distro continues to shift to Wayland, which is scheduled to be the default display server in Fedora 24. For now, the distro offers a preview, which you can boot into by choosing the Wayland session when logging into your user account. Fedora 23 Workstation, which is aimed at hobbyists and home users, ships with Gnome 3.18 and brings with it an assortment of visual and practical updates. The Files application features a cleaner sidebar, as many of the previously default locations have been relegated to the Other Locations tab. Mounted partitions and USB disks are relegated to this section instead of cluttering the sidebar.
Features at a glance
Firmware updates: the Linux Vendor Firmware Service allows firmware to be installed from within the Software app.
Google Drive support: the Files application allows you to access your Google Drive as if it were a local filesystem.
Apart from the Gnome 3 Wayland preview, the distro also has a Gnome Classic session, which favours the old and celebrated look and feel of Gnome 2.
The Files application also now features a button in the header bar to showcase long-running tasks, such as copy or move operations, making progress dialog boxes obsolete. For touch-enabled devices, a long-press now provides access to context menus, a long-desired feature. With the last release, the distro switched to DNF as the default package management tool, and it’s possible to use this to upgrade to Fedora 23. The easy process even gives users the option to revert to the previous release, if required. At just shy of 1.5GB, the 32- and 64-bit ISO images are chock full of applications and utilities to please just about everyone. For others, there’s the Software application, which provides access to enormous software repositories. The application, which started as a clone of Ubuntu’s Software Centre, has grown into a competent alternative. Its integration of the Linux Vendor Firmware Service makes it easy for manufacturers to push firmware updates to end users. This enables you to install updates to removable devices, such as monitors and printers, just as you would install any application. Speaking of applications, Fedora 23 ships with LibreOffice 5, which features many new features and improvements such as built-in image cropping, style previews in the sidebar, and improved import and export to a variety of file
formats. The default browser is Firefox 42 and, in true cutting-edge style, Fedora 23 ships with kernel 4.2, while several of its peers, such as OpenSUSE, continue with the 4.1.x series. As befits a security-conscious distro, the out-of-date SSL 3.0 protocol and RC4 cipher, which are prone to exploitation, are disabled by default in the encryption libraries. Although not voluminous in terms of visible updates or new features, Fedora 23 has had plenty of work under the hood, such as the migration of core systems like the Anaconda installer to Python 3. With yet another robust release from the Fedora camp, coupled with a near-perfect release from OpenSUSE, this is turning out to be a great and exciting time to be a Linux user. LXF
Verdict Fedora 23 Workstation Developer: Fedora Project Web: www.fedoraproject.org Licence: GPL and others
Features 8/10 Performance 9/10 Ease of use 9/10 Documentation 9/10
A rock solid distro that’s a good match for newbies and advanced users, with plenty of new features.
Rating 9/10
Linux distribution Reviews
OpenSUSE 42.1
As the SUSE Linux Enterprise-based distro leaps forwards, the intrepid Shashank Sharma discovers whether it’s a gecko or a chameleon. In brief... One of the leading RPM-based desktop distributions. Its development is supported by a multinational corporation which uses OpenSUSE as a test bed for its Enterprise edition. With the latest release, OpenSUSE terminates its nine-month release cycle and will instead make releases based on the availability of SLE. See also: Fedora and Mageia.
The OpenSUSE project began consolidating and streamlining what it offers last year with the merger of the bleeding-edge Tumbleweed release with the developmental Factory branch, to create a rolling development codebase for the distribution (distro). Complementing that release is the project’s new line of stable releases, dubbed Leap. The source code of SUSE Linux Enterprise (SLE) forms the basic building blocks for Leap and the distro will get bug fixes and security updates from the SLE releases. According to the release notes, the first version, 42.1, is based on the first service pack of SLE 12. Going forward, Leap 42.2 will be based on SP2 and 42.3 on SP3. The core focus of the new distro is stability. To this end, Leap uses thoroughly tested components vetted by SUSE developers who cater to the Enterprise customers. In this way, OpenSUSE Leap is to SUSE what the RHEL-derived CentOS is to Red Hat. The other big change, because of the shift to SLE packages, is the release cycle. In addition to the stable underpinnings, Leap now also adopts the release cycle of SLE. The plan is to have new major releases in sync with SLE releases and service packs. According to the release notes, the project expects Leap users to upgrade to the latest minor release within six months of its availability. This gets users 18 months of maintenance and security updates for every minor release.
Features at a glance
Mature base: based on stable and mature open-source components borrowed from the SLE release.
Install-only DVD: unlike other desktop distros, OpenSUSE is available as an install-only DVD for 64-bit machines.
Leap’s version number is, yet another, shout out to The Hitchhiker’s Guide to the Galaxy’s answer to the ultimate question of Life, Universe and Everything.
A major branch, like Leap 42, is expected to receive at least 36 months of updates, which also eradicates the need for the OpenSUSE Evergreen branch. Due to Leap’s focus on stability, the distro is made up of mature packages that are a version or two behind the latest release.
A giant leap
By default the distro uses Btrfs as the filesystem for the root partition and XFS for the home directory. However, it’s best to use Btrfs for the entire filesystem as you can use the Snapper tool for managing snapshots of the filesystem. Apart from the default hourly snapshots, Snapper also creates snapshots before and after you make any changes to the system using Yast or the package manager. With Leap 42.1 you can boot straight into a snapshot. The Snapper tool has been integrated into the distro’s flagship Yast configuration tool to provide snapshots at filesystem level. Furthermore, Yast includes some new modules, such as Yast Docker for controlling the Docker daemon and managing containers. In line with its commitment to stability, the release ships with a 4.1 series LTS Linux kernel. This is also the first stable OpenSUSE release to ship with KDE Plasma 5 (5.4.2 to be exact). The distro uses apps from both KDE Applications 15.08 and 15.04 sets. Furthermore, the distro includes a KDE
Frameworks 5-based version of the Dolphin file manager. The team of OpenSUSE KDE developers is still hashing out a plan for how to ship updated KDE Applications releases through the life cycle of Leap 42.1. In another departure from tradition, the OpenSUSE Leap branch is only available as an installable DVD for 64-bit architectures. The developers felt that the installable Live CDs weren’t using the full potential of the Yast installer. OpenSUSE Leap is an interesting addition to the mix of distros. It isn’t as dated as Debian Stable, promises to deliver more stability than regular desktop distros and is much cleaner than CentOS. This makes it useful for all kinds of desktops and perhaps even server deployments. LXF
Verdict OpenSUSE 42.1 Developer: OpenSUSE Project Web: www.opensuse.org Licence: GPL and others
Features 9/10 Performance 9/10 Ease of use 8/10 Documentation 8/10
A wonderfully realised and flawlessly executed distro – a must-have for OpenSUSE users.
Rating 9/10
Reviews 3D printer
Ooznest Prusa i3
Alastair Jennings discovers the inner workings of the 3D print revolution with a self-build kit and flatly ignores the pile of leftover nuts and bolts… In brief… A modified version of the Prusa i3, which includes a Z-axis frame brace and LCD control board for quick and easy adjustments to printer settings. The kit comes as a Lego-like self-build, which takes between five and ten hours to put together. See also: Lulzbot Mini.
Forget the glossy finish of the latest 3D printers: the Prusa i3 is all about going back to basics and understanding where and how 3D print technology evolved. This Ooznest kit is based on the popular Prusa i3 design and sits alongside the latest stream of FDM (Fused Deposition Modelling) printers, such as the Lulzbot Mini. The kit costs £475 and arrives in a large cardboard box, inside of which you’ll find a further selection of smaller boxes, all carefully labelled and tagged with their contents like a huge Lego kit. Full instructions are available to all on the website (http://ooznest.co.uk), so you can see beforehand if the project is for you and what tools you need. Built around a metal framework and bolted together, the build quality really comes down to a simple matter of how well you can construct the thing. Starting with the frame, the kit features a good quality pre-machined metal Prusa i3 frame to which everything gets bolted, and at the heart of the printer is the RAMPS 1.4 control board, which is the latest iteration of the board and a popular choice. This integrates the Arduino MEGA2560 board and stepper drivers and has a proven track record for reliability. The Hexagon all-metal hotend and Bulldog Lite extruder are both really good quality parts, especially at this price. This is a 12V setup that enables the use of 1.75mm filament through the Hexagon’s 0.4mm nozzle.
Features at a glance
LCD controls: a high-end feature for a self-build kit, which enables standalone printing from an SD card.
Print head: a 12V print head that does the job and can be upgraded, as the Prusa i3 is a modular self-build.
Once built, calibrated and adjusted you’ll have a good quality 3D printer.
Ooznest has made a few modifications to the standard Prusa i3 design when it comes to its kit. Some of these changes have been developed by the Ooznest team, while others are popular modifications made by the Open Hardware community. These modifications make a big difference to the final print quality when compared with the original Prusa and include a Z-axis frame brace. An LCD control board finishes off the design and this enables you to quickly adjust printer settings, including fine-tuning the X, Y and Z-axis positions, the heat of the hotend, fan speed, and – if an SD card is inserted into the side – direct card printing.
Budget self-build
The build will take you between five and ten hours depending on your skill (it took us six hours), after which it’s commissioning time. It takes around 20 minutes to go through the commissioning process, with adjustments made using a computer and the Printrun software, and mechanically by tightening and loosening screws. Ooznest suggests using the Cura software by Ultimaker, which provides a solid and easy to use print interface, and, again, before use it needs to be calibrated for the Prusa i3 printer. In the first week the printer required a little retuning every couple of prints, but once the structure had settled down and all rogue bolts were fully tightened
print accuracy and reliability came close to 100%, and not too far behind that of the Lulzbot Mini. Print quality is also surprisingly good and matches that of the Lulzbot for simple structures and models. However, again, there’s a slight limitation with the finer filament and lower voltage when it comes to bridging and overhangs within structures. That said, a small upgrade and a little tinkering would easily enable quality prints that would rival printers twice the price. Ultimately, if you are looking to buy a 3D printer or already own one but want to know more about how they work then buy an Ooznest Prusa i3. The build process is thoroughly enjoyable and the amount you learn while making the thing is incredible. LXF
Verdict Ooznest Prusa i3 Developer: Ooznest Web: http://ooznest.co.uk Price: £475
Features 10/10 Performance 9/10 Ease of use 6/10 Value 9/10
Once built the end result is a print quality that matches many other FDM printers that are twice the price.
Rating 9/10
Roundup
Every month we compare tons of stuff so you don’t have to!
Photo managers
Quite the shutter nut, Mayank Sharma is looking for the perfect tool to whip all his terabytes of albums and images into shape.
How we tested... All applications, except Fotoxx and XnViewMP, were downloaded from the Linux Mint repositories (repos). Our main focus, among all their features, is each application’s cataloging functions. However, this isn’t a test of each application’s editing prowess and accuracy. All applications in the Roundup use the same pool of sample images. We’ve used captures from camera phones, consumer digital cameras and entry level DSLRs. There were images in common formats, including JPG, PNG, GIF, RAW formats from Canon (CR2), Nikon (NEF) and Panasonic (RW2) along with videos in MPG, MOV and 3GP. Special attention has been paid to usability. Those managers that supply many features without inundating the user score better than apps loaded with features in a cumbersome user interface.
Our selection: DigiKam, Fotoxx, KPhotoAlbum, Shotwell, XnViewMP.
We don’t need to tell you how digital technology has transformed photography. Thanks to the proliferation of digital cameras we are consuming hard disks by the gigaloads, and decent integrated cameras on smartphones have only compounded the problem. We live in a time where there’s always a camera with us at every event, and an efficient cataloging system ensures that any captured moments aren’t lost in the nooks and crannies of a hard disk. A photo manager helps bring order to the chaos and helps sift through the
photos by adding tags and metadata. Some will even flag similar captures and help us identify the best version and discard the rest. A good photo manager should also have some editing capabilities that enable it to make corrections to the captures and remove common defects, such as red eye, tweak contrast, bring out details in the shadows and mellow highlights.
If you shoot RAW images you’ll also want a manager that’s able to import and process that format. Another aspect of capturing and processing images that needs to be addressed is support for services commonly used for sharing photos, such as social networks and image sharing websites. It’s become an important requirement for contemporary photo managers.
Photo managers Roundup
Organisation Are they effective sifters?
Since the primary purpose of a good photo manager is to manage images, each application in the Roundup should offer plenty of ways to visualise your album. DigiKam enables you to organise collections of photographs into directory-based albums. It can also filter images based on tags and other metadata. You can describe and organise images by specifying a title along with a caption, as well as other information such as details about rights and location. You can also use the application to tag faces and sort the images by assigning labels (Accepted, Rejected, Pending), colours and ratings. You can search the images based on various criteria and then save the searches for later perusal. One helpful feature is that, on initial launch, you can ask DigiKam to store the metadata information that has been assigned to the images in order to improve the application’s interoperability with other photo managers.
XnViewMP also boasts a wide variety of tagging features. You can add a rating and colour label to identify images based on their quality (Excellent, Average etc) or purpose (Work, Home, etc). You can categorise images in several predefined categories, such as photographs, drawings and videos, and the application can sort images by all sorts of metadata (orientation, rating and print size etc). Then there’s Shotwell, which automatically groups photos and videos by date and tags while importing them. It too has support for tags and ratings and enables you to flag images, add a title and a description to an image. Like DigiKam, Shotwell also allows a user to organise images with a search feature that offers quite complex search options and enables you to save the search criteria for quick retrieval later.
DigiKam ships with an experimental image quality sorter which, when enabled, will automatically segregate images based on various defects, such as blur, noise and auto-exposure.
Fotoxx also allows you to add metadata, dates, ratings, captions, comments and more to the images that it manages. The application has predefined tag categories to help you easily tag images. Like the others, Fotoxx enables you to search images using any metadata as well as folder and file names. Tags and other metadata are known as annotations in KPhotoAlbum and these can be easily managed from the annotations window. You can add a label, date, time, rating and description to the images as well as add tags for people and places. As you move through the library tagging images, you can save time by copying tags from previously tagged images.
Verdict digiKam
+++++ Fotoxx
+++++
KPhotoAlbum
+++++ Shotwell
+++++
XnViewMP
+++++
No tool gets the edge since they all excel at sorting and cataloging.
Supported file formats What files can they read and display?
Almost all the managers in this Roundup can read and display the popular formats and work with RAW images from DSLRs. DigiKam can read and edit all types of images and while it can recognise and play video files in many formats,
including AVI, MPG and 3GP, it won’t allow you to edit them. However, DigiKam can work with RAW images thanks to the dcraw tool. In addition to regular images, Fotoxx can import RAW files in most formats and can edit with deep colour.
KPhotoAlbum can work with all types of images and is perhaps the best tool for organising gigantic collections of pics.
However, it can’t read any video files. In a similar way, XnViewMP claims to support over 500 image formats, including all RAW formats, and displays animated GIFs, but it can’t open video files. KPhotoAlbum supports all the normal image formats and, thanks to dcraw, that includes RAW formats produced by most digital cameras and scanners. It renders RAW images faster than the others too, since it can use the thumbnails embedded in RAW images rather than waiting for the whole image to decode. Shotwell supports popular image formats but fails to pick up some, such as GIF. It also supports video files in formats supported by the GStreamer media library, including Ogg, MP4 and AVI. Shotwell has limited RAW support: it doesn’t display the real thing but works with a JPEG file derived from the original. While importing RAW files, you can choose to either use the camera’s JPEG or Shotwell’s version.
Verdict DigiKam
+++++ Fotoxx
+++++
KPhotoAlbum
+++++ Shotwell
+++++
XnViewMP
+++++
DigiKam and KPhotoAlbum win for their support for RAW images.
Roundup Photo managers
Performance and usability Do they play nicely?
Don’t confuse the photo managers in this Roundup with your simple everyday image viewers. As you’ve seen, they pack in a lot of functionality and that can present a couple of issues. For starters, all the features and add-ons can slow
some of them down to a crawl. Handling images might not seem like a resource-intensive task, but categorising and editing photos, however minutely, can put considerable strain on your computer’s resources. Second, the more features an
application has, the more effort the developers have to put in to make it usable, eg an endless stream of menus and sub-menus isn’t very usable. Ultimately, we’re looking for a manager that has an intuitive interface as opposed to a feature-rich but cluttered one.
DigiKam +++++
This application uses a first-run assistant to help set up things like the location of the Photos folder and the location of the database file. Advanced users also get to decide whether RAW images should be opened via the RAW adjustment tool to manually adjust and correct the image before importing them. Each step in the wizard has a brief explanation to help users make the choice that suits their work flow. The app itself has a simple two-pane interface with the list of libraries on the left and the thumbnails of the images inside each library on the right. You can change the view and access the commonly used tools, such as the image editor and light table tool from the top panel. The interface is intelligent and also quite intuitively arranged, as it gives access to lots of tools without inundating the first-time user.
Fotoxx +++
On initial launch, Fotoxx fires up the Quick Start guide in the browser, along with a dialog box to index your image library. This, as we would expect, can take quite some time depending on the size of your image collection. The tool has quite an esoteric interface which isn’t as intuitive as the other managers that we tested in the Roundup. The options listed in the left-side panel change according to what you are viewing, eg when you’re viewing a gallery you get options to cycle and sort through the images, and when viewing a particular image you get options to modify the image by adding metadata or editing it. Fotoxx might look different but it’s still very usable; all the features can be accessed from the relevant options that are clearly labelled in the left-side panel. The best thing about Fotoxx is that it can be navigated with the keyboard alone.
Plugins and add-ons What more can they do?
Who doesn’t like more bang for their buck? While these managers pack in a lot of features, some can take on more work thanks to optional plugins. The two managers that make the most of plugins are DigiKam and KPhotoAlbum. Both can be extended with plugins from the KIPI (KDE Image Plugins Interface) framework. The KIPI package ships with 40 plugins that add all kinds of abilities to the app, eg you can export images to flash, stitch them into a
panorama, blend bracketed images and stream them to a DLNA device etc. When you are done sorting and touching up your images, you can even publish them on the web or post them to friends straight from within the application. Thanks to the KIPI plugins, both DigiKam and KPhotoAlbum can export images to PicasaWeb, Flickr, Facebook, SmugMug and other websites. Similarly, Shotwell, which uses its own plugins framework, can publish images to various popular
websites and social networks. Shotwell ships with its own plugins and many of these power features of the application, such as the F-Spot data importer and various slideshow transitions. Unlike the other applications, Fotoxx only uses its plugin framework to hand editing duties over to an external tool, such as Gimp or ImageMagick. XnViewMP too differs from the other applications in that it ships with just four plugins, which allow it to read WebP, OpenEXR and other exotic formats.
Verdict DigiKam
+++++
KPhotoAlbum
+++++ Shotwell
+++++ Fotoxx
+++++
XnViewMP
+++++
KDE’s KIPI plugins package wins this round for the KDE applications.
KPhotoAlbum +++
Unlike the other photo managers, KPhotoAlbum’s main interface contains a collection of icons. You can use these to sort through the images in your library, eg selecting the People option will list images according to an alphabetical list of people that are tagged in them. After you’ve selected an item from a category, you’re brought back to the main page of the browser, which will only display information about those images matching the item chosen in the category. The menu on the top of the main interface helps sort and catalogue the library by adding categories, sub-categories, labels, tokens and annotations. Annotations are tags and the manager has an extensive set of options to catalog images. In fact, the application itself recommends that users read the documentation for the annotation window when you bring it up.
Shotwell +++++
The Shotwell application imports photos when launched for the first time. It has the simplest interface of the lot and that’s probably down to the fact that it offers little functionality when compared with others in the Roundup. The manager is divided into two panes: the pane on the left helps you find images by separating them by date and tags, and the thumbnails for the images that meet the criteria chosen in the left panel are displayed on the right. All you need to do from this stage is double-click on an image to work on it. When you are viewing a particular photo, the bottom panel lists all the tools you can use, eg red eye reduction and automatic enhancement. Other minor options, such as ratings, are accessed from the Photos menu in the top menu bar and there’s a dedicated tags menu.
XnViewMP ++++
This manager launches with a friendly startup wizard, but the main interface is rather convoluted and bombards the screen with about half a dozen panes. The pane on the left, for instance, displays the filesystem by default but, once a library is organised, the view can be changed to filter images based on other criteria. Another pane displays thumbnail images along with some metadata, such as image resolution and time. You can rate images using icons on the thumbnail itself and categorise selected images with another pane. When you double-click an image it opens up in a new tab with a simplified interface. This has a list of icons for quick access to commonly used editing functions, such as rotate, crop and resize. Other advanced editing functions, such as level adjustment, can be accessed from the Image menu.
Support and documentation
Verdict
Where do you go for help?
While a simple image viewer wouldn’t need much documentation, the photo managers in this Roundup are all multifaceted. Some, such as DigiKam and KPhotoAlbum, have detailed documentation that you can read in various formats both online and offline. The DigiKam wiki has several user-contributed tutorials that cover a wide range of topics: from basic features to advanced image editing. The project doesn’t have a forum, but there is a
mailing list to get support for problems not covered in the docs or the FAQ. KPhotoAlbum also hosts a wiki and a mailing list. Besides textual documentation the project has several videos that introduce different features. While the videos are based on an older release, they do cover some essential functionality provided by the manager, such as facial recognition. Shotwell’s website offers brief usage documentation which is categorised by photo management tasks. Each
category further provides to-the-point guidance on various sub-topics. These are complemented by mailing lists and an IRC channel, but there’s no wiki or forum board. Fotoxx has a single-page website with a brief intro, along with example images and videos of the different features. Besides that the project doesn’t have a mailing list, forum or wiki. That’s still better than XnViewMP, which supplies no documentation at all and uses a forum for all support queries.
DigiKam
+++++ Shotwell
+++++
KPhotoAlbum
+++++ Fotoxx
+++++
XnViewMP
+++++
It’s a shame that both Fotoxx and XnViewMP have so little documentation.
Roundup Photo managers
Configurability How customisable are they?
While all the managers can be used without tweaks, some adjustments will benefit your workflow. With DigiKam you can define the locations where the images are stored, whether local, removable or remote. It also has detailed options to customise the look of the Album list and the Editor window. You can even
select the information that’s displayed in the tooltip for a highlighted image. DigiKam also enables you to create your own custom metadata templates with extensive details. Advanced users can switch database engines and migrate the database to a remote location.
XnViewMP is loaded with features but you’ll need to tweak its interface and hash out a workflow to use it effectively.
While XnViewMP doesn’t offer as many options as DigiKam, it still enables you to customise the essential ones, eg you can define custom keyboard shortcuts for working with images and define mouse gestures for navigating through the library and the images. The application also enables you to customise certain aspects of the interface. XnViewMP goes to great lengths to help display the best thumbnail in the image browser window and allows you to customise the names of the predefined labels. In contrast, Shotwell only offers minimal customisation: you can change the colour of the image browser’s background, define the location of the image library and ask the application to keep an eye out for new image files. It also allows you to control how the imported images are organised and select external editors for editing normal images as well as RAW files. KPhotoAlbum offers little beyond the ability to customise how and when the application searches for new images, and various aspects of the interface can be tweaked, such as the default thumbnail size.
Verdict DigiKam
+++++ Fotoxx
+++++
KPhotoAlbum
++++++ Shotwell
++++++ XnViewMP
+++++
Fotoxx is the only manager that offers no customisation.
Editing functions How good are they at tweaking images?
While these tools primarily exist to help you organise your image library, having the ability to edit images will help save a trip to an external image editor. DigiKam, which is technically an ensemble of several tools, has one of the most elaborate image editors. You can make adjustments to an image’s colours, brightness and levels as you’d see in a proper image editor. There are also anti-vignetting and sharpening tools as well as automatic lens correction and red-eye removal. DigiKam also includes a large selection of effects and decorations. Fotoxx has a rich set of retouch and edit functions that go beyond changing brightness, contrast and colour. In fact, it has so many advanced image processing options it’s considered by many to be a viable alternative to Gimp. Like most dedicated image editors,
Fotoxx enables you to select an object or area within an image using various tools, such as freehand outline, follow edges and select matching tones etc. Another unique feature of the tool is that you can edit images without using layers. You can also create HDR and panoramic images, reduce noise, remove dust spots, create collages and mashups etc. There are also several artistic effects to help convert a photo into a line drawing, sketch, painting, cartoon, dot image and mosaic etc. Similarly, XnViewMP also has a bounty of editing tools that cater to both advanced and inexperienced users. You can use the application to manipulate colour, either manually or automatically, and transform the image using the several filters and effects. You can also preview and compare the changes before applying them to the original image.
Shotwell can add a punch to any image with a single click.
Shotwell isn’t as comprehensive as the other tools; it has editing functions but these are designed to quickly fix an image and don’t offer much control. Shotwell allows you to straighten, crop, eliminate red eye, and adjust the levels and colour balance of an image. It also features an auto-enhance option that attempts to guess appropriate levels for the image and adjust accordingly. KPhotoAlbum is another manager that’s not designed for tweakers. In addition to basic editing tools, such as rotate, you can only apply a couple of predefined filters, such as monochrome or histogram equalisation.
Verdict DigiKam
+++++ Fotoxx
+++++
KPhotoAlbum
+++++ Shotwell
+++++
XnViewMP
+++++
Shotwell fixes images quickly, but doesn’t offer as much control as the others.
The verdict
We can find users for all the photo managers in this Roundup, and if you’re already using one of these you really don’t need to switch. That’s because there is very little to choose between them in terms of features. A couple are better for managing smaller collections, while others, in our opinion, are more useful for sorting through larger ones, eg Shotwell works very well for casual image editing and sorting through small libraries. Alternatively, XnViewMP is as good a choice for editing images as it is for cataloguing them. However, it loses out because there’s quite a learning curve in using the application and it’s overkill for casual users. Another application that requires a mandatory flip through the user manual is KPhotoAlbum, with its esoteric interface. On the plus side, it works well for sorting through and organising a large collection of images. Fotoxx also exposes too many functions in its interface. As with XnViewMP, there’s no lack of features but the interface is its Achilles heel.
1st DigiKam +++++
Web: www.digikam.org Licence: GNU GPL Version: 4.14.2
In our view, DigiKam offers a nice balance of flexibility and automation and can easily handle a large image collection. As mentioned earlier, DigiKam is essentially a collection of tools and they all excel at what they do. The application can directly import images from your camera when you connect it via USB. It also offers some advanced search options that can find similar images and recognise any duplicate images. DigiKam also includes a very capable photo editor. There’s a whole host of image-editing options such as: levels adjustment; changing colour balance and saturation; photo restoration; removing blur; refocusing; red eye correction and a lot more. In fact, it can fix issues, such as red eye, colour and lighting with just a few clicks. The application also has a great set of extensions and can be used to stitch panoramic images together, create galleries and calendars etc.
Finally, when you’ve polished and organised your albums, DigiKam will help you share your photos through email, IM and on popular image sharing websites and social networks. Our only real criticism is that advanced users will need to spend some time familiarising themselves with DigiKam’s features. But once you’ve got the hang of it, using DigiKam will be a pleasure irrespective of the size of your collection.
“DigiKam offers a nice balance of flexibility and automation and can handle a large image collection.”
Although it’s designed for KDE, DigiKam looks comfortable in any desktop environment.
Strikes a balance between form and functionality.
2nd Shotwell +++++
Web: http://yorba.org/shotwell Licence: GNU LGPL Version: 0.22
Works well for sorting through your stash and some casual editing.
3rd Fotoxx +++++
Web: www.kornelix.com/fotoxx Licence: GNU GPL Version: 15.11.1
Packs in a lot of functionality but let down by the menu system.
4th XnViewMP +++++
Web: http://www.xnview.com/en Licence: Freeware Version: 0.76.1
Takes time to customise which rules it out for the average user.
5th KPhotoAlbum +++++
Web: www.kphotoalbum.org Licence: GNU GPL Version: 4.5
An excellent cataloguer which takes some getting used to.
Over to you...
Thinking about switching to any of these photo managers or have you got another in mind? Write to us at [email protected].
Also consider...
These days, most Linux distributions (distros) ship with a basic image viewer that can be used to read and edit metadata and tags, and will likely offer some basic image-editing options. If not, then take a look at your distro’s standard repository (repos), which will be full of helpful tools that can help you sort through all your Stallman snaps. Even basic image viewers, such as Geeqie and gThumb, have organisational and editing capabilities. If you shoot a lot of RAW images you should perhaps look at RawTherapee and GTKRawGallery, which are designed as per the workflow of a professional RAW developer. Finally, if you’re used to Google’s Picasa, which used to have a native Linux version, you can use the Windows freeware on top of Linux via the wonderful Wine. LXF
Subscribe to Linux Format: choose your package
Print £51 for 12 months
Every issue comes with a 4GB DVD packed full of the hottest distros, apps, games and a lot more.
Digital £45 for 12 months
The cheapest way to get Linux Format. Instant access on your iPad, iPhone and Android device. On iOS & Android!
Bundle £62 for 12 months (SAVE 58%)
Includes a DVD packed with the best new distros. Exclusive access to the Linux Format subscribers-only area – with 1,000s of DRM-free tutorials, features and reviews. Every new issue in print and on your iOS or Android device. Never miss an issue.
Get all the best in FOSS Every issue packed with features, tutorials and a dedicated Pi section.
Subscribe online today… myfavouritemagazines.co.uk/LINsubs
Prices and Savings quoted are compared to buying full-priced UK print and digital issues. You will receive 13 issues in a year. If you are dissatisfied in any way you can write to us or call us to cancel your subscription at any time and we will refund you for all undelivered issues. Prices correct at point of print. For full terms and conditions please visit: http://myfavm.ag/magterms. Offer ends 19/01/2016
Smart home
HACK IT! Les Pounder takes us to a world where fridges can tweet, gardens can water themselves and something has an eye on your postman…
Our homes are being changed by technology, and so is the way we live. Many homes now have smart televisions, central heating systems that know when you arrive home and how warm you like each room in your house, and even fridges that tweet reminders to buy more milk. But how easy is it to create your own piece of 21st century technology? Home automation is becoming a mainstream project for many hobby hackers, largely thanks to the rise of the Raspberry Pi, and in particular the release of the Pi 2, which is a powerful and cost-effective platform. The Pi’s GPIO (General Purpose Input Output) can interface with many common electronic components, such as sensors, relays and transistors, all of which can be programmed using Python and other languages. The Pi also comes with an Ethernet connection, which supplies a stable connection to the outside world and enables remote control of projects. We also have Raspbian Linux: a stable and secure operating system that has a growing support base. Home automation encompasses many areas, eg environmental control, safety and security. But how easy is it to get started with automating parts of your home? Can anyone build home automation projects using the Raspberry Pi? Well, in this feature we’ll dip a toe into the pool of possibilities with a series of projects that make the most of the Pi and use a series of off-the-shelf components mixed with a little Python code and data that’s freely available from external sources. You’ll learn a lot of things if you work through each one: how sensors can be used to detect movement and trigger lights to turn on; how you can hack the humble doorbell so it has SMS functionality; and how to add a sensor to detect when we have post and send a picture to us via email, so we never miss a package again. All of this is made possible thanks to a £30 computer, a few sensors and a little bit of Python magic.
“Dip a toe into the possibilities with a series of projects that make the most of the Pi.”
Smart home
Texting doorbell Get an SMS alert whenever a visitor drops by.
The humble doorbell is great for alerting us to visitors as long as we’re in earshot, but we can fix that with a little Internet of Things (IoT) knowhow. For this project, we’ve used a cheap wireless doorbell (found on Amazon for a fiver). We took apart the push-button unit and found a circuit which uses a simple momentary switch powered by a 12V battery. The Raspberry Pi GPIO can’t directly work with voltages over 5V, so we first need to change the power supply for something lower. You’ll need to solder two wires onto the battery contacts for the push-button unit. When pressed, the momentary switch connects the power to ground and effectively drops the current, changing the state of the unit from on to off and creating a trigger. Using a multimeter, locate the correct pins for your unit and solder wires to them. For added strength use a hot-glue gun to keep the wires on the contacts. Attach the positive battery terminal to the 3V3 GPIO pin and the GND of the battery terminal to the GND of your Raspberry Pi. On your momentary switch attach the button to pin 17 (Broadcom
pin reference) and the other to the 3V3 GPIO pin. You will need to create a trial account (https://www.twilio.com) in order to send an SMS. Boot your Raspberry Pi, navigate to the terminal and type the following to install the Twilio API for Python:
$ sudo pip3 install twilio
Open the Python 3 application via the Programming menu, create a new file and immediately save it as Doorbell-SMS.py. We start our project by importing the Twilio API, the time library and the GPIO library:
from twilio.rest import TwilioRestClient
import time
import RPi.GPIO as GPIO
Afterwards, we need to configure our GPIO to use the Broadcom pin mapping, set up pin 17 as an input and set its built-in resistor to pull the current down:
GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN, GPIO.PUD_DOWN)
Next, we create a function that will handle sending a text message using the Twilio API. You will need to replace the account SID and auth token with your own and change the to= and from_= telephone numbers to match your requirements:
def sendsms():
    ACCOUNT_SID = "ACCOUNT ID"
    AUTH_TOKEN = "AUTH TOKEN"
    client = TwilioRestClient(ACCOUNT_SID, AUTH_TOKEN)
    message = client.messages.create(
        body="Doorbell has been rung",
        to="NUMBER TO SMS",
        from_="YOUR TWILIO PHONE NUMBER",
    )
    print(message.sid)
    time.sleep(5)
Our last section of code is an infinite loop. Inside it we wait for the current on pin 17 to drop; when it does, the function is called, triggering an SMS to your mobile:
while True:
    GPIO.wait_for_edge(17, GPIO.FALLING)
    sendsms()
Save your code and click on Run > Run Module to test.
Twilio is our bridge between the doorbell and SMS. It’s an online SMS service that we can use via a Python library.
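Before you wire anything up, it’s worth checking that your Twilio trial details actually work. This is just a quick test sketch using the same library and placeholder values as the listing above; swap in your own SID, token and numbers:
# Quick Twilio test, run from the Python 3 shell before wiring the doorbell.
# The SID, token and phone numbers are placeholders from the listing above.
from twilio.rest import TwilioRestClient

client = TwilioRestClient("ACCOUNT ID", "AUTH TOKEN")
message = client.messages.create(
    body="Test message from the Pi",
    to="NUMBER TO SMS",
    from_="YOUR TWILIO PHONE NUMBER",
)
print(message.sid)  # a message SID means Twilio accepted the request
If the SID prints and the text arrives, the credentials are fine and any later problems will be on the GPIO side.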
We purchased a doorbell unit for under £5 and used that as the basis of this wireless project.
You will need...
Any Raspberry Pi, but the A+ is best
A wireless doorbell
Soldering skills
Twilio account
The latest Raspbian OS
All of the code can be found at https://github.com/lesp/LXFPiHome-SMSDoorbell
External services
Working with external data sources and services is an exciting area to explore with your Raspberry Pi. There are many different sources, such as weather, astronomical and mobile communications data. Data sources can be used as a method of input to trigger an event in the physical world, eg turning on a fan based on the current temperature, or as an output, eg a log of air pressure changes. In this project we used the Twilio service to access SMS functionality through a Python API. Twilio is a cheap and robust service for projects and, after the free trial ends, it’s still inexpensive at $1 per month and around $0.04 per SMS. Using Twilio we can go further and
turn our simple IoD (Internet of Doorbells) into a truly powerful device with MMS (Multimedia Messages), which contain video and pictures captured by the Raspberry Pi Camera. There are other SMS providers, one being www.smspi.co.uk, which itself uses a Pi to handle sending and receiving SMS messages and comes with 2,000 free SMS.
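To sketch the MMS idea mentioned above: Twilio can attach a picture to a message via a media URL, but that image must be reachable from the internet, so you would first need to copy the capture to some public web space. The URL below is a placeholder for illustration and isn’t part of the original project:
# Hypothetical MMS variant of the doorbell alert (sketch only).
from picamera import PiCamera
from twilio.rest import TwilioRestClient

def send_mms_alert():
    with PiCamera() as camera:
        camera.capture('/home/pi/doorbell.jpg')  # snap the visitor
    # Placeholder: upload doorbell.jpg somewhere public and use that address here.
    image_url = 'http://example.com/doorbell.jpg'
    client = TwilioRestClient("ACCOUNT ID", "AUTH TOKEN")
    client.messages.create(
        body="Someone is at the door",
        to="NUMBER TO SMS",
        from_="YOUR TWILIO PHONE NUMBER",
        media_url=[image_url],  # attaching media turns the SMS into an MMS
    )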
Smart home
Entry lights Welcome lights when you open your door. You will need... Any Raspberry Pi A+ B+ or Pi 2 The latest Raspbian OS Energenie power sockets and Pi Remote https:// energenie4u. co.uk A reed switch Jumper wires Magnets All of the code can be found at https://github. com/lesp/LXFPiHomeEntryLight
Returning home to a dark house in the winter is depressing, so let’s use a few off-the-shelf components to build a bright welcome-home project. First, we need to attach the Energenie board to the first 26 pins of the GPIO on your powered-down Pi. (For reference, pin 1 is the pin nearest the SD card slot.) The board will fit neatly over the Pi with no parts hanging over. Now attach a female-to-female jumper cable to GPIO20 and GND through the unused GPIO pins. (If you want to extend the jumper cables simply use male-to-female cables until the desired length is reached.) On one end of the female jumper cable attach the reed switch, and then the other. Using sticky-backed plastic attach the switch to a door frame and attach magnets level with the switch but on the door itself, so that the switch is closed when the door is closed. Boot your Pi and open a terminal. To install the Energenie library for Python 3 use:
$ sudo pip-3.2 install energenie
Once installed, open a new Python 3 session via the Programming menu. To pair our Energenie units with our Pi, open the IDLE shell and type:
from energenie import switch_on, switch_off
Now plug in your Energenie and press the green button for six seconds. This forces it to look for a new transmitter. Back in your IDLE shell, type switch_on(1) . This will pair your Pi to the unit and designate it ‘1’, and the process can be repeated for four units. With IDLE open, click on File > New Window and save your work as entrylight.py. We’ll start by importing the libraries for this project:
from energenie import switch_on, switch_off
import time
import RPi.GPIO as GPIO
The unit from Energenie fits neatly over the first 26 pins of the Pi 2 or over all the GPIO pins of an older Raspberry Pi.
The receiver in the Energenie unit houses a relay to switch the mains power on and off.
The energenie library controls the units for our lights, time is used to control how long the units are powered for, and RPi.GPIO is the library used for working with the GPIO.
GPIO.setmode(GPIO.BCM)
GPIO.setup(20, GPIO.IN, GPIO.PUD_UP)
switch_off()
Next, we set the GPIO to use the Broadcom pin mapping and set GPIO20 to be an input with its internal resistor pulled high, turning the current on to that pin. Finally, we turn off the Energenie units to make sure they are ready. The main code uses a try…except structure to wrap around an infinite loop:
try:
    while True:
        if GPIO.input(20) == 1:
            switch_on()
            time.sleep(30)
            switch_off()
Inside the loop we use a conditional statement to check if the input has been triggered, ie the door has been opened. If true then the units are switched on for 30 seconds and turned off again.
        else:
            switch_off()
except KeyboardInterrupt:
    print("EXIT")
    switch_off()
We finish the conditional statement with an else condition, which turns the units off, and the loop continues. We close the try…except structure with a method to end the project: pressing Ctrl+C will exit and switch off the units should the need arise. With the code complete, save your work and click on Run > Run Module to test the code.
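If you would rather not poll the reed switch in a tight loop, the same behaviour can be written with the edge detection used in the doorbell project. This is an alternative sketch rather than the code from the repository above, and it assumes the same wiring (reed switch on GPIO20, internal resistor pulled high):
# Event-driven variant of entrylight.py (sketch): block until the door opens
# instead of repeatedly reading the pin.
from energenie import switch_on, switch_off
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(20, GPIO.IN, GPIO.PUD_UP)
switch_off()

try:
    while True:
        GPIO.wait_for_edge(20, GPIO.RISING)  # door opened: pin floats high
        switch_on()
        time.sleep(30)
        switch_off()
except KeyboardInterrupt:
    switch_off()
    GPIO.cleanup()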
Energenie Controlling high voltage devices is a project for those that know their stuff but with Energenie we can significantly reduce the risk. Energenie units at their core are simply 433MHz receivers that control a relay; a component that uses a low voltage to control a magnetic switch in a high voltage circuit. On the Raspberry Pi we have a transmitter which can instruct the receivers to turn on and off.
Energenie units are a safe way to control mains electricity. The standard Python library for Energenie is rather cumbersome, requiring the user to control the GPIO pins used by the transmitter in order to connect to each device and issue the correct instruction. This library has been made a lot simpler thanks to Ben Nuttal, a member of the Raspberry Pi Foundation’s Education team, and Amy Mather,
known to many as Mini Girl Geek, a teenage hacker and maker. This improved library, which we’ve used in this tutorial, requires that we know the number of each unit and can issue an instruction to one or all units at once. The library can be found on GitHub, should you wish to inspect the code and learn more about how it works. See https://github.com/RPi-Distro/python-energenie.
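As a quick illustration of the numbered-unit idea, here’s a minimal sketch to run from the IDLE shell. It assumes two sockets have already been paired as units 1 and 2 using the pairing procedure described in the tutorial:
# Addressing individual Energenie sockets by number (sketch).
from energenie import switch_on, switch_off
import time

switch_on(1)   # only the socket paired as unit 1
time.sleep(5)
switch_on(2)   # now unit 2 as well
time.sleep(5)
switch_off()   # no argument: every paired socket switches off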
Smart home
Postie watch Email a snap of your special deliveries.
Are you always backing Kickstarters but never at home to receive your rewards when the postman comes? Well, this project can alert you to a parcel via email. With your Raspberry Pi turned off, attach the camera to the camera slot located near the Ethernet port. Next, connect your Passive Infra-Red (PIR) sensor to the following GPIO pins of your Pi. Please note: we are using the Broadcom pin mapping.
PIR PIN -> GPIO PIN
VCC -> 5V
OUT -> 17
GND -> GND
Boot your Pi and use the configuration tool located in the Preferences menu. Enable your camera and ensure that SSH login is enabled. Reboot and then open Python 3 from the Programming menu. Create a new file, save it and call it emailer.py. We start our code by importing a series of libraries. (You can view the full list via our source code link; it starts with from mail_settings import * .) These handle sending email, taking pictures using the camera and timing our project. One extra library is mail_settings. This is an external library written just for this project and used to store email usernames and passwords. We’ll be using the Broadcom pin mapping and need to set this before we proceed:
GPIO.setmode(GPIO.BCM)
global file
PIR = 17
GPIO.setup(PIR, GPIO.IN)
We now create two variables: the first is a global variable that we can use between functions and the second, called PIR, stores the number of the pin used for our sensor. We set up our PIR connected to GPIO17 as an input. Next, we create two functions: the first takes a picture with the camera:
def takepic():
    global file
    current_time = str(datetime.datetime.now())
    current_time = current_time[0:19]
    with PiCamera() as camera:
        camera.resolution = (800, 600)
        camera.framerate = 24
        camera.capture((current_time)+'.jpg')
    takepic.file = ((current_time)+'.jpg')
With takepic() we capture the current time and date as the name of the file, and we slice the string stored in the variable, keeping only the text that we need. Next, we capture an image and save it with that file name. Our second function handles the email:
def email_send(to, file):
    current_time = str(datetime.datetime.now())
    current_time = current_time[0:19]
    msg = MIMEMultipart()
    msg['Subject'] = 'ALERT - AT ' + current_time + ' THE POST HAS ARRIVED'
    msg['From'] = email
    msg['To'] = to
    with open(takepic.file, 'rb') as pic:
        pic = MIMEImage(pic.read())
    msg.attach(pic)
    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.starttls()
    server.login(email, password)
    server.ehlo()
    server.send_message(msg)
    server.quit()
We can reuse the same method to capture the date and time of the alert. We create a mixed-content email with a subject that’s composed of an alert string featuring the time and date of the alert. Who the email is from is taken from the custom mail_settings library. The recipient is passed as an argument to the function and our image is attached as a file to the email. A variable called server stores the location of our mail server, in this case a Gmail account. We open a secure connection to the server, log in and then announce to the server that we’re there. We then send the message before closing the connection to the server. With the functions written, we use a while True loop to constantly check if the PIR sensor has been triggered. If this is the case a photo is taken, attached to an email and sent to the recipient. If the sensor isn’t triggered then the loop repeats. Now save your work and click on Run > Run Module to start.
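The mail_settings helper isn’t printed in full here, but from the way emailer.py uses it, it only has to define the email and password names imported at the top of the script. A minimal sketch of such a file, with placeholder values (for Gmail an app password is the usual choice):
# mail_settings.py (sketch) - keeps credentials out of the main script.
email = "YOUR GMAIL ADDRESS"      # placeholder: the account the alert is sent from
password = "YOUR APP PASSWORD"    # placeholder: an app password, not your main one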
You can house this project in any type of case, requiring only a clean line of sight to the letterbox.
You will need...
Any Raspberry Pi: A+, B+ or Pi 2
PIR sensor
Raspberry Pi Camera
Wi-Fi dongle
The latest version of Raspbian
A Gmail account
All of the code can be found at https://github.com/lesp/LXFPiHomePostWatch
Sensors Sensors are an exciting method of automatically generating input which can be used to trigger events based on movement, sound and light etc. The Raspberry Pi can be connected to many different types of sensor. In this project we use a simple, passive infra-red sensor to detect
movement. It operates by sending a current to the Pi when triggered. Another type of sensor that could be used is an ultrasonic sensor, which sends a pulse of ultrasonic sound to determine the distance of an object from the sensor. This is a great sensor but it requires a little mathematics in
order for it to work. Both the PIR and ultrasonic sensor can be picked up for less than £3 on eBay. The Pi can only directly use digital sensors as the GPIO doesn’t support analogue sensors, but for under £10 you can pick up an analogue to digital converter (ADC) which will bridge the gap.
PIR sensors can add a new form of input quickly and easily thanks to their simple operation.
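To show the ‘little mathematics’ an ultrasonic sensor needs, here’s a hedged sketch for an HC-SR04-style module. The pin numbers are arbitrary examples, and the sensor’s 5V echo line should be dropped to 3.3V with a simple voltage divider before it reaches the Pi:
# Ultrasonic distance sketch (HC-SR04 style); pin choices are examples only.
# Sound travels at roughly 34300cm/s and the echo covers the distance twice,
# so the one-way distance is (pulse time * 34300) / 2.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # example Broadcom pin numbers
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    GPIO.output(TRIG, True)    # 10 microsecond trigger pulse
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()    # wait for the echo pulse to begin
    while GPIO.input(ECHO) == 1:
        end = time.time()      # time how long the pulse lasts
    return (end - start) * 34300 / 2

print("%.1f cm" % distance_cm())
GPIO.cleanup()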
Home heating monitor
Visualise your central heating.
You will need…
Any Raspberry Pi A+, B+ or Pi 2
The latest Raspbian OS
A DS18B20 sensor (part of the CamJam EduKit 2)
Breadboard
Male to female jumper cables
4.7kOhm resistor
Wi-Fi dongle
A www.initialstate.com account
All of the code can be found at https://github.com/lesp/LXFPiHomeInitialState
For this project, we'll dunk our heads into the Internet of Things (IoT). We'll determine the temperature of our home using a cost-effective sensor, push that data to the cloud and use it to populate a graph. The sensor we're using is a Dallas DS18B20. These can be picked up relatively cheaply, but an easy solution is to buy the CamJam EduKit 2 as it includes a waterproof Dallas DS18B20. Assemble the hardware and attach it to your Pi as per the diagram (see right). Next, we set up the sensor and there's a handy CamJam worksheet (http://bit.ly/CamJamTempWorksheet) for this. To proceed you'll need a www.initialstate.com account and your API key, which you'll find in your account settings. To install the Initial State streamer type:
\curl -sSL https://get.initialstate.com/python -o - | sudo bash
We start our code by importing libraries to work with the OS, time and to stream our data to the cloud:
import os, glob, time
from ISStreamer.Streamer import Streamer
Next, we load the kernel modules for the sensor using modprobe, wrapping the Bash commands in an os.system() call for Python, and tell our code where to find the file that stores the temperature data:
os.system('modprobe w1-gpio')
os.system('modprobe w1-therm')
base_dir = '/sys/bus/w1/devices/'
device_folder = glob.glob(base_dir + '28*')[0]
device_file = device_folder + '/w1_slave'
Initial State can cope with multiple data inputs.
We attach the DS18B20 to a breadboard and supply power to its data pin via a 4.7kOhm resistor.
Next, we create a function to handle reading the contents of the file which stores the raw temperature data and return it as a variable:
def read_temp_raw():
    f = open(device_file, 'r')
    lines = f.readlines()
    f.close()
    return lines
Now we read the data and process it into something more usable. We keep the information we need and strip the rest of the data before converting it to a temperature:
def read_temp():
    lines = read_temp_raw()
    while lines[0].strip()[-3:] != 'YES':
        time.sleep(0.2)
        lines = read_temp_raw()
    equals_pos = lines[1].find('t=')
    if equals_pos != -1:
        temp_string = lines[1][equals_pos+2:]
        temp_c = float(temp_string) / 1000.0
        return temp_c
Our last section is a loop that constantly checks the temperature, performs conversions and streams the data to Initial State every minute:
while True:
    temp_c = read_temp()
    temp_f = temp_c * 9.0 / 5.0 + 32.0
    streamer.log('temperature (C)', temp_c)
    streamer.log('temperature (F)', temp_f)
    time.sleep(60)
Save the code and click on Run > Run Module to start.
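One thing the printed listing doesn't show is the creation of the streamer object that the loop logs to. A sketch of the usual Initial State setup (the bucket name and key below are placeholders; the access key is the API key from your account settings) would be something like:
streamer = Streamer(bucket_name='Home Heating',
                    bucket_key='home_heating',
                    access_key='YOUR_INITIAL_STATE_API_KEY')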
Initial state In this project we sent temperature data to the cloud using a service called Initial State. This service enables users to graph and manipulate data from multiple sources at once. We used the free tier in this tutorial, which retains our data for 24 hours before deleting it. There are other tiers which can retain data indefinitely for an unlimited number of sensors.
For our project we used one sensor input, a DS18B20, but thanks to the Raspberry Pi and its GPIO we can use many more sensors to gather data about our home, eg in another tutorial we used a reed switch. This too can be used with Initial State so that we can show data when doors are opened. So using this service we can interpret data about our home. Such things as
reed switches on windows; temperature sensors in rooms; a clamp on our electric meter and light sensors outside can be used to provide data on how energy efficient our home is and this data can be graphed for many months to show our usage over the seasons. This data can be used with a central heating system to control your home automatically using a humble Pi.
Remote CCTV Keep tabs on your possessions or pets.
For this project we'll create a remote monitor for tracking activity in a home. Before we begin, make sure that your webcam is plugged into your Pi. To update our system and install the webcam motion software, you'll need to open a terminal and type:
$ sudo apt-get update && sudo apt-get install motion
With motion installed let's configure it with:
$ sudo nano /etc/default/motion
You'll see start_motion_daemon=no; change this to yes. Now press Ctrl+o to save and Ctrl+x to quit. Now we need to make a few changes to our motion.conf file. Open it with $ sudo nano /etc/motion/motion.conf. Ensure the following is correct before saving (Ctrl+o) and exiting (Ctrl+x) nano:
daemon on
width 640
height 480
framerate 100
stream_localhost off
Reboot your Raspberry Pi before continuing. Now let's test our stream. In a terminal type $ sudo service motion start. Now, in a browser on another machine, type in the IP address of your Raspberry Pi (you can find this in the terminal by typing hostname -I) followed by :8081, so for example my IP address was 192.168.0.3:8081. You should now see a video stream in your browser. Now that we have the stream working let's embed it into a live web page. To do this we will need to install Apache. In a terminal type:
$ sudo apt-get install apache2 -y
This will also create a new directory, /var/www/, which we shall use to serve our pages. Open the text editor on your Pi. We will now write a few lines of HTML to build a simple web page.
<!DOCTYPE html>
<html>
<title>Puppy/Baby Monitor</title>
<xmp theme="cyborg" style="display:none;">
## I wonder what the dog/baby is up to?
![stream](http://192.168.0.3:8081)
</xmp>
<script src="http://strapdownjs.com/v/0.2/strapdown.js"></script>
</html>
You will need…
Any Raspberry Pi, but best with a Pi 2
The latest Raspbian OS
An internet connection
A compatible webcam
We can easily embed our stream into a webpage.
We start by declaring the document as a valid HTML document and give the page a title to identify it in our browser. Then we move to the page content, where we use a framework called Strapdown, which mixes markdown, a popular writing format, with Twitter's Bootstrap framework. In essence we can make a nice page rather quickly. We're using the cyborg style as it's dark and looks great on devices. To create a headline we use two hashes (#) and then type the contents of the headline. Next, we add an image whose source is the IP address of the webcam stream. To make sure the IP address matches that of your Pi we add :8081 at the end. We then instruct the browser to load a JavaScript file containing the Strapdown functionality. Save your file as index.html to your home directory. Open a terminal and type the following to copy the file to our web server:
$ sudo cp /home/pi/index.html /var/www/html/
Finally, we need to start our web server and restart the motion service:
$ sudo service apache2 start
$ sudo service motion restart
Now visit your Raspberry Pi's IP address (you no longer need to add :8081 to the end of the IP) and you will see a video stream from your Pi.
CCTV The Raspberry Pi has made many different types of projects possible and one that’s popular is CCTV. The official Pi Camera, along with the Pi offer a low cost, high quality and low-power project you can build quickly. In this project, we used motion to stream our webcam to a webpage, but motion can be used to search for motion and stream as well, eg
we can record a video stream to a local or cloud device which will be triggered by a burglar, baby or Jack Russell terrier. Add a Passive Infra Red (PIR) sensor to this code, such as the one used in our delivery watch project, and you have a powerful application that can alert you to incidents and record the evidence. Another great application to use with a webcam is
Zoneminder (www.zoneminder.com) which also works with the Raspberry Pi. Using Zoneminder, you’ll be able to monitor multiple streams and set up zones which will trigger an alert, eg a zone drawn around a door frame would trigger if a person used the door, but the surrounding area wouldn’t be monitored for activity.
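If you do add a PIR sensor as the boxout suggests, one simple approach (a sketch under our own assumptions, using GPIO17 as a placeholder pin and driving the motion service with subprocess) is to leave motion stopped until movement is detected:
import time
import subprocess
import RPi.GPIO as GPIO

PIR = 17  # placeholder BCM pin for the PIR output
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR, GPIO.IN)

while True:
    if GPIO.input(PIR):  # movement seen, so start recording/streaming
        subprocess.call(['sudo', 'service', 'motion', 'start'])
        time.sleep(300)  # keep the camera running for five minutes
        subprocess.call(['sudo', 'service', 'motion', 'stop'])
    time.sleep(0.5)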
This project streams video over a network connection.
Automatic plant waterer
Water your plants when the weather forecast suggests no rain.
You will need…
Any Raspberry Pi A+, B+ or Pi 2
The latest Raspbian OS
PiFace Relay Plus
A 12V peristaltic pump
A 12V 1A power supply
Barrel jack to screw terminal
Aquarium airline
Soldering skills
Wi-Fi dongle
An openweathermap.org account
All of the code can be found at https://github.com/lesp/LXFPiHomeGardenManager
Having to dig out the ol' watering can and potter about the garden might be some people's idea of bliss, but it's not very 21st century. Besides, think of the time you can recoup for another hacking project. In our final Raspberry Pi project in this feature, we're going to automate the whole business of watering the plants with a Pi linked to a weather forecast service and an add-on board that is connected to a pump. To kick off, we start by soldering connections to the terminals of our pump. These can be secured with a hot-glue gun or heat shrink. You'll need to use more wire on the barrel jack screw terminals and make a note of which is plus (+) and minus (-). On the Piface Relay Plus, locate relay 3 and insert the GND (-) of your power supply into the COM terminal along with one of the pump connections. Locate the NO (Normally Open) terminal and insert both of the remaining wires. Next, you'll need to attach the Piface Relay Plus board to your Pi and boot to the desktop. To install the software for your Piface board and use openweathermap with Python 3, open a terminal and type:
$ sudo apt-get update && sudo apt-get install python3-pifacerelayplus
$ sudo pip-3.2 install pyowm
Open Python 3 IDLE via the Programming menu and create a new file. Save your project as garden_manager.py. We start the code by importing the Piface, pyowm and time libraries with import pifacerelayplus, time, pyowm. Next, we create a variable called key and store in it our API key from http://openweathermap.org. We now need to create two functions: our first function
The Piface Relay Plus Board offers a number of relays that can be directly controlled via Python.
controls the pump attached to the Piface board. This function we're calling pump and it takes one argument: how long it should water the garden.
def pump(duration):
    pfr = pifacerelayplus.PiFaceRelayPlus(pifacerelayplus.RELAY)
    pfr.relays[6].toggle()
    time.sleep(duration)
    pfr.relays[6].toggle()
We use a variable, pfr, to shorten the function call for using a relay. We then toggle the relay on, pause using time.sleep() to permit the water to flow, and toggle the relay off again. Our second function retrieves the weather forecast for the next 24 hours. It takes two arguments: our location and the number of days to forecast. We use the openweathermap API key stored earlier, and a further two variables contain the output of the forecast functions.
def forecast(x, y):
    owm = pyowm.OWM(key)
    fc = owm.daily_forecast(x, y)
    f = fc.get_forecast()
We use a for loop to iterate over the forecast data. This feature comes into its own when used to forecast weather for multiple days:
    for weather in f:
        rain_forecast = str(weather.get_status())
Finally, the function uses an if…else statement to check the forecast. If rain isn't forecast this information is printed to the shell before calling the pump() function. If rain is forecast,
Working with high voltages In this project we used a 12V power supply to power a pump, but you may be asking why we had to use a relay? The Pi can’t tolerate voltages over 5V and to use anything above that would risk damaging the GPIO or the Pi itself. A relay is a magnetic switch triggered by a circuit connected to the Raspberry Pi. This circuit is 5V tolerant and when activated it enables a magnet which pulls a switch inside the relay closed.
There’s no connection between the Pi and high voltage circuit, which means we can safely control high voltages. We used the Piface Relay Plus board which comes with four relays attached. Alternatively, you could use a relay on a breadboard, but for safety reasons we would suggest only using a maximum voltage of 12V on the board, as anything more would require a more robust solution.
this information is printed to the shell before waiting 24 hours and checking again.
    if rain_forecast != 'rain':
        print('Rain is not forecast')
        pump(300)
        time.sleep(86400)
    else:
        print('Rain is forecast')
        time.sleep(86400)
Relays are not the only solution: a transistor can also be used to control higher voltages. Transistors work in a similar way to a relay, in that they isolate the high-voltage circuit but are controlled by a low-power circuit. Both relays and transistors are low-cost methods of controlling high-voltage projects. Remember, if you are unsure about a circuit, ask someone who knows before applying power!
Finally, we need to create a loop that will call the forecast() function for Blackpool for the next 24 hours. Of course, you can change the location to wherever you live.
while True:
    forecast('Blackpool,uk', 1)
As usual, you will want to save your code at this point and click on Run > Run Module to test. For testing it would be prudent to reduce the time.sleep() duration to something much shorter.
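Before leaving it to run unattended, it's also worth a quick bench test from the Python shell (assuming the wiring described above), for example a short manual run of the pump:
pump(5)  # run the pump for five seconds, then switch the relay off again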
Taking the Pi further A world of home automation awaits.
Home automation covers an extensive range of products, services and new concepts. If you've been bitten by the IoT bug and wish to expand your knowledge and dive into a home automation set up then here are a few areas for further research.
X10
This is a protocol for electronic devices that primarily use powerlines to send control and signalling data. X10 has been around since 1975 and while it may not be the latest protocol it does have a large user base, thanks largely to the low cost of components. X10 also enjoys support on a range of boards, including the Arduino and Pi, which enables it to be used to control appliances in your home in different ways.
Nest
Purchased by Google in early 2014 for $3.2 billion, Nest is a big player in home automation. Its range of products started with a central heating control system that linked to mobile devices. Now its range covers smoke and carbon monoxide detectors and an IP camera. The main issue with these devices is their rather high cost, eg a simple smoke/carbon monoxide detector retails for £89. What you pay this high price for is convenience: all of the hacking has been done for you, packaged in a sleek device.
Nest has added a lot of style to central heating.
Wireless Things Open Pi
The Open Pi project uses the lesser-known member of the Pi family, the Compute Module. This model is a smaller SODIMM package that's ready for embedding in a project, and Open Pi places this already slim module into a small plastic case. The Open Pi is designed to enable hackers to use it for various IoT projects, using a mix of Bluetooth Low Energy, an infra-red receiver and an SRF shield, which offers long-distance radio communications to devices equipped with an SRF. Devices that are available include: an Arduino-compatible board called Xino RF; an SRF GPIO add-on for the Pi and a USB stick for computers. The SRF and its higher-powered version, the ARF, can transmit over distances far greater than standard wireless IoT devices.
Bluetooth LE
Bluetooth has been with us for many years but recently we’ve seen a new low energy version that offers a low-power short distance connection called Bluetooth LE. These have been made into beacons, like Estimote (http://estimote.com) that can be programmed to react to Bluetooth devices in inventive (or intrusive, depending on your viewpoint) ways, eg they will broadcast an open Bluetooth connection which can push data to your device. These can be used in a house setting to recognise when a user returns home and can interact with appliances via X10 to set up your home ready to relax. These beacons can be built using a Raspberry Pi and a Bluetooth LE dongle, enabling a low-cost and non-proprietary solution of your own.
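As a sketch of the scanning side (an assumption on our part: this uses the third-party bluepy library rather than anything from the feature itself), a Pi with a Bluetooth LE dongle can listen for nearby beacons like so:
from bluepy.btle import Scanner  # third-party library, typically installed via pip

scanner = Scanner()
for device in scanner.scan(10.0):  # scan for ten seconds (usually needs root)
    print(device.addr, device.rssi)  # address and signal strength of each beacon seen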
Kore and Yatse
One of the most common Pi projects is a media centre, especially since the release of the Pi 2 in early 2015. Instead of using a wireless keyboard and mouse, why not use your
A Bluetooth LE can be used to push data from objects in your home as you wander from room to room.
Android phone to control your entertainment? Kore is the official remote control for Kodi and Yatse is an unofficial yet powerful app. Both apps enable you to navigate your media collection using a well-designed and intuitive interface. Yatse also has a series of plugins to enable gesture control and push SMS messages to your TV.
Xbee
This is one of the easiest ways of automating your home. Xbee uses only four connections (power, ground and two for data), and any device can talk to an Xbee via a serial link. Xbee has been used by a lot of early home automation hackers, who've used it to integrate Arduinos to make wireless devices that aren't connected to the internet but are still automated.
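To give a flavour of that serial link (a sketch under our own assumptions, using the pyserial library and a USB XBee adaptor appearing as /dev/ttyUSB0), sending a reading to a remote XBee is just a matter of writing bytes to the port:
import serial  # the pyserial library

xbee = serial.Serial('/dev/ttyUSB0', 9600)  # placeholder device node and a typical baud rate
xbee.write(b'temperature:21.5\n')  # whatever is written here emerges from the paired XBee's serial pins
xbee.close()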
Particle
The Particle range of boards started life as the Spark Core, an Arduino-compatible board which featured built-in Wi-Fi for
Any device can talk to Xbee via serial link.
Creating mobile apps to control your home
Creating your own interface for a project is quite an undertaking. For instance, you'll need to consider what language, framework and protocol to use. But what is common among all these considerations is the likely need to control your home automation project using a mobile device. These devices have taken over our lives and it's now just as common for a user to control their TV, music and lighting from their tablet or phone as it is for them to surf the web via such devices. So how can we control our home automation project with a mobile device?
Build your own Android application
Coding Android apps is an involved process that requires downloading the Android Studio SDK (Software Development Kit) and learning how to write apps using it. There is more information at https://developer.android.com/training/index.html. A simpler way to write Android apps is to use MIT App Inventor. This uses a web-based development environment which can be used to design the layout and content of a project and code the project using a block-based interface
similar to Scratch. The interface, while looking simple and child-like, hides a powerful framework that has access to Google's services. Our first project uses a speech-to-text application which uses Google's servers to process your voice into text with ease. This project can be adapted to send an SMS to a dedicated number, such as Twilio, which can then be pushed to a Pi controlling your home. This means that even from the office you can make sure the central heating is ready for your return home. You can learn more about MIT App Inventor at the official website http://appinventor.mit.edu/explore.
Building a GUI for your Pi
Recently, the Raspberry Pi Foundation released its latest product, a seven-inch touchscreen for under £50. The Raspberry Pi attaches to the back of the screen and can be powered with just one power supply. Building a user interface for the touchscreen can be accomplished in Python in many ways; two common methods are: first, the Tkinter framework for creating menus and dialogs in a
similar manner to a traditional OS and, second, creating a custom interface using pygame, a library for media/video game creation, eg Spencer Organ used the pygame library to create a radio player with a custom user interface (http://bit.ly/PiInternetRadioPlayer).
Flask
Flask is a micro web framework for Python that will slot into your project with relative ease and convert a project into a web app that will work with all devices using a browser. Flask bridges the gap between the web and your project by running a server on your Pi that intercepts input on a web page, eg a hyperlink or button, and calls a Python function to perform an action. To illustrate, here is the code to control an Energenie socket using Flask (see http://bit.ly/EnergenieFlask), created by Ben Nuttall from the Raspberry Pi Education team.
from flask import Flask, render_template
from energenie import switch_on, switch_off

app = Flask(__name__)

@app.route('/')
def index():
The popular ESP8266 used as a cheap Wi-Fi capable board.
control and programming from a remote location. After a successful Kickstarter campaign the team produced a cheaper new version called the Photon, which provides all of the functionality of the Spark Core at half the price. However, the latest board, called the Electron, also offers connections over a 2G or 3G cellular network and enables data to be sent and received from isolated locations, eg an Electron could be placed inside a doorbell where it could send an SMS without any external SMS providers. So if you need to tweak your project from the beach, you can, uploading the code directly to the board at your home.
ESP8266
It's no exaggeration to say that this board has changed home automation and IoT forever. The ESP8266 is a cheap Wi-Fi-capable board that can be programmed using the Arduino IDE, which enables quick integration into any existing project. The ESP8266 has become the go-to board for home automation hackers on a budget, because there are
    return render_template('index.html')

@app.route('/on/')
def on():
    switch_on()
    return render_template('index.html')

@app.route('/off/')
def off():
    switch_off()
    return render_template('index.html')

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
Here we can see that we're importing the Flask and Energenie libraries and creating an instance of the Flask class. Next, we use the route decorator to tell Flask which URL will trigger each of our functions. We go on to create three functions that handle loading the index.html template and switching the Energenie devices around the home on and off. Last, we run the Flask app in debug mode, which enables verbose output to the Python shell, and set the app to accept connections from all IP addresses. The Python code works with an HTML template that contains the layout and content of the web interface. CSS can also be used to style the web page.
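As a rough illustration of driving those routes from another machine on the network (our addition, not part of Ben's listing; the address below is a placeholder for your Pi's IP, and Flask's development server listens on port 5000 by default), the requests library will do:
import requests

requests.get('http://192.168.0.3:5000/on/')   # switch the Energenie sockets on
requests.get('http://192.168.0.3:5000/off/')  # and off again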
development boards which both enable access to the GPIO and are programmable using the Lua scripting language and MicroPython.
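To show how approachable the board is, here's a minimal MicroPython sketch (our assumption: an ESP8266 flashed with MicroPython firmware, with placeholder network credentials) that joins your Wi-Fi network:
import network

wlan = network.WLAN(network.STA_IF)  # station mode, ie join an existing network
wlan.active(True)
wlan.connect('my-ssid', 'my-password')  # placeholder credentials
while not wlan.isconnected():
    pass  # wait for the connection to come up
print(wlan.ifconfig())  # print the IP address details once connected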
Raspberry Pi Zero
Raspberry Pi Zero is a new £4 unit for creating smart connected home devices.
The Foundation has managed to do it again, introducing a £4 version of the Raspberry Pi with a lower-power, cut down Zero model (see p20 for the full review). This is going to make the Pi the go-to device when it comes to creating smart connected devices around the home. With power draw down to as low as 70mA while idle (without HDMI or other peripherals connected) it’s even practical to power the Pi Zero off AA batteries. This is still significantly more power than an Arduino device, but then the Pi is way more capable! The Pi Zero also supports the standard GPIO and Linux software which makes it easy to prototype designs on standard Pi boards, optimise them and then deploy on the Zero. At this point we hope you have enough ideas, starter projects and technical knowhow that you can turn your home into an automated nirvana. LXF
Build a smartphone app to control your smart home cat?
OggCamp 2015
Les Pounder takes us on a tour of the crown jewel event of the UK's Linux and open source community.
OggCamp is much more than just an event; it's where a community comes together for knowledge, entertainment and socialising. Founded in 2009 from the ashes of the popular LUG Radio Live event, OggCamp was formed by the podcasters behind Linux Outlaws and the Ubuntu Podcast. The Connaught Hotel in Wolverhampton first played host to the event, and from this humble beginning OggCamp has travelled the UK, but its heart is firmly in the city of Liverpool. For 2015, OggCamp once again took place in Liverpool at the LJMU John Lennon Art and Design Building, which has now hosted three OggCamp events. Saturday morning saw a large crowd of free software enthusiasts arriving, and the event drew delegates from around the world, with countries such as Ireland, the Netherlands and the USA particularly well represented. OggCamp 2015 also saw the return of the Hardware Jam, an event that originally appeared at OggCamp 2012, which was one of the first places to buy a Raspberry Pi – when it first launched – without waiting six weeks for delivery, thanks to Pete Lomas, co-founder of the Raspberry Pi Foundation. In 2015, the Hardware Jam had Raspberry Pi Minecraft sessions and a robot hack session using Arduino, which were all presented by Mark Feltham. David Ames and Sarah Zama also led classes for people eager to learn about hardware hacking. OggCamp veteran Ken Boak was also back and worked with children to create a series of hacks based on toys. Exhibitors are a key part of OggCamp and this year we saw the return of Ragworm, the
community PCB manufacturer, and the Hacker Public Radio podcasting team. New exhibitors were Entroware, a company based in Liverpool that produces a range of Ubuntu computers, and a local radio enthusiasts' group who ran a two-day radio induction course, which covered the basics of transmitting and receiving long-distance radio signals. As usual, the unconference element of OggCamp brought out the most interesting talks, with notable speakers such as Ben Nuttall from the Raspberry Pi Foundation talking about GPIO Zero, a new, easier framework for Python projects, and Alan Pope
explaining how the Ubuntu phone app store was 'owned' recently. This year we heard Laura, aged 9, ask the community what she should be learning in school. A brave and topical question. Laura reflects the rise of maker skills in our schools; children are now 'digital leaders', helping their peers to grasp new technology. A community has grown around OggCamp and the event organisers began actively working with it back in 2013. This year, OggCamp turned to the community to help augment the organising team, which had been depleted because of other commitments and ill health. The community responded, and the organising team behind the recent Liverpool Makefest, Mark Feltham and Caroline Feltham-Keep, joined the team to run the successful Hardware Jam, along with members of the Liverpool art community who designed the signage and merchandise for OggCamp. This year there was no live podcast recording, signifying the shift of OggCamp organisation from the podcasters to the community. Instead, we had a live panel hosted by Joe Ressington, who hosts a number of Linux and open-source podcasts. Ressington was joined by podcasters from the community who answered questions from the audience in a similar format to Question Time but with more Linux. The OggCamp raffle was well attended, with everyone eyeing up a laptop from Entroware. The raffle is an important part of OggCamp as it generates money for keeping the event running (along with the excellent work of 'Team Merch' who come up with ingenious merchandise every year for attendees to buy). At each OggCamp there is a team of unsung heroes who battle behind the scenes to ensure that the community has a great weekend. The OggCamp crew return year on year to help the event run smoothly and each bring their own identity to the event: the friendly smile that welcomes the newcomer to the community; the person who ensures the projector always works. These people are members of the OggCamp crew and they make everything happen because they love the community. OggCamp 2015 was another successful event and a feather in the cap for all involved. Note: Les Pounder is the 'OggCamp Chief' but after five years of wearing the shiny hat he's decided to step down, with the hope that a member of the excellent OggCamp community picks up the reins.
DR ANDREW ROBINSON: CODEBUG Linux Format: Can you tell the readers more about yourself? Andrew Robinson: I’m Dr Andrew Robinson, Honorary Research Fellow at the University of Manchester and I also run a start-up company which was behind the Codebug project.
Interview
LXF: So what's Codebug? AR: Codebug is a cute wearable microcontroller board and the idea is that a beginner can set it up and program it in less than a minute. Codebug uses a PIC microcontroller and has a series of 25 red LEDs creating a matrix. Along the edge, we have connections that enable users to hook up components such as LEDs using crocodile clips. Programming your Codebug is handled via a web interface and requires no software installation. Code is compiled online and then downloaded to the user's computer. From there they can plug in their Codebug, which will appear as a USB drive, and copy the code across. The code will then be ready for use. LXF: We already have many different devices, why do we need Codebug? AR: To answer that question let me tell you where Codebug came from. A few years ago I was hosting Raspberry Pi workshops with teachers. They had great enthusiasm and wanted to get into it, but what we found from these workshops was that it took some time for them to achieve something exciting, such as turning on an LED. It felt like we were losing them along the way as there were no quick rewards to keep their interest. As an engineer, I looked at the problem and the set up of supporting equipment was an issue. So I looked for the lowest common denominators and they were access to the web and USB. Using a web interface we can program
Codebug and via USB we can transfer the code across. Codebug comes with ready-to-go basic projects, such as name badges and animations, that can be created in under a minute. Codebug is a gentle introduction, an entry point, to physical computing and will hopefully inspire people to go further with boards such as the Raspberry Pi.
LXF: So here we are at Oggcamp, how did you find out about this event? AR: I read about the event via a feature in a magazine [what are those? –Ed] a few years ago. It seemed like a vibrant, interesting place where like-minded people can meet and exchange ideas. It seemed like the place to see what the community were making. LXF: Has OggCamp lived up to your expectations? AR: I am really pleased with OggCamp. Right now I am really busy with Codebug so taking a ‘weekend off’ to visit an event is a big commitment of time. But what I found at OggCamp was that everyone had a level of care and respect for each other’s projects and ideas. The community is interested in each others work and made time to listen and that was evident during the weekend. Everyone was sharing ideas and wanting to be involved in them. The whole event has a welcoming feel.
LXF: Codebug is primarily aimed at children; what resources are there to help them get to grips with Codebug? AR: Integrated into the Codebug website is a whole range of step-by-step tutorials and sample projects, and these are all based around physical activities, not dry exercises in logic. LXF: Projects like Codebug and Raspberry Pi are bringing Maker culture into the mainstream – is that a good thing? AR: Absolutely, one of the purposes of Codebug was to democratise the Internet of Things (IoT), in a similar way to how WordPress has enabled anyone to build a website. Ten years ago it was the real hackers who could build their own website; now anyone can build a site with extended functionality. With the IoT and hardware hacking culture we need to get to a similar position where anybody can build an automatic pet feeder rather than contracting someone to make it on their behalf. Maker culture is all about people empowering themselves to solve their own needs and fixing their own problems.
LXF: Community is an important aspect to an event: does OggCamp have a similar community to that of Maker Faire? AR: OggCamp stands out as a more accessible community; everyone was keen to share and wanted to work with you. The mix of stalls on offer neither detracts from the event nor from each other. Offering a great balance. LXF: Did you get chance to network? AR: Indeed, we had time to talk to Warwick University who are running classes with primary schools, where children can build fun projects, they are interested in using Codebug as their platform.
MARTIN WIMPRESS: UBUNTU MATE Interview
Linux Format: Could you tell us a little more about yourself? Martin Wimpress: I’m at OggCamp today to talk about my Ubuntu Pi Flavour Maker.
LXF: So what’s Ubuntu Pi Flavour Maker? MW: It’s a set of tools that I have built to port all of the Ubuntu distributions to the Raspberry Pi 2. Right now we have Ubuntu Mate, Lubuntu, Xubuntu and Server working perfectly but Ubuntu Gnome, Kubuntu and Unity are still work in progress due to a lack of a 3D driver. But work is progressing and we should have this soon as the framework to build the images is already in place. LXF: That sounds like a very ambitious project, what is your end goal for Ubuntu Pi Flavour Maker? MW: It’s really a drive to improve the adoption of Ubuntu on the Raspberry Pi by providing the best desktop environment for the Pi. LXF: Raspbian is considered the official distribution so what are you doing to help users adopt Ubuntu?
LXF: How did you test Ubuntu Mate? MW: I gave out 24 SD cards running Ubuntu Mate 15.04 at a Raspberry Jam and then went back every couple of months to track their progress and collect feedback. I found that the kids were frustrated because there was no YouTube or Minecraft while the makers had no GPIO development tools and other users were frustrated because it was just a little bit too different to Raspbian. So with the latest version, 15.10 we have addressed these issues and now we have the likes of Minecraft, Sonic Pi and the latest version of Scratch that will work with the GPIO. For the makers, we’ve also included the RPi.GPIO Python library and we use the Raspbian kernel so that we match exactly with the Raspbian kernel.
MW: I have side-ported a number of the Raspberry Pi Foundation's applications, tools and libraries to Ubuntu, and these are now embedded in the various flavours of Ubuntu for Raspberry Pi 2. So, eg, a user who has been working with Sonic Pi can easily carry on their work through Ubuntu. LXF: You are well known for your work on the Ubuntu Mate project: is your goal to have Mate or another flavour be the leading Raspberry Pi distribution? MW: I think that Ubuntu on the Pi will run in parallel to Raspbian, as Raspbian offers a fast, simple and lightweight distribution that gets stuff done, especially for teaching. Ubuntu Mate was a continuation of a project that I started on the original Raspberry Pi, where I ported the Mate desktop to the ARM platform via Arch Linux. But the original Raspberry Pi wasn't powerful enough to offer a desktop environment replacement, so when the new Pi 2 came along that finally gave me the power that I needed to finish the project.
LXF: So Ubuntu on the Pi is not an official project? MW: No, it’s a community build, but I have contacted the Lubuntu team and they will begin official support for the project soon. LXF: Ubuntu Mate looks more like a serious work environment, but with the recent release of Raspbian Jessie we’re seeing a more ‘grown up’ Raspbian with new features such as sudo-less GPIO access. MW: I’ve done something similar with Ubuntu Mate where I have created groups with GPIO, Video and SPI access, it then creates a set of
udev rules for those groups. I set a hook so that when a new user is created, they are added to the correct groups. LXF: By having multiple Raspberry Pi distributions, do you think we are risking a fragmented user base? MW: I think that it's inevitable. If we think of the 800+ distributions listed on distrowatch.com, then anyone can make a Linux distribution. LXF: For those interested in developing for Mate, what is the best way to get involved? MW: I'd like to see the Ubuntu Pi project movement move forward, so any help with packaging and porting from Raspbian to Ubuntu would be great to see. I would love to see Ubuntu Mate and Raspbian become similar but also provide access to the Ubuntu repositories. LXF: So here we are at OggCamp for another year, what has been the most interesting part of the weekend for you? MW: I really enjoyed Stuart Langridge's talk on publishing podcasts. Being a podcaster myself, and having friends who are podcasters too, this talk generated lots of debate. It was clear, based on those discussions, that a number of audience members are going to look at the scripts that Stuart has created and are looking to create a general-purpose community podcasting tool. I also loved the lightning talks and the exhibitors who really provided lots of information about their particular projects. OggCamp is all about diversity and it is my highlight of the year.
PICH & WILSON: ENTROWARE Linux Format: Thanks for taking the time to talk to us, please can you tell the readers who you are? Anthony Pich: Hello, I’m the co-founder of Entroware. Michael Wilson: I am also the co-founder.
Interview
LXF: Can you tell us more about Entroware? AP: Entroware was founded to offer a UK-based source of Linux laptops, desktops and servers. We specialise in Linux – primarily Ubuntu – as we saw a growing need for customers to choose the right Linux machine for their needs. We want to show what Linux has to offer, and if there are pre-built packages for customers then Linux uptake is much more likely to propagate. We offer a range of desktops and laptops that cover the broad spectrum of needs, from portables running quad-core Intel Celerons to monstrous gaming rigs with GTX 980 graphics cards. But we also provide a bespoke service where customers can handpick their components and build their dream specification. LXF: So how old is the business and who are you competing against? MW: Just under 18 months old and we have reached a stable level of turnover. AP: Yeah, there are other providers of Linux machines, namely System76 who are based in the US. We thought that there should be something similar in the UK as there's a need to provide devices with modern hardware to the UK Linux community. LXF: Your machines are designed with Ubuntu in mind, why is that? MW: Ubuntu seems to be the most user-friendly operating system and especially suits those new to Linux. We also support Ubuntu Mate for those users who feel that Unity is just a step too far; Mate provides a similar experience to Windows.
AP: Ubuntu has for a number of years been seen as the de facto standard distribution. LXF: Would you ever consider supporting other distributions? MW: Unofficially, we do support other types of Linux distributions and we will work with the customer to tailor the best package. AP: But our main focus is on Ubuntu for the time being and that's mainly due to logistics. We need to ensure that our machines are tested and pass the QA (Quality Assurance) process, and right now we use Ubuntu and Ubuntu Mate, which offer that level of assurance for us and the customer. LXF: So here we are at OggCamp and you are the headline sponsor. What were the driving factors behind your decision to support and come to OggCamp? AP: Mainly it was the great OggCamp community; they were our biggest reason to be here because we are also part of the greater Linux community which OggCamp promotes. LXF: So OggCamp is your chance to undertake outreach to your community? AP: Yeah, we need better brand awareness among the Linux and open source community. We've had quite a lot of interest from the OggCamp community and I think that we are on the right track. LXF: I see on your stall that you have a Steam Machine running Borderlands
2 and the Steam Controller, which is no doubt helping your outreach? AP: Our Steam Machine is a prototype; it's really a big tease for what we have to come. Right now we can't say too much, but our Steam Machine is proof that gaming on Linux is entirely viable. LXF: There are more games coming out for Linux, so will we see a surge in Linux gamers? AP: Basically anything based on the Source engine library is Linux compatible, for example Metro Last Light, which is a graphically impressive first person shooter. MW: I think that we will see more gamers using Linux, especially after the official release of Steam OS and more OEMs shipping their own vision of a Steam console. It will force game makers to seriously consider Linux. LXF: A Unity Editor was recently released for Linux – do you think this will help developers create cross-platform games? MW: Unity is an appealing option for developers; it's viable to export a game to Linux as it requires just a few clicks. LXF: Have you managed to catch any talks? AP: Sadly not many, I did catch the last half of the podcaster panel. But our focus this weekend has been to engage with the OggCamp community, because that is where we get feedback which helps us improve our products. The community have been great and given us lots of useful feedback. We have had great conversations with the other exhibitors, such as Ubuntu and Ragworm. It's great to see so much going on in the maker community. LXF: What's your take away from OggCamp? MW: The open source community. These are the people we looked up to when we started. LXF
Linux laptops
Buy a Linux laptop TUX4U
Want a laptop installed with Linux? Neil Mohr says good luck as he looks at what’s wrong with the laptop world.
So you want to buy a Linux laptop? Cue feature where we go and torture various retail outlets' poor, unwitting members of staff about Linux. Oh how we'll laugh. No, we're not doing that, as it's hardly fair and you always end up with one that does know
Dell offers a small but well-chosen selection of Ubuntu-equipped laptops.
about Linux and they're just sad as there's nothing they can do about the policy of a national chain. Sigh. Instead we're going to be looking at what real options are out there and what other approaches you can take, from self-installing to building a system in a truly open-hardware way. Of course, there are solutions such as Chromebooks and Android devices, but many Linux users snub these options as, even though they might use Linux, they're not true GNU/Linux distributions (distros), being rather bastardised Google versions lacking the true software freedom that traditional distros with the GNU element provide. This is yet another indicator as to how vital the GPL is in keeping systems free from proprietary lock-down, despite them being open source. We're certainly not going to entirely dismiss what have become goliaths in the consumer
market. As you'll know, Android is the most widely used OS in the world (perhaps ever) and Chromebooks are shaping up to be a real contender in the laptop market, carving out as big a slice as Apple commands. Our main interest, though, is with standard laptops. We're not going to consider gaming models, as their discrete GPUs can cause issues, so we'd recommend avoiding anything with discrete AMD or Nvidia graphics. There are specialist driver builds, such as Bumblebee for Nvidia GPUs, but it just seems like making a rod for your own back. If you want to game, you should stick with a desktop unit. We'll also take a peek at some of the interesting crowd-funded models that are redefining how laptops are made. And, of course, the evil secret is that some box shifters, e-tailers and specialist builders do offer native Linux laptops, so let's see what we have in store…
So perhaps we should kick off by asking the question: why can't you buy a Linux laptop? As we're about to see, you can, just not in a high street store. The horrible truth is retailers are scared of Linux. Before hipsters started snapping up Apple products, the truth was that anything that doesn't run Windows requires extra support, and that means more money. Before a retailer is going to stock something that costs it money it'll want someone to pay upfront to cover those costs. Over the years Microsoft has done this; it's actually paid companies a lot of money to stock its products or push them as the top choice. In the Linux world, at least in the desktop arena, there's no single body capable of that type of marketing exercise. Even if manufacturers wanted to make Linux-powered devices there was nowhere that would sell them. Again, for manufacturers it's a similar story: creating a specific Linux device is going to incur additional support costs, and potentially for very little return. But that doesn't stop certain manufacturers offering Linux options. We've seen the Dell XPS
13 [see Reviews, p17, LXF198] and there's the Dell Precision range consisting of the M2800, M3800 and M6800. None are really run-of-the-mill laptops, and they are all aimed at the workstation market. The M2800 does start at around £1,000, but the other two go from £2,000 and upwards, and they're fine machines for high-end work. There is good news in the shape of the new entry-level Dell Inspiron 3000 Ubuntu, which starts at £200 and offers Ubuntu as an option. Based on an Intel Celeron N3050 with 4GB of memory, we suspect it's a spin-off of Dell's Chromebook range and could be a solid, if uninspiring, laptop. The big-name options don't stop at just Dell. HP has recently (from around mid-2014) been experimenting with a small range of low-cost laptops, starting with the unimpressive but functional HP 255 G1 [see Reviews, p20, LXF188], which came preinstalled with Ubuntu 12.04 LTS and had a clever recovery system. The latest-generation HP 255 G3 has been joined by the HP 355 G3 and the HP 455 G3, going from £199 up to £300. While we'd hesitate to recommend the base machine with its AMD A4-5000 APU, the HP 455 G3 with the A10-7300 and 8GB of memory actually competes well with the low-end Intel Core i5 4200U, both in processing speed and 3D gaming capabilities. In fact, it can play recent games like Alien Isolation at their lower-end settings, making for an impressive all-round laptop for the money.
You can’t directly buy a Linux Lenovo laptop, but Ubuntu and Lenovo work to ensure many are certified.
Incredibly, that's the end of the tale for Linux from the box shifters. Lenovo has a part to play, but we'll come to that later: it won't ship you a laptop with Linux installed, though a couple of its workstation-class models do have Linux as an option.
Indie libre
Your next best bet is to go for a Linux laptop from an independent system builder. Our North American readers are quite well catered for here by companies such as System76.com, puri.sm and zareason.com, which all supply custom-built laptops running Linux. In the UK there's http://minifree.org, which offers the interesting if ageing Libreboot Lenovo X200. It's certified by the FSF (and used by no less than Richard Stallman himself) as being entirely open and free, using Libreboot firmware and the Trisquel distro. If you're a die-hard software freedom fanatic, it's one of the few options on the market that you can buy off the shelf and that ticks all the boxes. But you do pay the price somewhat in terms of out-and-out performance, down to the ageing yet competent
HP has been testing the water with its Ubuntu laptops and has started to offer more like this well-equipped HP 455 G3.
Chromebooks, they’re Linux right? We admit that we’ve become quite the fans of Google Chromebooks, [see Roundup, p24 LXF202] but we also perfectly understand if you don’t feel the same way. We’re also a little confused as to why we love them so much, but largely it’s that they just work, are generally lowcost and run Linux. It does also help that you can use Crouton to add real GNU/Linux via a chroot in the form of Ubuntu, but is it as good as a native installation? So Chromebooks run Linux at their heart and Google honours the open source licences, so it maintains the Chromium OS project. This means if you feel the need you’re able to ‘make’ your
own Chromebook with your custom hardware [see Tutorials, p70, LXF199]. Chromebooks have come a long way and are on target to capture 7% of the laptop market in 2016. They’re ideal if you want to do browser-based activities, writing, reading, but they’re capable of playing films, video and music. So ultimately versatile enough. The Crouton system [see Tutorials, p82, LXF204] maintained by Google engineers creates a chroot install of Ubuntu. It works well but don’t expect video acceleration on non-Intel models. A recent Crouton extension even allows Ubuntu to run windowed alongside Chrome OS. So if you want access to all your GNU/Linux tools
they are there, and a Chromebook compared to a low-cost generic laptop tends to win in build quality and design.
Want something that’s a bit more flexible? A Chromebook can offer ease of use and Linux power.
(at the time it was awesome) Intel Core 2 Duo P8400 processor. Even here this isn't open hardware; it just ticks the software freedom boxes. It's not like you can go off and build your own Lenovo Libreboot X200: it's not open hardware. (Note: we'll look at open hardware options towards the end.) The idea of bootstrapping your own PC from scratch is deeply complex, but is something we're going to look at in a future issue. So stay tuned for that subject.
Install it yourself
At this point you might be thinking it'd be easier just to install Linux yourself – and you'd be right. We're going to take a bit of time now to look at how you can pick a laptop off the shelf that you can be certain will run a GNU/Linux distro with no hassles. To kick things off, Ubuntu maintains a tested compatibility list of not just laptops but also servers and desktops at www.ubuntu.com/certification/desktop. This covers a range of manufacturers including Lenovo, Dell, HP and Asus, with the list specifically supporting Ubuntu 12.04 LTS and 14.04 LTS.
Drilling down into the list you'll find each model gives you a full rundown of components, BIOS and additional notes. It's a solid start: just for Ubuntu 14.04 LTS there are over 160 laptops listed, and more than 500 for 12.04 LTS. This is the ideal position to be in, knowing that each component of a laptop is supported by Linux. If you're thinking of buying a laptop that's not on the Ubuntu list, then the thing to do is check each component the laptop uses against the Ubuntu certified component list at www.ubuntu.com/certification/catalog. It's not an exhaustive list, but it is a start. You can take things further by scanning through http://linux-drivers.org, while individual distros provide their own lists of supported hardware, eg Debian has a database (https://wiki.debian.org/Hardware), as does OpenSUSE (https://en.opensuse.org/Hardware) and Linux Mint (http://community.linuxmint.com/hardware). More general advice is to choose an all-Intel laptop. It seems harsh on AMD, but Intel has a sound track record of supporting Linux and releasing drivers as the hardware is released. Many key issues are related to wireless card firmware and drivers; similar, but less common, are problems with Ethernet drivers and discrete Nvidia and AMD GPU drivers. If you're thinking of
The flagship Linux laptop from Dell is the Ubuntu-powered Dell XPS 13 Developer edition.
The Libreboot X200: Take a classic Lenovo X200 and install it with complete open software freedom.
buying any laptop ensure Linux supports these and avoid discrete GPUs, if possible. A final but important point is that some laptop manufacturers are locking down the UEFI BIOS, making it impossible to install any other operating system. It’s hard to know how widely this trend is spreading, but the advice is do not buy a laptop before checking you’re able to boot off another device. Even if Secure Boot can’t be turned off, you’d still be able to install Ubuntu, OpenSUSE and Fedora as these have keys thanks to our benevolent overlord Microsoft. Mumble, mumble… There is a caveat here that these keys don’t have to be installed in the UEFI – they usually are – if not they can be installed from Windows.
Open hardware
People are becoming more demanding when it comes to closed and proprietary software and hardware. The explosion of embedded systems in the form of smartphones and tablets, has highlighted how locked down hardware has become, via closed-source bootloaders and unseen firmware. Desktop processors have also become so complex that they run their own firmware that can be updated and it’s the closed nature of this that rings privacy alarms for security types.
Android isn’t an OS! Another Linux option is to go down the Android route. A few brave companies have attempted to push Android laptops and Android hybrid devices, such as the Dell Venue 10 7000 [see Reviews, p17, LXF202]. An alternative to this is to get an Android tablet and pick up one of the many Bluetooth keyboard case options or alternatively just buy a Bluetooth keyboard and mouse, as Android via the handy Linux kernel does support pointing devices and full keyboards. The limitation as with Chromebooks is that many won’t see Android as a full GNU/Linux distro and people do have a reasonable point. We’ve used a Nexus 5 to write on with a separate
Bluetooth keyboard. It works, but multitasking is highly limited, as you might expect: when you Alt+Tab between apps you'll find half the time they've closed and will restart from scratch. Running anything more than Gmail and Chrome tends to be the limit. Personally, we also don't think Android apps work very well on larger screens and we'd prefer the Chrome OS approach of using the web browser, as browser apps are designed for larger screens. However, it does largely come down to what you plan to use your device for. Android tablets and phones are fine for low-resource computing, but run more than a couple of apps and they can struggle.
Despite a few good attempts, Android hybrid laptops tend to be disappointing due to poor multitasking.
Linux workstations When the business world comes knocking companies pay attention, largely as there’s cash to be had. Because of this if you’re on the lookout for a high-end workstation you’re going to have a much better time of it. There are segments of the business world that do demand Linux-supported devices – eg NASA and the petrochemical industry – and happen to have large pots of cash. This also means those workstations aren’t cheap. We’ve mentioned the Dell Precision range and we’ve reviewed the HP Zbook 15u G2 [see Reviews, p17, LXF196] and plan to review the new Lenovo ThinkPad P70. These are
Who knows what's happening inside the deepest, darkest corners of your processor? This has led to calls either for companies to start open sourcing their most secret bootloaders and firmware or for them to release hardware that doesn't carry such locked areas. As a reaction to this, open hardware designs are starting to appear. Whereas once creating your own silicon was hideously expensive, something only a few of the biggest
heavyweight laptops and in many ways are more like mobile desktops. But if you're interested in doing heavy number-crunching on the move, these are more than capable. We should also mention System76.com (a US builder) if you are stateside or can live without warranty support; its Serval WS and Oryx Pro, for instance, are powerful options. The models mentioned offer Full HD or better displays, desktop-level processors – think Core i7 and Xeon models – 8GB+ of memory and dedicated graphics, such as AMD FireGL or Nvidia Quadro, if you need it for visualisation plus OpenCL/CUDA GPGPU power.
corporations in the world could foot, now Joe and Jill Blogs can have a go. What has this got to do with buying a Linux laptop? Just as Trisquel powers the Libreboot X200, so Linux is at the heart of these new open hardware projects. The highest profile of which is the Novena project, which successfully – at this point anyhow – launched on Crowd Supply, raising over $700,000 over an original target of $250,000. The system is based around the Freescale i.MX6 ARM system-on-a-chip. Unlike many other SoC
“As a reaction to this, open hardware designs are starting to appear.”
Novena: the cutting-edge in open hardware development but it’s expensive, slow and pretty amateurish looking.
www.techradar.com/pro
Linux uses are well catered to at the high-end of the market, such as the Lenovo P50 and P70 workstations.
it’s about as open as things will get. The firmware can’t be updated and is readable from the chip. The specifications of the processor are also available without needing a nondisclosure agreement. The main fly in the soup for the Novena project is the lack of hardware accelerated video. The Freescale SoC does pack a GPU, but as is so often the case it’s a closed-source driver. To give credit to Novena it stuck by its guns and went down the software rendering path. The hope and aim is to ultimately reverse engineer the GPU, 2D acceleration is mostly in place but the ultimate prize is 3D OpenGL support. It’s these sort of compromises that makes it unlikely you’d want to opt for the Novena, it’s distinctly for die-hard open hardware fans. While it does offer a quad-core processor running at 1.2GHz, using the Cortex A9 ARM architecture restricts it to 32-bits, so it’s not going to set the world alight in terms of speed, but it is very open. More importantly it points the way for smaller SoC manufacturers that there is a demand for open hardware. While Intel may never open up its x86, there is enough competition in the ARM market that this could happen. It’s just a matter of time. So at this point you should be fully armed with everything you need to know to successfully buy a solid Linux laptop: from off the shelf models and Ubuntu certified offerings to self-install and open-hardware options. Today Linux compatibility is excellent and as long as you take a little care not to pick up a system with a locked UEFI you can expect a happy laptop life. LXF
Mr Brown's Administeria
Jolyon Brown
When not consulting on Linux/DevOps, Jolyon spends his time bootstrapping a startup. His biggest ambition is to find a reason to use Emacs.
Esoteric system administration goodness from the impenetrable bowels of the server room.
Investigatory Powers
How times change. Previously I might have been found impatiently refreshing web pages for the launch of festival or gig tickets, but recently I was hitting the reload button waiting for the release of a government draft policy paper. I was eager to read the Draft Investigatory Powers Bill (which can be found at http://bit.ly/DraftIPBill). This was published with quite a lot of publicity (and, it has to be said, an unusual amount of pre-briefing to the press). The early-twenties version of me would be horrified!
This is a large document, as it turns out: 299 pages in total, accompanied by another 26 separate documents, which are impossible to summarise in the 300 words allowed for this column! There's a lot of ongoing analysis available on the internet, as you might expect, and the bill is expected to become law at the end of next year after passing through its various phases (committees and such like). There is an opportunity for us all to have our say on this legislation as it passes through Parliament, which I personally think is worth doing, regardless of your political persuasion.
My particular interest was initially the burden any such investigatory law might place on service providers, and to see what, if any, of the pre-bill discussion about encryption made it into the published document. However, the sections on 'bulk equipment interference' (which is basically legalised hacking) and the numerous gagging orders contained in the bill are very interesting too. As sysadmins, we are tasked with ensuring systems are secure and immune to attack. There is much in this legislation that might conceivably affect us in the future (situations may arise, for example, where speaking to anyone about them will land you in prison). It's worth being aware of this bill's progress, so I urge you to read about it.
[email protected].
Docker 1.9 and Swarm 1.0 released
The container bandwagon rolls on with a production-ready Swarm and multi-host networking.
Ahead of DockerCon EU, which was held in Barcelona in November, Docker released some crowd-pleasing updates to its product, which continues to be the focus of much interest in the infrastructure/DevOps arena. Version 1.9 of the container technology included a production release of its multi-host networking, which was previewed in the previous release and allows virtual networks to be created that span multiple hosts. This gives users a much easier way to take complete control over a network topology and over which containers are allowed to speak to each other. It's billed as 'software-defined networking' for containers, and it also allows the underpinning VXLAN driver to be swapped out to fit the particular needs of an individual site.
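To give a flavour of how this looks from the command line, here's a rough sketch (the network and container names are just placeholders, and on Docker 1.9 each daemon first needs to be pointed at a key-value store, such as Consul, before overlay networks will actually span hosts):
$ docker network create --driver overlay app-net
$ docker run -d --name web --net=app-net nginx
$ docker run -d --name cache --net=app-net redis
Containers attached to app-net can then reach one another by name, regardless of which host they end up running on.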
At DockerCon EU, Swarm was shown scaling up to 50,000 containers - which is almost as many as Jolyon has running Minecraft.
There was also news of a completely redesigned volume system for storing persistent application data, which allows for the use of plugins (such as Ceph) – a big improvement over previous solutions. As well as this, Docker announced version 1.0 of its Swarm product, which provides native clustering for Docker Engine. This is similar to other projects (such as Mesos and Kubernetes) but has the advantage of using the Docker API. This potentially makes it easier for developers to scale up their applications using the same consistent calls, from the desktop up to large cloud-provided environments hosting Swarm clusters. Docker has released details of Swarm running 30,000 containers across 1,000 hosts, but if more powerful back-end solutions are needed they can be swapped in (for very large scale production deployments).
Elsewhere, OpenStack delivered yet another release (its twelfth, codenamed Liberty), which included greater integration with Docker and Kubernetes among many other features (in what seems to be an ever-expanding family of projects).
Finally, we here at LXF were sad to hear of the passing of Telsa Gwynne in November. Telsa was a well-known contributor to many open source communities and sat for a while on the Gnome Foundation Board of Directors. Our thoughts are with her family.
ELK Stack
Stop SSHing into your systems and start getting control of your logs with an ELK (Elasticsearch, Logstash, Kibana) cluster.
When it comes to a project that involves creating new infrastructure, I like to consider the 3am scenario. That's when the person on call (which might be me) gets woken up by an operations team or automated alert and is informed that an issue needs their attention. Quite often these conversations are brief, and the person receiving the call most likely didn't take it all in the first time of asking anyway. The question is: how quickly can that person understand what the call relates to, diagnose whatever issue is going on and then either look for a fix or make a call on the next action to take? Or to put it another way, how soon can I get back into bed?
The first part of the question can be mitigated by putting some thought into monitoring and limiting what can actually cause a call-out. In my opinion, a system should only have the audacity to wake me from my beauty sleep – and believe me, I need it – for something really urgent and actionable. If I get woken for something that could be put off until the morning I'll be a) unnecessarily tired the next day and b) very grumpy. I did on-call for many years (and still do, albeit to a much lesser extent), and the heroics you might imagine you're capable of as a singleton in your early twenties or thereabouts don't seem quite as appealing when you've still got to get up and get the kids to school in your thirties or older.
As an aside, if you're working in a culture where your on-call rota guarantees the person on point a terrible night/week/month with a lot of interrupted sleep – stop it. Stop it now. This is unsustainable and suggests you are either monitoring things that really don't need to result in calls or that your infrastructure/application is so bad it needs to be put into intensive care. Get the whole team to stop and examine the list of call-outs (if you're not maintaining a list, start one). Identify what causes the most headaches, examine the underlying issue and deal with it. Rinse and repeat with the second item on the list, and so on. Burnout from consistently missing sleep is no laughing matter.
To go back to the scenario where a sysadmin is sat in their pyjamas cursing the development team, hosting company or ISP (essentially whoever is ultimately responsible for them having to crawl out of bed), it may well be obvious from the alert what needs to be done: a process death might have brought down a service; a filesystem might be about to fill up, etc. But for anything non-trivial – and you shouldn't be getting called for trivial stuff; automate recovery and build redundancy into your service – it's likely that some logs are going to have to be looked at. The information I need to interrogate might be operating system logs, or something generated by an application, but for anything more than the most basic service these logs are going to be generated in different places. Now, the last thing I want to do at 3am is manually SSH into a bunch of different Linux instances and start running less and grep commands. Depending on the type of infrastructure involved, I might be trying to track down errors across several web servers. It might not be clear which one or group is having issues; I might need to cross-reference logs here with logs from a middle-tier application service. Even worse, with the trend towards microservices architectures, I might be contending with dozens of systems or potentially hundreds of containers!
Back in the days of monolithic and n-tier architectures, it was common (and still is), as well as being good security practice, to have a central 'syslog' server acting as a target for client systems to dump logs onto (probably using rsyslog and UDP). These days, implementing this kind of setup is the bare minimum I would do, if only to secure copies of live logs for audit purposes. There are a number of options for dumping logs to 'write once' destinations, ranging from cheap and cheerful to enterprise-class (read: expensive) log aggregators. At least with this kind of arrangement I can look for issues in one place – but it still means having to manually work my way through logs. On one system I worked on a few years back that followed this model, the team gradually built up sets of commands and scripts to try and quickly pull information out of the amalgamated files, but all too often there was nothing for it but to trawl through the output of several egrep and awk commands piped together.
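Incidentally, that bare-minimum central syslog arrangement needs very little configuration on each client. A single rsyslog rule along these lines forwards everything to the collector (the hostname here is obviously a placeholder, and your distro's file layout may differ):
# /etc/rsyslog.d/50-forward.conf – ship all local messages to the central log host
# A single @ means UDP delivery; use @@ for TCP instead.
*.*  @loghost.example.com:514
Restart rsyslog on the client and the collector starts receiving a copy of everything that client logs.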
Logstash’s logo wouldn’t look out of place on Catchphrase.
Can I pay someone to do this all for me?
Elastic is the company behind the ELK stack (it was known simply as Elasticsearch until March this year). As with a lot of companies founded on open source, it offers a variety of paid support contracts for its software stack, as well as other 'enterprise' type arrangements for some additional software. The big selling point of a contract is, of course, that these are the people who actually write most of the code for these products. At the highest support level (price on application) they will provide emergency patches for bug fixes. Having taken a fair amount of funding, the company has also expanded into offering cloud-based hosting for an ELK setup – a 'managed' solution, in other words.
The company's commercial offerings include Shield (which provides encryption, role-based access control, IP filtering and auditing for an ELK stack), Watcher (which enables ELK to provide, for example, monitoring capabilities based on anomalies in the data) and Marvel (extra tools for investigating the status of an Elasticsearch deployment, auditing capability and optimisation/fine-tuning). There may be a superhero-based naming scheme going on there, come to think of it. You can take a look at the company's open source projects (of which there are many) at https://github.com/elastic.
However, as with any open source system, there are alternatives for ELK stack support. Logz.io is one such company offering specialist support and hosting, while Amazon has its own 'Elasticsearch Service', which can stand up an Elasticsearch cluster directly from the AWS console. This brings the usual AWS-type product benefits – automatically replacing failed nodes, easy scaling and so on – and aims to replace existing on-premise ELK setups.
I know what you're thinking – he's going to suggest a better way of working here, isn't he? Correct! Suffer no more, sysadmins. Leave the Dark Ages behind and examine your logs in a modern fashion! In this issue and the next I'm going to look at what's commonly known as the ELK stack (which stands for Elasticsearch, Logstash and Kibana). These three components together provide a really powerful tool for analysing the data produced by all kinds of systems. First of all, let's look at what each element of this stack actually brings to the table.
Logstash collects, processes and forwards logs (and other kinds of data). The project boasts being able to "process any data, from any source." To back this up, there are over 200 integrations available for the software, allowing Logstash to hook into all kinds of output sources. Logstash runs on the JVM (it's actually written in JRuby), so needs the JRE (version 7 or later) installed to work. As well as doing basic log processing, Logstash can manipulate data as it transfers it to its destination using what are known as 'pipelines'. These essentially consist of input, filter and output plugins that split and transform the data into a form that can be stored in Elasticsearch (or used elsewhere – but we're only really concerned with the ELK stack here).
Elasticsearch is the ultimate drop-off point for our processed data in this setup. It's written in Java and is, in fact, based on the popular (and venerable) Java search and indexing engine, Lucene. It's billed as a highly scalable, full-text search and analytics engine. It's capable of many things, but in this scenario I just want to use it to analyse and mine logs for useful information.
Kibana is written in JavaScript and provides the front-end for our log analysis powerhouse. It's open source, as are the other elements, and can be used to search, view and ultimately interact with the data stored in Elasticsearch. It too can be very powerful, and it runs straight from the browser. It's here we'll be spending most of our time once we have the data we need.

We'll need to briefly configure Kibana to start seeing all our data.
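To make that pipeline idea a little more concrete before we start installing things, here's a minimal, purely illustrative Logstash configuration – the file path and field names are just examples – that reads syslog-style lines from a file, splits them into fields with the grok filter and prints the result to the terminal:
input {
  file { path => "/var/log/syslog" }
}
filter {
  # grok pulls structured fields out of each free-text line
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} %{DATA:program}: %{GREEDYDATA:msg}" }
  }
}
output {
  stdout { codec => rubydebug }
}
Swap the stdout output for an elasticsearch one and you have the basic shape of what we'll build below.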
Let’s get logging
Before getting some initial setup done, I wanted to quickly look at the work needed on the clients (in this case, our many Linux systems). I'll admit that previously I've had misgivings about installing Java specifically just to run a log-forwarding agent (albeit Logstash is written in JRuby), especially in development shops that didn't want to touch Java with a bargepole. Luckily, the agent element of Logstash has now evolved into a project called Beats, which are lightweight processes written in Go. There are a number of different Beats available; I'm specifically looking at Filebeat, which replaces the old logstash-forwarder application. In actual fact, these agents can dump data directly into Elasticsearch if needed (but that means missing out on some of the cool transformational stuff Logstash can do).
Now onto the main event: getting everything up and running. Elastic maintains its own package repositories with the usual distro selection available. I'll stick to my usual Ubuntu 14.04 setup, which I'm sure is getting boring for many of you! I'm going to install the ELK stack on one VM to begin with and then have some clients send it some logs, before looking at some of the 'fun' stuff in the next issue. Take a look at http://bit.ly/ElasticReposSetup, which details the public keys involved etc. The steps can be summarised as follows:
$ wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
$ echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
$ sudo apt-get update
Note: this specifically avoids using add-apt-repository, as there is no deb-src repo available. Here I'm using 2.x as the version, following Elastic's recommendation. I won't cover installing Java (a prerequisite for installing Elasticsearch itself), which comes with the usual 'official Oracle packages' vs OpenJDK dilemma (OpenJDK should be fine, according to Elastic, and was what I used here). Installing Elasticsearch is then as simple as running:
$ sudo apt-get install elasticsearch
This will download about 28MB. Ubuntu will install it as a service as expected if installed via apt, which can be kicked off with:
$ sudo service elasticsearch start
Jumping over to /var/log/elasticsearch and taking a look at elasticsearch.log should show everything up and running – the (heavily edited) output appears below, after the boxout.
Alternatives to ELK
I've probably mentioned it before, but one of the things I really like about open source is that there's usually an alternative available when it comes to choosing software. It allows ideas to evolve and things to improve over time (hopefully, anyway). Parts of the ELK stack can be replaced, for example. Fluentd is an alternative to Logstash, written in Ruby (and handily has drivers available for Docker), which has a large installed user base (I work with some clients who call this arrangement a 'FEK' stack).
Graylog is an open source-based company which uses Elasticsearch (and MongoDB) as part of its setup, along with its own alternatives for the 'LK' portions of the stack. Grafana can act as an alternative to Kibana (apparently – I haven't tried it yet); Kibana itself I've seen criticised online as being too heavyweight (it recently went up to a new major version, which often causes ructions in a user base, of course). The 200-pound gorilla in this space, though, is Splunk, which isn't open source but does have free tiers available if you want to run small setups (the product is excellent in my experience – but extremely expensive). Other commercial SaaS alternatives are SumoLogic and Loggly. Be aware, though, that in some environments having logs sent to a third party might not be a possibility (or might require that a lot of safeguards are in place). If you're evaluating these kinds of infrastructure, double-check to make sure you won't run aground on any industry standards (such as PCI).
[INFO ][node ] [Screaming Mimi] starting ...
[INFO ][transport ] [Screaming Mimi] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[INFO ][discovery ] [Screaming Mimi] elasticsearch/A2FH81ZjTGaSb-HFYioXkQ
[INFO ][cluster.service ] [Screaming Mimi] new_master {Screaming Mimi}{A2FH81ZjTGaSb-HFYioXkQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[INFO ][http ] [Screaming Mimi] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[INFO ][node ] [Screaming Mimi] started
[INFO ][gateway ] [Screaming Mimi] recovered [0] indices into cluster_state
This shows my new Elasticsearch cluster (of one node, which promptly elects itself as the master) as being up and running. Note the random name allocated to it. I can install Logstash at this point as well (which weighs in at a rather large 78MB). Again, I need to add its repo to /etc/apt/sources.list:
$ echo "deb http://packages.elastic.co/logstash/2.1/debian stable main" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install logstash
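Reading log files isn't the only way to check the node is alive, incidentally: Elasticsearch answers HTTP requests on port 9200, so a quick (entirely optional) sanity check from the same VM is:
$ curl 'http://localhost:9200/_cluster/health?pretty'
This returns a small JSON document whose status field should read green – or yellow later on, once indices with replica shards exist on a single-node setup, since those replicas have nowhere to go.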
Copa Kibana
For Kibana, the situation unfortunately seems to involve downloading a file directly. The latest version (4.3) doesn't seem to be available in the repos (4.1 was, which suggests the repos aren't being maintained for Kibana at least – and Elastic says 4.3 is the version compatible with Elasticsearch 2.x). I'd suggest setting up a dedicated kibana user and group to own the software, and installing it under /opt/kibana, but you may have your own preference. I installed the 64-bit version of Kibana via:
$ sudo adduser kibana
$ sudo mkdir /opt/kibana
$ sudo chown kibana:kibana /opt/kibana
$ sudo su - kibana
$ cd /opt/kibana
$ wget https://download.elastic.co/kibana/kibana/kibana-4.3.0-linux-x64.tar.gz
$ tar -zxvf kibana-4.3.0-linux-x64.tar.gz
As the kibana user I can now head into the newly created directory structure kibana-4.3.0-linux-x64/bin and start it up. Kibana should start with no issues and print some output as per the (edited) example below:
$ ./kibana
log [13:20:47.173] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
log [13:20:47.210] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [13:20:47.323] [info][listening] Server running at http://0.0.0.0:5601
log [13:20:52.394] [info][status][plugin:elasticsearch] Status changed from yellow to yellow - No existing Kibana index found
log [13:20:55.949] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready
Kibana connects to an Elasticsearch instance running on localhost by default, which is fine for my tests here. I can connect to this (empty) Kibana installation by pointing a browser at my VM on port 5601. To stop Kibana running for now, I just hit Ctrl+C.
An easy test for our new ELK cluster is to consume logs from its local host. I quickly need to install Filebeat (using my own user, not kibana):
$ curl -L -O https://download.elastic.co/beats/filebeat/filebeat_1.0.0_amd64.deb
$ sudo dpkg -i filebeat_1.0.0_amd64.deb
I can now take a look at /etc/filebeat/filebeat.yml. There are a couple of sections to note at this point: "paths" defines which log files should be handled by Filebeat. To have events shipped to Logstash I need to uncomment the line #hosts: ["localhost:5044"] in the logstash section, and I also want to add a line enabled: false at this point in the elasticsearch section of the file.
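For orientation, the relevant parts of filebeat.yml end up looking roughly like this – a sketch based on the Filebeat 1.0 defaults, so treat the exact keys and paths as illustrative rather than gospel:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/*.log
      input_type: log
output:
  # the elasticsearch output is disabled so events flow through Logstash instead
  logstash:
    hosts: ["localhost:5044"]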
Next, I need to install a plugin for Logstash to handle input from Filebeat. A quick:
$ sudo /opt/logstash/bin/plugin install logstash-input-beats
takes care of that. I also need to configure Logstash by creating a file /etc/logstash/conf.d/config.json and putting the following lines into it:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Now I can start everything up:
$ sudo service logstash start
$ sudo service filebeat start
Switching back to my kibana user, I can start Kibana back up as per the steps above and reconnect with my browser. In the Index name or pattern field I can replace the logstash-* default simply with * . This will cause a green 'Create' button to appear. Clicking on that, and then on the 'Discover' option at the top, shows that my local system has started having its logs saved in Elasticsearch. Next month we'll start looking at how to interrogate this data. LXF

Guess which idiot (me) forgot to take a screenshot of an actual running installation!
KDE Plasma 5
Plasma 5
Jonni Bidwell takes us on a tour of KDE Plasma 5, one of the finest desktops out there.
KDE 4's January 2008 release was met, as is traditional, with a barrage of criticism. Initial concerns focussed on instability and a lack of polish, moved on to the over-configurability of the manifold 'plasmoids' and finally settled on the burden that it placed on system resources. The end product was radically different to KDE 3.5, which jarred long-time users. But this is the price of getting contemporary, and KDE 4 undoubtedly brought the desktop environment out of the Windows XP era. Transitioning to the Qt4 toolkit made for slicker-looking applications, and the switch to the Plasma framework allowed for a consistent desktop informed by modern design elements.
Yet for all the changes, KDE 4 largely stuck with the traditional desktop metaphor, with menus and application pagers and system tray icons. Compare this with, say, Gnome 3, whose minimal stylings and stark departure from this paradigm attracted (and still attracts) plenty of criticism. KDE users could sit back smugly while users of the rival DE struggled to get their heads around the new regimen. Indeed, one could reasonably argue that dissatisfaction with Gnome 3 was directly responsible for the Unity, Cinnamon and Mate desktops coming into being.
But now, KDE is no more. Which isn't to say there isn't a new, exciting desktop to follow KDE 4 [otherwise what the heck are you writing about? – Ed], there is, it just isn't called KDE 5. You can find out what it is called by reading the box (see p60; TL;DR, it's KDE Plasma 5). But what of this new arrival? Well, it's nothing if not impressive. It's a ground-up rewrite, but users who have spent the last five years working with KDE (SC) 4 will have no problem transitioning. Indeed, users coming from any desktop environment (even those of certain proprietary OSes) will find it pretty and intuitive. Furthermore, because everything has been arranged with simplicity in mind, it's at least as accessible to complete beginners as any of the competition (this includes you, fruitloops).

"Users coming from any desktop environment will find it pretty and intuitive."
Despite still being a 'traditional' desktop environment, Plasma 5 looks nothing if not contemporary. The new Breeze theme brings a flat, clean material design, with much of the old clutter from Plasma 4 consigned to oblivion. KDE 4 often stood accused of being a clunky, bloated memory hog; levelled against its successor, however, this criticism doesn't really stand. Yes, it does make extensive use of compositing to provide fading and transparency effects, and yes, all the eye candy and features mean that Plasma's memory footprint is non-trivial (about 700MB on a system with 6GB of RAM), but it remains slick and responsive at all times. In particular, there is very little background CPU activity when the desktop is idle, or even when you start dragging windows around like a mad thing. This was on an aging Core 2 Duo CPU circa 2006, so cutting-edge hardware isn't required.
Plasma's user interface is built using the QtQuick 2 framework. All the UI elements are drawn on an OpenGL(ES) scenegraph, which ensures that most (if not all) of the rendering effort is handled by the GPU. Some effects are enabled by default: windows stretch while they are being maximised, they go semi-transparent during moving and resizing, and switching desktops transpires with a satisfying glide transition. Some will want to disable these effects, but for many they are actually useful – eg it's helpful to see what's underneath that window you're currently repositioning. The less useful but occasionally satisfying wobbly window effect is also there, for those that care for such frippery.
Multiple desktops
Everyone loves virtual desktops, but Plasma 5 takes this one step further with the introduction of Activities. Designating a new Activity (eg 'work', or 'social media addiction') allows you to configure which applications are open and where. Privacy settings can be tweaked on a per-Activity basis, so you could create an 'amnesiac' Activity that doesn't remember which documents you open, or only does so for certain applications. Shortcut keys can be set up so that Activity switching is only a keypress away (great for when you're at work and the execs stroll in unannounced). Activities also provide a clumsy workaround for those who want a different background on each virtual desktop. Apparently there are technical reasons for this restriction, and no doubt someone will come up with a better solution soon, but it's disappointing given how much wizardry is apparent elsewhere.
Another thing that may annoy Plasma 5 newbies is the default Start menu, which is called an application launcher. Points of contention include the fact that it's unnecessarily large (it's wide because there are five tabs arranged horizontally); that there's an irksome effect on the helpful 'Type to search' prompt (there is no search box until you do so), which scrolls your username and distribution (suggesting that you are likely to forget them); and the fact that you hover over the lower tabs to activate them, but then opening an application category requires you to muster all your energy and click. However, Plasma is highly configurable, and if you dig around you'll find that there are two other application launchers you can choose from – a classically themed menu, or a fullscreen, Unity/Gnome-style dashboard.
If you obey the type-to-search edict, then within a few keystrokes you'll be presented with a list of relevant applications, recent documents or web pages. Thanks to Baloo (which replaces the ambitious Nepomuk semantic search), all the indexing required to do this voodoo is done behind the scenes with a minimum of interference. Many people are now in the habit of using this kind of live search for navigating the desktop. For some, the idea of having to lug a heavy mouse cursor all the way down to the lower-left corner and click and gesture to start a program is an arduous chore. Fortunately there is also Krunner, which you can access at any time by pressing Alt+Space.

Muon knows all about Gnome apps, but it also doesn't try and hide other packages from you.

The Plasma NetworkManager applet has been updated: it works much better with OpenVPN and supports enterprise WPA(2) setups. It also provides ever-so-nice graphs.
There's no such thing as KDE 5
Use of the KDE tricronym to refer to the desktop environment began to be phased out after version 4.5, which was released with another two letters appended, becoming KDE SC (Software Compilation). Nowadays, the KDE moniker tends to refer to the whole community centred around the desktop. While the underlying Qt libraries have always been separate from the desktop environment that they power, KDE 4 gave rise to a number of auxiliary libraries (collectively lumped together and referred to as kdelibs), some of which were part of the desktop itself, and some of which were only required for certain applications.
In the latest incarnation of the desktop, these libraries have been updated and rearranged: some of their functionality is now provided by Qt components, some have been annexed into a collection referred to as KDE Frameworks (Kf5), and the rest are bundled with any applications that require them. The applications themselves constitute a suite called KDE Applications, and the new desktop environment is known as KDE Plasma 5.
Decoupling the applications from the desktop environment is a bold move, but it's certainly mutually beneficial: Plasma users are free to pick and choose which applications they want to install, and users of other desktops can install a KDE application without bringing most of the desktop with it. Likewise, the compartmentalisation of Frameworks and Plasma allows LXQt to be what it is: a lightweight Qt5-based desktop that relies on a couple of Kf5 libraries whilst being entirely independent of the Plasma desktop.
Fear not, you can still plaster your desktop with rotatable widgets to improve productivity.
Tomahawk is a feature-packed, and optionally Qt5-powered, music player that lets you listen to sweet music from the antipodes. Coincidentally, Tomo-haka means ‘war dance’ in Maori.
This will open a minimal run dialog in the top centre, which you can use in the same way as the live search from the application launcher.
Gripes aside, it's hard to overstate just how slick Plasma 5 is; neither our screenshots nor the words of our underpaid writer can do it justice. One must give credit to the efforts of the KDE Visual Design Group here, who have achieved all this through an entirely open and democratic process. In particular, the Breeze icon theme is a tour de force, consisting of no fewer than 4,780 icons, which all but guarantee that your application toolbars and launchers will look consistently beautiful. Breeze uses monochrome icons for actions and context menus, whereas applications and folders are depicted colourfully.
The default desktop configuration has been carefully designed to be as usable and inoffensive as possible. Criticism of KDE 4's over-configurability (handles on everything) has been heeded without overly locking things down. The hamburger menus on the taskbar and desktop can be easily hidden once you've added whatever widgets you desire, and there are plenty of widgets to choose from, including Vista-inspired analogue clocks and post-it notes. Most settings have found their way into the System Settings applet. This is a welcome change: almost everyone who used KDE 4 experienced the frustration of remembering vividly that some setting exists somewhere, while discovering exactly where required an exhaustive survey of the nooks, crannies and context menus of the entire desktop. There are still some stray options – eg the Desktop Settings panel is only available by right-clicking the desktop, and it's also the only place you can turn the Desktop Toolbox back on. Even within the System Settings applet, some options are deeply interred behind three tiers of categorisation. Fortunately most of these follow a reasonable hierarchy, so you'll be spared the labyrinthine wanderings of the ol' days.

"Gripes aside, it's hard to overstate just how slick Plasma 5 is."

LibreOffice doesn't really fit in with the rest of the Breeze theme, stylee toolbar buttons notwithstanding.

The power of the trinity
By demarcating strict boundaries between desktop, libraries and applications, the KDE team has introduced a new way of looking at where the desktop ends and other components begin. Among the KDE Frameworks 5 collection, we find Baloo (a new stack for searching, indexing and gathering metadata), Solid (a hardware integration and discovery framework) and KDED (a daemon for providing system-level services). Plasma 5 consists of the Kwin window manager, the Breeze theme, the system settings application, application launchers and the like. KDE Applications include the Dolphin file manager, the Kontact PIM suite and Kstars, the celestial mapping program.
The separation of the trinity also allows each project to develop more or less independently, so KDE Frameworks has opted for a faster-paced monthly cycle, whereas Applications and Plasma have opted for a more conservative three-month cycle. Allowing these groups to develop at their own pace has had the slightly quirky side-effect that, while Plasma will have reached version 5.5 by the time you read this, and Frameworks version 5.17, a number of core applications are still in the process of being ported to Qt5/KDE Frameworks 5. Be that as it may, you can still try out Plasma (sans shiny Qt5 versions of Konqueror and Okular) without touching your current install by using an appropriate live CD. For example, Fedora 23, Ubuntu 15.10 (both on the LXFDVD) and OpenSUSE Tumbleweed all ship a Plasma 5 flavour. Alternatively, so long as you don't have KDE 4 installed, most distributions (distros) allow you to add some repositories (repos) to get the goodness. Of course, distros closer to the cutting edge, such as Arch and Fedora, include Plasma 5 as standard, and pre-release versions of Kf5-powered applications can be got from the AUR or copr repos, though they should not be considered stable. You can check the porting status of the whole Applications family at http://developer.kde.org/~cfeck/portingstatus.html. Applications 15.12 is scheduled for release in mid-December 2015, though some of its constituents will still depend on the old kdelibs stack; Frameworks 5 purists will want to cherry-pick their applications accordingly. The venerable, identity-challenged (is it a file manager? Is it a web browser?) Konqueror still relies on the older libraries, but the newer Dolphin file manager doesn't.
It's interesting to note that the KDM display manager has been nixed. Perhaps a desktop doesn't include the gateway by which it must be entered, or maybe the team just have plenty of other things to worry about. At any rate, there are plenty of alternative display managers; the one KDE recommends is Simple Desktop Display Manager (SDDM), which uses the Qt5 toolkit and can even use Plasma 5's Breeze theme. Of course, one could equally well use Gnome's GDM, or LightDM (used by Ubuntu), or even no display manager at all (fiddle with .xinitrc and use startx).
After years of mockery, Ubuntu has finally abandoned its titular Software Center application and adopted instead the more functional Gnome Software. KDE used to have a similar tool called Apper, but that too has been abandoned in favour of Plasma 5's Muon. All of these tools work (or worked) through the PackageKit framework, which abstracts away the specifics of the underlying package manager, making for a completely distro-agnostic GUI for simple package management. Muon is really two applications: Muon Discover, which has a store-front feel, and Muon Updater, a simple tool that lives in the system tray and tells you when updates are available for currently installed packages. Muon works with AppStream data, so users can discover applications rather than packages, which can be a harder concept to grasp. Muon isn't trying to step on the toes of your noble package manager; that will still work just fine, and advanced transactions will still require it to be used directly. The AppStream effort merely allows for updates to be done from the desktop, which is a reasonable thing in a modern desktop environment.
Enter Wayland
Plasma 5.4 introduced a technology preview of Wayland, the next-generation windowing library which will, one day, replace the venerable X.org display server. At the moment this just allows desktop users to fire up Weston (the reference compositor for Wayland) inside an X11 window and run supported KF5 applications with the -platform wayland argument. It only works with drivers supporting KMS (so not the proprietary ones), and we're still a long time away from burying X.org. Most of the Wayland effort within the KDE camp is directed by the needs of Plasma Mobile, which you can now run on a Nexus 5 smartphone if you're feeling particularly brave.
As with all modern desktops, some degree of 3D acceleration is required. The compositor can render using OpenGL 2.0 or 3.1 back-ends, or even the more CPU-based XRender. Users of newer Nvidia cards have reported some tearing artefacts during video playback or gaming, but these can be fixed by disabling the compositor for fullscreen windows. There will be issues with the OpenGL back-ends for really old graphics hardware, but any modern integrated graphics will cope just fine, as will most graphics cards since the mid-2000s. So it may be worth investing £25 on eBay if your PCI-e slot is empty. Looking to the future, the OpenGL context can now be accessed through EGL rather than GLX, provided there is an appropriate driver. This will be essential for Wayland, but X.org will still be de rigueur for all distros for at least another year.
There are plenty of great Qt applications available, and many of these have been ported to Qt5. However, sooner or later you'll come across one that hasn't. Fortunately it's easy enough to theme Qt4 applications so that they don't look too out of place. This is almost true for GTK applications too: the Settings panel does allow GTK theme selection, but we've yet to find a theme that exactly matches Breeze.

These garish triangles seem to have become the default background. There are plenty of others bundled with the desktop, if you prefer your eyes not to bleed.
Historically, people have used the Oxygen-GTK theme here, but this is no longer supported by GTK3 and so is no longer an option. There are, however, Gnome-Breeze and Orion, which look similar but not identical. The Arc theme (https://github.com/horst3180/Arc-theme) definitely has flatness in common with Breeze, and is sufficiently pretty that you'll forgive any inconsistency. We did run into some theming issues with certain heavyweight GTK applications (Firefox, LibreOffice and Inkscape), mainly relating to fonts in menu bars. Gnome applications, such as Gedit and Files, looked much nicer, however.
And here concludes our treatment of a truly marvellous desktop (plus its underlying libraries and associated applications). If Unity has you yearning for horizontal taskbars, or LXQt/Mate have you yearning for whistles and bells, then this could be the desktop for you. Parts of Plasma 5 are still a work in progress, so you might run into the occasional unpolished edge, or Kwin-related crash, but these should not detract from what it is: a truly next-generation desktop that doesn't forget all the previous generations. LXF
Convergence
Among the many criticisms levelled at Gnome 3 and Unity, the most frequently uttered is that these desktop environments force upon their users an interface that looks like it belongs on a touchscreen. Certainly both of these desktops have at least reasonable touchscreen support (both support multitouch gestures), but users actually making regular use of it are very much in the minority. Plasma 5 also has reasonable touchscreen support, but it's immediately apparent that, at least in its default state, it has been designed to serve under traditional mouse and keyboard rule.
Both Windows and Ubuntu have much to say on convergence – the idea that you can take your phone running the respective OS, plug in a display and some peripherals, and then will occur a strange prestidigitation wherein the OS transforms to make use of the extra hardware. Plasma 5 will eventually support convergence, but not at the expense of the traditional desktop experience. A great deal of work went into the development of Plasma Active, a mobile interface based on KDE 4, and efforts to port this to Plasma 5 are underway, with the project now being called Plasma Mobile. This project is, in fact, heavily allied with Kubuntu. For what it's worth, neither Windows 10 Mobile nor Ubuntu Touch is particularly polished, and until these mobile platforms are ready, any talk of convergence is, for practical purposes, largely moot.
The best new open source software on the planet
Alexander Tolstoy offers a tasty side order of hot and spicy (free) sauce to go with this month's rack of hand-picked open source apps for you to scoff down.
Mate, GNU LibreJS, Nuntius, N1, Double Commander, Deadbeef, Lincity-ng, Powermanga, Cadubi, Arista, Eiskaltdcpp
Desktop environment
Mate
Version: 1.12 Web: http://mate-desktop.org
Modern Gnome is a desktop environment where the traditional desktop computing metaphor meets best practices from the mobile world of easy-to-use touchscreen devices. But if we cast an eye back to the past, it wasn't always this way: we remember when the previous generation of Gnome (the 2.x series) dominated Linux desktops, which helped bring it to an enterprise-level quality. Gnome 2 inspired Red Hat's engineers to roll out their beautiful Bluecurve theme, the first attempt to unify the look and feel of GTK-based and Qt-based applications. A couple of years after that, Novell triumphed with its innovative XGL desktop, which was also based on Gnome 2. There are many other reasons why Gnome 2 matters and why many people miss the desktop, despite Gnome Shell (3.x) also being very good.
Mate is a fork of Gnome 2, and a successful attempt to take over the development of its quickly-abandoned code. Visually Mate is nearly identical to its predecessor; you can only set it apart by looking at the logo. For compatibility reasons, Mate avoids clashing with Gnome 3 applications by renaming the ones it forked from Gnome 2: Nautilus here is called Caja, Gedit is Pluma, File Roller is Engrampa and so on. It takes time to get accustomed to the new names, but once you do you'll find yourself being very productive with this familiar, practical and blazingly fast desktop.
Mate is less a nostalgic walk towards a dead end and more about bringing the traditional strengths of Gnome 2 into the contemporary world. This shows in a number of things, such as its maturing integration with GTK3 (version 3.18 is now supported); better handling of multi-touch devices; stable and solid operation in multi-monitor configurations; Systemd support; and numerous improvements and usability fixes (covering indicators, applets and session management). Mate's development is backed by the mighty Linux Mint team, which ships Mate as an officially supported desktop (along with Cinnamon) in its distribution (distro). That's a simple way to try Mate, but it's also widely available for many other distros, including Debian, Fedora, OpenSUSE, Arch Linux and many others. See the official guidelines on the project's website.

Mate's default desktop layout is already familiar to millions of Linux users.

"Bringing the strengths of Gnome 2 into the contemporary world."

Exploring the Mate interface...
Main menu: The essential Applications-Places-System menu gives access to all your apps, mounted partitions and configuration tools.
Desktop: Place your files and folders right on the desktop. The default layout has the Computer, Home folder and Trash icons.
File manager: Caja is a cousin of Nemo, and both struggle to retain useful features that were eventually cut from Nautilus.
Lower panel: The second panel doesn't occupy much vertical space, but it's useful for controlling running apps and switching between virtual desktops.
System tray: The tray is very similar to what we see in Ubuntu's Unity desktop environment. You can populate this area with various indicators and extras.
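If your distro doesn't ship it by default, installation is usually a one-liner. Package names vary, but on a Debian- or Ubuntu-based system the whole desktop can typically be pulled in with the mate-desktop-environment metapackage (treat the exact name as an assumption and check your distro's documentation):
$ sudo apt-get update
$ sudo apt-get install mate-desktop-environment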
Firefox extension
GNU LibreJS
Version: 6.0.10 Web: www.gnu.org/software/librejs
Each month in HotPicks we're bound by the limitation to review only open source software, and only once the licence is OK and we have the source tarball does an application qualify for the once-over. But how far can real freedom stretch? LibreJS is an outstanding example of code liberation without constraints. The idea, which is supported by Richard Stallman, is to give you more freedom while you're surfing the internet. Back in 2009 Richard Stallman wrote an article in which he explained that we may run non-free JavaScript code without even knowing it. To address this issue, GNU LibreJS was developed. It's a Firefox and GNU IceCat extension that blocks non-free, non-trivial JavaScript while allowing JavaScript that's free and/or trivial.
Getting GNU LibreJS to work is as simple as going to https://ftp.gnu.org/gnu/librejs/librejs-6.0.10.xpi and allowing the installation. Once you do this, a LibreJS icon will appear on the add-on toolbar (in Firefox it sits in the upper-right corner of the window). Each time LibreJS detects what it thinks is non-free JavaScript code, it will automatically disable it and display a special vertical pull-tab listing the code that LibreJS can complain about. LibreJS makes complaining easy by heuristically working out where to send the complaint, detecting contact pages and email addresses that are likely to be owned by the maintainer of the site. You may also whitelist domains and subdomains in the Preferences of the add-on so that they bypass the JavaScript check. Sometimes it is also possible to whitelist certain pieces of code and re-enable some functionality that LibreJS has blocked.
In practical terms, the resulting experience of LibreJS in action is an odd one: it leaves you with non-working online videos and broken field auto-completion, and blocks many more interactive and helpful features. But if you value freedom most of all, as Stallman does, then this is the price you'll have to pay.

Heed the words of Stallman – don't let Web 2.0 websites fool you with suspicious JavaScript code!

"LibreJS is an extension that blocks non-free, non-trivial JavaScript."
Notification utility
Nuntius
Version: 0.2 Web: https://github.com/holylobster
Desktop computers and smartphones seem to be merging ever closer together, as the former adopt best practices of the mobile world and the latter grow in terms of horsepower. The idea of bringing Android and desktop Linux a little closer has been mooted for years, perhaps since Apple integrated iOS notifications into OS X, and the first approach was called KDE Connect. Despite its name it can work in Unity and Cinnamon (an extra appindicator is required), but KDE Connect pulls in a lot of KDE-related dependencies, which isn't really that helpful.
This time, then, we're looking at an alternative solution called Nuntius, which was designed specifically for Gnome and its new notification system that first appeared in version 3.16. In a nod to that, the new design offers heads-up style notifications combined with Gnome's calendar in the centre of the top panel. Nuntius consists of two parts: the desktop integration package and the Android app (search for 'nuntius' in the Play Store). Once you install both, you'll have to grant the standard rights on the Android side and make sure that Nuntius notifications are turned on in Gnome on the desktop Linux side (check out the Notifications section in Gnome's system settings). Unlike KDE Connect, which relies on Wi-Fi, Nuntius will want to establish a Bluetooth pairing between your smartphone and desktop. Once you're done with pairing, you'll notice the 'Running with 1 connection' string in Nuntius on Android.
From this moment on, incoming calls, text messages and all other notifications pushed out on the Android side will appear on your Gnome desktop, and in our experience it works marvellously! Even though Nuntius is in an early stage of development, it already whistles the standard Android sound when you receive an SMS and plays it through your desktop speakers. The only real requirement for the whole thing (apart from having an Android-powered handheld) is a Bluetooth adaptor.

You can leave your smartphone in your jacket and not miss an important call or the next internet meme.

"Notifications on the Android side will appear on your Gnome desktop."
Email client
N1
Version: 0.3.20 Web: https://github.com/nylas/n1
N1 is a remarkable email client that stands apart from big players such as Thunderbird or Evolution. The first peculiarity is that while you're welcome to build it from source, if you want a pre-built package you'll have to request an email invitation code and be a bit patient. Later on it'll become clear that you can't skip the code request: once N1 is launched, it needs that code to be entered in the only input field of the first-time setup wizard. As you'd expect, though, the process is free.
The second main difference is the way N1 works. Unlike conventional email clients that download your inbox via POP3 or IMAP, N1 connects to its own Nylas Sync Engine server and listens to it. All authentication and data flow is performed on the server side (which is open source too). The client is basically a web page that's re-rendered each time it detects changes, and the whole construction is very flexible and extensible. N1 also seems to be the only email application with built-in developer tools for writing new plugins, which are all found in the Developer menu. If you know, or want to learn, Jasmine and CoffeeScript, read the docs at https://nylas.com/N1/docs.
For lesser mortals, N1 is a classic desktop application that supports Gmail, Yahoo, iCloud and Microsoft Exchange services and provides a very friendly wizard to set up your accounts during the first run. Currently, N1 only has a few plugins available, such as an email translator for the compose window, a templates manager, mail rules, a phishing detector and some others. We tested the client using a Gmail account and found N1 to be extremely responsive and very stable. The Nylas project has very good documentation too, which is primarily targeted at developers but is still useful for normal users – eg you can learn how to run your own local Nylas Sync Engine in case you need more control over your privacy.

We particularly liked the light/dark theme switcher.

"N1 seems to be the only email application with built-in dev tools."
File managers
Double CMD
Version: 0.6.6 Web: http://doublecmd.sourceforge.net
Old habits die hard: just look at how many Windows users still run a twin-panel file manager on top of a modern material design UI. While white-bearded men stick to the acid-blue Far Manager, others sanely prefer Total Commander, which is a Swiss Army knife for handling files. When tired of laggy performance on Windows, people come to Linux and soon want a native clone of Total Commander. We have one, and it's called Double Commander.
It shows a two-panel detailed view of your file system, so you can place the source on the left and the target on the right (or the reverse) and then copy or move your files between the two. Traditional key bindings are carefully retained, allowing you to copy files with F5, move with F6, create a new directory with F7 and delete with F8.
Double Commander has its own built-in text editor with syntax highlighting and line numbers, which you can invoke with F4. There are so many features in Double Commander that it'd take pages to review everything, but here are some highlights. There's a dedicated tool for bulk renaming, which works for both files and directories and supports masks. Double Commander also has a very handy queuing tool for manually reordering file operations and changing their priority. Crucially, it does all jobs in the background, letting you carry on working while something is copying, moving or being deleted. Additionally, you can extract and compress files in archives transparently, as if you're working with regular directories. Another good feature is its support for Total Commander plugins, which are widely available on the internet in WCX, WDX and WLX formats.
Currently, Double Commander has two different interfaces (GTK2 and Qt4) and is included in many Linux distros. The official website always has the most recent builds in RPM and Deb form, as well as a portable build which will run on any system.

Another way to get all your files and folders in order is to use the Double Commander file manager.

"Double Commander has a dedicated tool for bulk renaming."
Direct Connect client
Eiskaltdcpp
Version: 2.2.10 Web: http://bit.ly/eiskaltdcpp
F
or years file sharing has been one of the core elements of what people do on the internet, and it’s resulted in versatile technologies for getting in touch and sharing something. Together with BitTorrent, Direct Connect (DC) is one of the most recognised and widely used methods. DC is a peer-to-peer file-sharing protocol where users connect to a centralised hub and download files directly from each other. When combined with IRC-like chat, it can form the foundations of alternative social networking and is primarily used within local communities as the download speed is higher when two users physically reside in the same area. Eiskaltdcpp is a contemporary and feature-rich DC client, and a successor to the once famous Valknut. Under the hood Eiskaltdcpp supports DC and ADC (advanced DC) protocols. It runs as a daemon with a connecting
interface. It can also download files in several streams; automatically recognise a router’s UPnP settings; auto-update an external IP address via DynDNS and block spam. The application’s exterior exists in two versions: GTK2- and Qt4-based, but we found the Qt4 version looks and feels better under Gnome or Unity. To start using it you’ll need to set your user name, default Downloads directory and other essentials in the Preferences window. If you’re behind a router, you must specify TCP and UDP ports that are forwarded to your local network under the Connections section and finally select something that you want to share under the Sharing
section. Certain hubs may require a minimum amount of shared data in order to connect to them, often something like 3-10GB. As always, take care over the legal status of your shared files: don’t share commercial software or copyrighted content. After that, press the ‘Quick Connect’ button on the toolbar and enter the name of the hub. There are hundreds of public hubs that you can easily find with a simple Google search. Once connected, read the header in the chat panel, as it often contains instructions to register on the hub with your username.
Join the chat and find new friends within seconds. It’s much easier than Facebook and more private.
“A feature-rich DC client, and a successor to the famous Valknut.”
Music player
A smart and self-contained music player with advanced file format support.
Deadbeef
Version: 0.6.2 Web: http://deadbeef.sourceforge.net
The cottage industry around the development of new music players for Linux continues, and this time we’ll take a look at Deadbeef. The name may sound strange, but it’s just a magic hex number that spells a word, in this case 0xDEADBEEF. Visually, the player looks neat and compact: it has playback and volume controls at the top and a playlist area below, inviting you to drag and drop music there. There are some interesting features that make Deadbeef distinctive, which you may want to try. First, it automatically splits CUE files into tracks and enables gapless playback for FLAC, APE, TTA, Ogg Vorbis, Wavpack, WAV, MPAC and ALAC formats. Unlike many other players that cut off silence at the start and end positions to achieve the gapless effect, Deadbeef plays the exact number of
samples that are stated in the files. This is useful for audiophiles who rip CDs to files and need accurate playback everywhere. Another outstanding feature is that Deadbeef depends on neither GStreamer nor system-wide FFmpeg or MPlayer libraries; instead it’s bundled with its own set of decoders, which makes it a good choice for minimalist Linux installs. The Deadbeef interface is quite modest, but the playlist area supports tabs and allows you to keep several playlists open at a time. Apart from a 16-band equaliser, which can be enabled under the Playback menu, there are no visual extras, but once you go to the Preferences section you’ll find lots of configurable options. Most of the plugins (*.so files in /usr/lib64/deadbeef) can be configured and there are some useful tweaks, eg you can change the buffer size in the PulseAudio output plugin, change the OSD notifications template, set a custom CD database address and much more. Deadbeef is a dark horse and isn’t widely available, so getting it installed may require adding a third-party PPA, eg for Ubuntu it’s ppa:starws-box/deadbeef-player.
“Splits CUE files to tracks and enables gapless playback.”
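On Ubuntu that boils down to something like the following; the PPA name comes from the review above, while the package name deadbeef is our assumption of what the PPA provides:
$ sudo add-apt-repository ppa:starws-box/deadbeef-player
$ sudo apt-get update
$ sudo apt-get install deadbeef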
HotGames Entertainment apps
City simulator
Lincity-ng
Version: 2.9 beta Web: https://github.com/lincity-ng
Do you feel you ought to be in control of most things? Then put your organisation skills to the test by managing a whole virtual city. In Lincity-ng you’re committed to building, developing and maintaining a city, and your end goal is to achieve a sustainable economy. The game is an improved and polished version of the older Lincity, which in turn is a clone of the famous classic SimCity. In the latest game you can build a city from scratch or start with a scenario where there is already a city with some issue or other, such as a lack of food, depopulation, insufficient power supply and so on. Your job is to fix it and manage the budgets, because city funds can be easily frittered away.
The graphics will make SimCity veterans smile; the panels, menu and control buttons look very child-like and remind us of Tuxpaint. In the game itself, there have been improvements to texture quality and visual effects. Trees, highways, plants, power lines, cars and other objects have been carefully redrawn and now look more realistic. The game is playable on basic graphics cards and integrated Intel HD chips, but as your city grows Lincity-ng may start lagging a bit, because it has to render more and more textures. Of course, Lincity-ng still needs your imagination. For instance, road traffic is not shown as live movement, but displayed in a special chart. Aside from graphical fancies, Lincity-ng is a very good attempt at recreating the mechanics and gameplay of the original SimCity game. The easiest install method for Lincity-ng is to compile it from source, as there are very few pre-built packages for the latest release other than for Ubuntu 15.10.
The power plant fume is rendered so beautifully that we forgot that it’s pollution.
“A very good attempt to recreate the gameplay of the original SimCity.”
Arcade shooter
Powermanga
Version: 0.93 Web: http://bit.ly/Powermanga
You may remember the Astromenace game, which we featured previously in Hotpicks [see p64, LXF192]. The game was a classic vertical-scrolling arcade shooter, where you surfed space in your ship fighting aliens and avoiding asteroids. Astromenace felt like a modern game thanks to its rich graphics and various effects, but there’s also another game with a nearly identical plot but a different style: Powermanga. The game has a long history with roots in the MS-DOS games of the 1990s, but we’re mostly interested in it since November 2000, when it became open source. The game’s maximum screen resolution is 640x480 and its system requirements are very modest: Powermanga is happy to run on an Intel 386 CPU and 32MB of RAM.
They may look like bees and baby toys, but they’re actually blood-thirsty aliens!
“Every four levels a giant monster vessel bars your way.”
The action proceeds through 42 levels, each harder than the previous one. Levels are designed in such a way that you first enter small skirmishes, then fight a fleet of enemies, followed by a jaunt through asteroid fields. Some enemies leave power-ups and upgrades that equip your ship with extra shields and arm it with more advanced weapons, like a missile launcher. Every four levels a giant monster vessel bars your way and you’ll need to destroy it. The menu system in Powermanga is extremely simple. You can’t save your progress, but some options can be passed to the game if you launch Powermanga from the terminal. Using $ powermanga --hard will make the game a real challenge (by default the --easy flag is used). Other flags you can use are: --window for windowed mode or --nosound to make the game silent. We recommend playing Powermanga in hard mode, as it significantly ups the thrill factor and keeps you wanting to complete all the levels. Getting the game installed is a piece of cake: after decades of being open source, it’s in all major distros.
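Putting the flags mentioned above together, a windowed, silent test run on the default difficulty would look like this:
$ powermanga --window --nosound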
Drawing tool
Cadubi
Version: 1.3-2 Web: https://github.com/statico/cadubi
Cadubi stands for Creative ASCII Drawing Utility. It’s written in Perl and is, if you haven’t figured this out, designed for drawing text-based images that are viewable on a typical Unix console. Usually the applications that emulate these consoles support various text modes, such as background and foreground colours, bold and inverse. This text art, which is commonly called ASCII art, is used primarily for fun or for showcasing your turbo-charged shell prompt to friends. In the past, it was commonly used on online bulletin boards (BBSes) and in text-based email applications etc. In fact, if you liked the Escape the GUI feature [see p52, LXF197], Cadubi would be a perfect companion to the selection of applications that we mentioned. While there are some utilities that are able to transform bitmaps into ASCII art, Cadubi significantly simplifies
the process. Once you run $ cadubi , you’ll see a blank field and a cursor. We’d suggest the next step should be to hit Ctrl+h and examine the very handy Cadubi keys legend. It describes the simple drawing tool, which can paint both the background and the foreground. Cadubi uses a pen, which describes the mode you’re using. Its properties are the painting character, foreground colour, background colour, bold, inverse and blink. To select the background colour, you need to press b and then enter a code from 0-8, according to the legend. The foreground colour can be selected using the f key, and the symbol the pen paints is changed after pressing p. There are also a few more options, such as toggle pen bold (g) or toggle text mode (t), which are easy to remember. In our experience, you’ll find yourself hitting the arrow keys and the space key to experiment with amateur ASCII art in minutes, and the whole experience can feel very zen. Cadubi also enables you to save results in a plain text file (Ctrl+o), which will be correctly displayed either by Cadubi itself or by many Unix shell commands, like cat .
Colour blending would be a nice addition, but even without it Cadubi is a gorgeous tool.
“Experiment with amateur ASCII art in minutes.”
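A typical session, then, might end like this – the filename is purely an example of our own:
$ cadubi
(draw, save with Ctrl+o as mypicture.txt, then quit)
$ cat mypicture.txt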
Conversion tool
Arista
Version: 0.9.7 Web: http://transcoder.org
Arista is a small GTK-based application for converting and transcoding one media format to another, eg ripping DVDs to a file or optimising existing video files for certain popular devices, such as the iPod or PlayStation. Arista is a small utility; it’s just a set of Python scripts and GTK bindings, but it manages to have a great set of features. This is achieved by outsourcing everything regarding file formats and support for external devices to general Linux and Gnome components, such as GStreamer and udev. On one hand this keeps Arista compact and robust, but it also ties it to Gnome or other desktop environments which share components with it. The elegance of Arista is the way it makes complex things simple, focusing on the devices you wish to play the media on. The application is designed
to be used by people who aren’t familiar with audio and video encoding and want an easy way to get multimedia onto their devices. Once started, hit the ‘Create conversion’ button and select a source, which can be your optical drive, a V4L-compatible device (webcam, scanner), a file or a folder (such as a local copy of DVD contents). The field just below the button should be filled with the destination directory, and once you’re done you can select the desired preset from the dozen listed in the lower part of the conversion editor. Take note of the buttons along the bottom: press the plus (+) symbol to create a new preset, or press the ‘info’ sign to edit the currently selected one. The default list of presets is a sensible selection and includes target formats for Apple devices as well as for ageing Nokia Maemo/MeeGo smartphones of the pre-Windows Phone era and, as you’d expect, open source codecs (Ogg Vorbis and Theora etc). Arista may be a little outdated, but it includes more recent presets and even has a dedicated button to fetch the latest version. In our tests Arista showed that it still works just fine and looks good too. LXF
Arista enables you to convert videos from 4K to something that will work on your smartphone.
“Arista showed that it still works just fine and looks good too.”
Get into Linux today!
Issue 205 December 2015
Issue 204 November 2015
Issue 203 October 2015
Product code: LXFDB0205
Product code: LXFDB0204
Product code: LXFDB0203
In the magazine
In the magazine
In the magazine
We howl at the perfect form of Ubuntu 15.10, pretend to review lots of video players by watching our old movies and take Unity for a spin. Plus we show you how to get gaming in Linux and coding in Lua.
Stream it! Build the best Ubuntu media centre. Sync it! Our Roundup of the best synchronisation tools. Code it! Use Glade to design a lovely GTK interface. Er… Blend it? How Blender is taking Hollywood by storm.
LXFDVD highlights Ubuntu 15.10 32-bit & 64-bit, Kubuntu 15.10 and more.
LXFDVD highlights Ubuntu 15.04, Kodibuntu 14.0, Emby, OpenELEC and more.
Our definitive guide to every key Linux distro (that you can then argue over with your mates), the best filesystem for you, plus inside the Free Software Foundation, a swig of Elixir, Kodi 14.2 on a Pi and Syncthing.
Issue 202 September 2015
Issue 201 Summer 2015
Issue 200 August 2015
Product code: LXFDB0202
Product code: LXFDB0201
Product code: LXFDB0200
In the magazine
In the magazine
In the magazine
Improve your code and become a FOSS developer with our Coding Academy, plus the best Chromebooks of 2015, the inner workings of WordPress and a nice chat with Nginx’s Sarah Novotny.
LXFDVD highlights
UberStudent 4.1, WattOS R9, OpenMediaVault 2.1 and more.
With the release of Windows 10, Linux goes toe to toe with the Redmond OS to see which wins out. Also this month: the best server OS for you, EFF’s definitive privacy guide and getting into LaTeX.
LXFDVD highlights
AntiX 15-V, Mageia 5, 4MLinux 13.0, Clonezilla 2.4.10 and more.
Celebrating our 200th issue, we chart 15 years of covering Linux, serve up 200 of the top Linux tips, uncover classic interviews from open source giants as well as our usual bevy of tutorials and roundups.
LXFDVD highlights
Mint 17.2 Cinnamon, OpenSUSE 13.2 KDE, Bodhi 3.1.0 and more.
LXFDVD highlights
Fedora 22 Workstation, Ubuntu 15.04 (32-bit), Sabayon 15.06.
To order, visit myfavouritemagazines.co.uk
Select Computer from the all Magazines list and then select Linux Format.
Or call the back issues hotline on 0844 848 2852 or +44 1604 251045 for overseas orders.
Quote the issue code shown above and have your credit or debit card details ready
GET OUR DIGITAL EDITION! SUBSCRIBE TODAY AND GET 2 FREE ISSUES*
Available on your device now
*Free Trial not available on Zinio.
Don’t wait for the latest issue to reach your local store – subscribe today and let Linux Format come straight to you.
“If you want to expand your knowledge, get more from your code and discover the latest technologies, Linux Format is your one-stop shop covering the best in FOSS, Raspberry Pi and more!” Neil Mohr, Editor
TO SUBSCRIBE Europe? From only €94 for a year
USA? From only $116 for a year
Rest of the world From only $123 for a year
IT’S EASY TO SUBSCRIBE... myfavm.ag/LinuxFormat CALL +44 (0)1604 251045 Lines open 8AM-9.30PM GMT weekdays, 8AM-4PM GMT Saturdays Savings compared to buying 13 full-priced issues. This offer is for new print subscribers only. You will receive 13 issues in a year. If you are dissatisfied in any way you can write to us to cancel your subscription at any time and we will refund you for all un-mailed issues. Prices correct at point of print and subject to change. For full terms and conditions please visit myfavm.ag/magterms. Offer ends 31 January 2016.
Gaming Install and set up Games for organising your Linux games collection
Linux gaming: Organiser app
Bring the games you’ve installed from various sources together in an easy to use Gnome 3 app. Matt Hanson shows you how.
Our expert Matt Hanson loves playing games on any platform – as long as it’s fun and lets him put off real work, he’ll play it.
Gaming on Linux is getting easier than ever before, with services such as Steam offering you easy access to hundreds of new and classic computer games. While the growing number of services and websites that offer Linux-compatible games is certainly welcome – allowing an increasing number of gamers to move from Windows – it can mean your games collection becomes unruly, with no consistent way to find and launch them. Not only are games saved in different folders and hard drives, but some require you to launch them through another software platform or interface, such as Steam. If you run emulators then things get even trickier, with the games being saved as ROM files that require specific emulator programs to open them. This can make launching your games, as well as keeping track of what you have installed, frustrating. However there’s an excellent app called – rather appropriately – Games 3.18.0 that brings all of these games together in one easy to use interface. Every game you own (as long as it’s supported by Games, which we’ll come to in a bit) can be found and launched from within the application, and any additional software that the games need to run is launched as well. It works in much the same way a music player, such as Rhythmbox, does: it scans your hard drive for the appropriate file types and then runs them, along with any codecs or compatibility plugins to make them run smoothly. Just as you don’t need to change media players when you want to play an MP3 or an OGG file, the same is true with Games. It’s a solution that on the surface seems quite simple, but in the background Games isn’t only executing the game but also configuring the sound and visual outputs, inputs (such as game controllers) and running wrappers and emulators. The end result for the user is a streamlined and simple interface. Many emulators also allow you to save the state of the emulator, effectively pausing the game. You can then resume the game and start where you left off next time you want to play. This has been a welcome feature especially for games which don’t support save files, and thanks to the Libretro API (which is used by many emulators, as well as Games), you can suspend and resume games directly from the Games application. It’s little details like these that make this a great tool for organising your games library.
Games is a handy application in early development that brings all your games together in one interface.
What is Steam?
You’ve probably heard us mention Steam in these pages before. It’s a platform created by Valve (perhaps best known for the Half-Life game series) and is a store front for hundreds of games. At its heart, it’s a method of DRM and although we’re not huge fans of that, it does come with loads of cool features such as cloud saves, which enable you to save your progress on one machine and pick up where you left off on another, and in-home streaming. This feature is in its infancy at the moment, but already allows you to stream games from one PC over your home network to another PC. At the moment you can’t stream from one Linux PC to another, though streaming from a Windows PC to a Linux PC is possible. Another reason why we’re so keen on Steam is that it has been working hard on increasing the number of Linux-compatible games it sells. Traditionally games you buy through Steam need to be launched through the application, but you can now use Games instead.
How to install Games 3.18.0
1 Does your distro support Games?
Before downloading Games you’ll want to check that your distro is supported. Currently only two distros are: Arch Linux and Fedora. However, as long as you can run Gnome then you should be good to go – though it might take a little longer to figure out solutions if things go wrong on an unsupported distro.
2 Prepare for JHBuild
The creators of Games recommend installing the app through the JHBuild tool, so to do that you’ll need to get the right programs and tools. In Fedora type sudo yum groupinstall “Development Tools” and then sudo yum install docbook-style-xsl libxml2-python to get the tools you need to download and install JHBuild.
3 Install JHBuild
Next type in:
mkdir ~/jhbuild
cd ~/jhbuild
git clone --depth=1 git://git.gnome.org/jhbuild
cd ~/jhbuild/jhbuild
./autogen.sh --simple-install
4 Getting the files for Games
You can get the files needed to install Games directly from https://wiki.gnome.org/Apps/Games/Download. You’ll need to download gnome-games-3.18.0.tar.gz as well as the required libraries retro-gobject-0.4.tar.gz and retro-gtk-0.4-fixed.tar.gz. It also lists a number of recommended plugins to make it work with emulators, such as retro-plugins-game-boy-0.4.tar.gz.
What’s supported?
Games can handle a wide variety of game file types and in the future the list of supported games will grow. At the moment regular games installed natively on Linux are supported, as well as games installed through the Steam for Linux client. Games running on the LÖVE and Doom engines are also supported, as well as emulated games for the NES, SNES, N64, GameCube, Wii, Amiga, Game Boy, PC-Engine and more.
If you head over to the official wiki (https://wiki.gnome.org/Apps/Games/Roadmap) you’ll see a roadmap of proposed features that will be added in future updates, such as support for the Xbox 360 controller, PlayStation platform support and integration with the Gnome Shell to allow you to search for games. One upcoming feature that’s being worked on at the moment is plug-in support for game types, making it easier to maintain. You wouldn’t need to install compatibility files for games from platforms you’ll never use, so it should make the Games app more lightweight as well. Improved metadata is also on the roadmap, which will allow you to sort and organise your collection much more easily – with support for covers and icons that will make browsing your collection more attractive. We take you through the steps for downloading and installing Games (see above), and if you like what you see then why not consider contributing to the project by lending a hand with some coding, design work, game testing or writing the documentation needed for the project? You can contact the creators in the #gnome-games room on the Gnome IRC server (irc://irc.gnome.org) and you can file bugs on the GitHub page (https://github.com/Kekun/gnome-games/issues). As the Games application is still being worked on you could encounter some bugs, but it’s already in a usable state for helping to get your games collection organised. LXF
Imaging Use Fog to create a base image and upload it ready for deployment
System image: Clone & deploy Mayank Sharma shows you how to image and roll out several computers.
Our expert Mayank Sharma hasn’t worked as a sysadmin as he’s too busy writing about how to set up and manage Linux for all kinds of tasks on many systems for LXF’s loyal readers.
The Fog server’s web-based dashboard makes the management of complex network deployments easy.
Fog depends on several mature open source tools, such as partclone, to image a computer.
Managing a network of computers is an involved process. Before you can tackle the problem of actively monitoring the machines, you have to install an operating system on each one of them. This is a time-consuming task even for a small network with about 10 computers. Computer cloning involves setting up the operating system, drivers, software and data on one computer, then automatically replicating the same setup on other computers. This technique, known as ghosting or imaging, is used by system administrators for rolling out multiple identical machines over the network without much effort. Fog, which we’ll use here, is one of the most popular open source cloning systems. To use Fog you first need to set up an imaging server. The project officially supports several Ubuntu, Fedora, Debian and CentOS releases, but it’s known to work on other distributions (distros) as well. Before installing Fog make sure the server has a static IP address, which you can ensure from your router’s admin page. For this tutorial we’ll assume our Fog server is at 192.168.3.51. Also ensure that all the machines in your network are configured to boot from the network card. Finally, make sure you disable any existing DHCP servers on the network, as we’ll set up the Fog server as a DHCP server and dole out addresses to all the computers on the network. Once you have your network set up, head to the machine that’ll be your deployment server and download the latest stable Fog release (http://sourceforge.net/projects/freeghost/files/FOG). Then fire up a terminal and extract the downloaded tarball with tar xvf Fog_1.2.0.tar.gz -C /opt , then change into the bin directory under the extracted tarball and fire up the installation script with sudo ./installFog.sh . The installation script will prompt you for several bits of information, such as the version of Linux you’re running it on,
the type of installation, the IP address of the server, the router and the DNS server and whether you’d like to setup the Fog server’s own DHCP server. In most cases, it’s best to go with the default options suggested by the installer, but make sure you enter the correct IP addresses for the server. The script will install various required components. When it’s done it’ll display a URL for Fog’s dashboard (such as 192.168.3.51/fog/management). Open the link in your web browser and log in with the default credentials (fog:password). On initial launch you’ll have to load the default settings into the server’s database by clicking on the button on the page. The first order of business when you are at the proper admin dashboard is to create a new user. To do this head to User Management > Create New User.
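Before moving on, here is the install sequence above gathered in one place. It assumes the 1.2.0 tarball is already in your current directory and that it extracts to /opt/fog_1.2.0 – check the actual directory name after extracting, as it may differ:
$ sudo tar xvf Fog_1.2.0.tar.gz -C /opt
$ cd /opt/fog_1.2.0/bin
$ sudo ./installFog.sh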
Creating a base image
Now that our imaging server is set up, we’ll use it to image a computer. Once a computer has been imaged we can then deploy that image to other computers with a single click. To begin the process, fire up a browser on the imaging server and head to Fog’s dashboard and log in with the default credentials. Then head to Image Management > Create New Image. Use the fields in the form to describe the image, eg, let’s assume we are creating an image of Fedora Workstation 22 installation that we’ll then install on all our computers in the Science Lab. So we can name the image ‘Fedora for Science Lab’. Next, use the Operating System pull-down menu to specify the operating system of this image, such as Linux. Finally, select the correct disk layout scheme from the Image Type pull-down menu. Our Fedora installation is on a single disk with multiple partition so we’ll select the second option. Now assuming you’ve already installed Fedora on one of the
computers on the network, head to that computer and boot it up. Since the computer is set to boot from the network card, it’ll display the PXE boot environment from the Fog server. Scroll down the Fog menu and select the ‘Quick Registration and Inventory’ option. The Fog server will now scan the computer and add it to its repository of known hosts.
Advanced Fog features
Fog is a complex piece of software and while we’ve covered the core feature of the server, it ships with several more. The Fog server is scalable and can manage large networks spread over multiple locations in the same building or around the world. It allows you to arrange hosts into several groups for easier management. One of the most useful features of the Fog server, especially for admins of larger networks, is the multicast ability. Using this feature you can deploy multiple machines in one go. However, to use this feature successfully you’ll need to make sure your Fog host has enough computational and network resources to stream multiple images simultaneously. For such larger networks, you can have multiple Fog installations configured as storage servers. These storage servers share images and take the load off the main Fog server when imaging computers. The distributed storage servers also speed up unicast transfers and introduce data redundancy. Besides the two most important Fog server tasks that we’ve covered in this tutorial (uploading and downloading images), you can create several different tasks for any of the hosts in Fog’s repository. For instance, you can run the Debug task, which boots a Linux image to a Bash prompt for fixing any boot errors. You can also create a task to remote wipe hosts, to recover files with TestDisk or to scan for viruses with ClamAV. The Fog server can also install and manage printers on the network. Depending on the operating system on the host you can also use the server to track user access to computers by their Windows usernames and automatically log off users and shut down the computer after a specified period of inactivity. Fog can also install and uninstall apps via snapins.
You can deploy and image your computers by accessing the Fog dashboard from a mobile device like a tablet.
Uploading an image
When it’s done, shutdown the Fedora computer and head back to the Fog server. Fire up the dashboard and head to Host Management > List All Hosts. The Fedora server will be listed here. By default, Fog identifies each host by its MAC address. You can change it to something more meaningful (like ‘Fedora 22’) by clicking on the ‘Edit’ icon. Here you can change its name and add a brief description to identify this computer. Most importantly, use the Host Image pull-down menu and select the Fedora 22 image you created earlier. Now that our basic framework in ready, it’s time to image the installed Fedora installation. Head back to Task Management > List All Hosts which will list your rechristened Fedora 22 installation. Under the Task section corresponding to this image, click on the green upload arrow. Fog will give you multiple options to schedule the upload task. You can explore the options after clocking some mileage with Fog but for now it’s best to go with the default option for instant deployment. Then head back to the Fedora machine and boot it up. It’ll again detect Fog’s PXE and automatically image the machine and upload it to the Fog server. The process will take some time depending on the size of the disk it has to image, the processing capabilities of the computers involved and the speed of the local network. The Fedora computer will restart once it’s done uploading the image. You can now use Fog to deploy this Fedora image
Once a host is registered you can query its hardware and get compatibility information before imaging it.
on all the lab computers with a single click! You can similarly image any other computer on the network, including the new Windows 10 installations. Before you can deploy an image, you need to register the targets machines as hosts with the Fog Server. The registration process is the same as before. Boot the new computer from the network which should detect Fog’s PXE environment. When it does, select the ‘Quick Registration and Inventory’ option. When you’ve added the computer to Fog’s repository of known computers, login to the Fog dashboard and head to Host Management > List All Hosts. Click on the ‘edit’ icon corresponding to the newly added machine and rename it so that it’s more identifiable, something like Lab PC #1. Again, remember to use the Host Image pull-down menu to select the Fedora 22 image that we’ve just imaged from another computer. Repeat the process to register all the computers in the lab with the Fog server. Then edit them in the Fog dashboard to give then an identifiable name and select the Fedora image as the host image. Now we need to replicate the Fedora image on to the other lab computers, which we do by heading to Task Management > List All Hosts. Browse through the list of hosts to find the entry for the computer you wish to deploy to and select the corresponding down arrow ‘Deploy image’ option. After the deploy task has been created, head to the lab computer and power it on. It’ll automatically detect the task from the Fog server and copy the image from the server on to the local machine. When this is done, you’ll have a mirror copy of the Fedora installation on the Lab computer. Finally, you need to repeat the process to deploy Fedora on other Lab computers as well. LXF
RAID How to set up, troubleshoot and monitor different types of RAID
RAID: Create & manage arrays Avoiding puns about insect sprays, Neil Bothwick shows you how to make multiple disks fly and how to fix them when things go wrong.
Our expert Neil Bothwick
has a great deal of experience with booting up, as he has a computer in every room, but not as much with rebooting since he made the switch to Linux.
Using mdadm --detail gives plenty of information about an array, /proc/mdstat is more succinct.
Last month we looked at using LVM [see Tutorials, p72, LXF205] to manage partitioning and multiple disk drives. There is another technology used when multiple drives are involved, called RAID. To give it its full name, a Redundant Array of Inexpensive Disks combines multiple disks into a single block device to give extra capacity or redundancy in case of failure. The simplest form of RAID is two disks in what is called a RAID 1 array. In this case the two disks are mirrors of each other. All writes take place to both disks (buffering makes sure this does not impact performance) while reads are performed from whichever
A testing setup
If you want to experiment with RAID without touching your hard drive partitions, you can create some virtual disks like this:
$ for i in {0..3}; do
>   dd if=/dev/zero of=diskfile$i bs=1 count=1 seek=10G
>   losetup /dev/loop$i diskfile$i
> done
This creates four files, then creates block devices that access those files. Now you can use /dev/loop0 to /dev/loop3 in place of the disks in the examples. You’ll lose the loop devices if you reboot, but not the files, so just run the losetup command again.
drive is able to serve the data the fastest. This means you get a slight boost in read performance, no noticeable difference in write performance and the same capacity as with only one disk. Where you benefit is in data security. Because everything is written to both disks, if one should fail your data is still available from the other. Not only that, but when you remove the faulty disk and add a new one to the array, data will be automatically copied to it in the background so you regain the security of two copies of everything as quickly as possible.
Types of RAID
There are various levels of RAID, which describe the way in which data is spread across the various disks (see RAID levels in brief). There are also three different ways of implementing RAID: Hardware RAID, Software RAID and FakeRAID. Hardware RAID, as its name implies, is implemented entirely in hardware, either via a controller card or on the motherboards of some server systems. No matter how many disks you connect to a hardware RAID, you only have one show up in the OS. Hardware RAID is fast but has two drawbacks: it’s expensive and it uses its own disk format. That means that if your controller fails you will need a compatible replacement to be able to read your disks. Software RAID does everything in software and is implemented in the Linux kernel. With modern hardware, the performance is similar to hardware RAID but far more flexible in the control it offers, and the ability to read disks on a different system. This is what we are looking at here. If your motherboard claims to support RAID but didn’t cost hundreds of pounds, it’s most likely to be what is called
‘hardware assisted software RAID’ or fakeRAID. The controller has just enough RAID capability to load a driver from the disks, then it becomes software RAID. This generally only works with Windows. We have referred to disks several times, but software RAID can work with any block device, and is often implemented at partition level rather than disk level.
Enough talking, let’s create a RAID 1 array on /dev/sda3 and /dev/sdb3 – change the devices to suit your system. These should be spare partitions, or you can use image files (as described in the A Testing Setup box, p76). As we are working with device files in /dev, you need to be root, so either open a root terminal or prefix each command with sudo . The main command for working with software RAID devices is mdadm .
$ mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sda3 /dev/sdb3
If you want to save on the typing, you could use:
$ mdadm -C /dev/md0 -l 1 -n 2 /dev/sd{a,b}3
but we will stick to the long options here for clarity. You have now created a block device at /dev/md0 (software RAID devices are generally named /dev/mdN) that you can format and then mount like any other block device:
$ mkfs.ext4 /dev/md0
$ mount /dev/md0 /mnt/somewhere
When things go wrong
Hopefully that’s all you need to do: your array has been created and is used as a normal disk by the OS. But what if you have a drive failure? You can see the status of your RAID arrays at any time with either of these commands:
$ cat /proc/mdstat
$ mdadm --detail /dev/md0
Let’s assume you have a failure on the second disk and have a replacement available. Remove the old drive from the array with:
$ mdadm /dev/md0 --fail /dev/sdb3 --remove /dev/sdb3
Then turn off the computer, replace the drive and reboot. The array will still work but /proc/mdstat will show it as degraded, because a device is missing. Now run:
$ mdadm /dev/md0 --add /dev/sdb3
and look at /proc/mdstat again. It will show that the array is now back to two devices and that it is already syncing data to the new one. You can continue to use the computer, although there may be a drop in disk performance while the sync is running. If you have a drive that doesn’t even show up any more, as in the case of a complete failure, you cannot remove /dev/sdb3 because it no longer exists; use the word missing instead of the drive name and mdadm will remove any drive it can’t find. If you already have a spare drive in your computer, say at /dev/sdc, you can add it to the array as a spare with:
$ mdadm /dev/md0 --add-spare /dev/sdc3
Should sdb fail and be removed as above, sdc3 will automatically be added to the array in its place and synchronised. All of these examples use RAID 1 but the processes, apart from initial array creation, are identical for all higher RAID levels.
RAID levels in brief
There are several ways of combining disks to form a RAID array, each of which has its own advantages and drawbacks. In these descriptions, N refers to the number of devices in the array and S is the size of each. The commonly seen levels are:
RAID 0 Not really RAID, as it just joins several disks together. You are better off using LVM for this. No resistance to failure. The total size is N * S.
RAID 1 This is two or more disks with all data mirrored across all disks. It tolerates failure of N - 1 disks; the total size is S.
RAID 5 A RAID of three or more disks with data and parity information distributed so that any single drive can fail with no loss of data. The total size is (N - 1) * S.
RAID 6 Four or more disks with data and parity information distributed so that any two drives can fail with no loss of data. The total size is (N - 2) * S.
RAID 10 A RAID 0 stripe across RAID 1 mirrors, requiring at least four drives. It can tolerate multiple failures as long as no RAID 1 section loses all its drives. The total space is (N / 2) * S.
RAID is normally administered with mdadm at the command line, but there is a RAID module for Webmin if you want a graphical option.
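To put some numbers on the capacities in the RAID levels box (our own worked example): with four 2TB drives, so N = 4 and S = 2TB, you would get roughly 8TB of usable space from RAID 0, 2TB from RAID 1, 6TB from RAID 5 and 4TB from either RAID 6 or RAID 10, all before filesystem overhead.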
Error monitoring
How do you know if a drive is failing – are you expected to keep looking at /proc/mdstat? Of course not, mdadm also has a mode to monitor your arrays and this is run as a startup service. All you need to do is configure it in /etc/mdadm.conf: find the line containing MAILADDR , set it to your email address and remove the # from the start of the line. Now set the mdadm service to start when you boot and it will monitor your RAID arrays and notify you of any problems.
The config /etc/mdadm.conf is also used to determine which devices belong to which array. The default behaviour is to scan your disks at startup to identify the array components, but you can specify them explicitly with an ARRAY line. You can generate this line with $ mdadm --examine --scan . This may be useful if you have one or more slow devices attached to your system that slow down the scan process.
We have built RAID arrays from partitions in the above examples, but you can also create an array from whole disks, for example a three disk RAID 5 array like this:
$ mdadm --create /dev/md0 --level=raid5 --raid-devices=3 /dev/sd{a,b,c}
After creating an array like this, you can use gdisk or gparted to partition it as you would a physical disk; the partitions then appear as /dev/md0p1 and so on. Bear in mind that your BIOS will need your /boot directory to be on a filesystem it can read, so whole disk RAID may not be suitable for your OS disk.
RAID also works well with LVM (covered last month). Create the RAID array and then use that as a physical volume for LVM. That way you get the flexibility of LVM with the data security of RAID. LXF
Hard drives Divide your drive into partitions to protect your data
GParted: Set your partitions Nick Peers reveals how to divide up your hard drive using partitioning to protect your data and make Linux easier to work with.
Our expert Nick Peers has been hooked on Linux – starting with Ubuntu and now Minibian – for 10 years. His preference for GUI tools is tempered by a growing love of the Terminal.
Quick tip If you plan to share a partition with Windows then it needs to be set up as NTFS so your Windows install can read it. You’ll need to verify the ntfs-3g file system add-on is installed.
Partitioning allows you to carve up a single physical hard drive into smaller, virtual drives known as partitions or volumes (although there is a difference, as we’ll explain later). Each volume acts independently of the others, providing a measure of redundancy for data that’s spread across the drive. If one partition develops problems then you can restore it without affecting what’s stored elsewhere. This redundancy provides the first of a number of reasons to partition your drive: it allows you to move your personal data (typically your home folder) to another volume, so it’s protected from any changes made to your system partition. You can then use your Linux installation as a sandbox, rolling back any unwanted changes without touching your data. Another popular use for partitions is to run multiple operating systems on a single PC. You could run two versions of Linux side-by-side or set up a dual-boot Windows/Linux machine for compatibility purposes, eg. This is possible because each partition can be formatted using a different file system, so ext3 or ext4 for Linux, or NTFS for Windows, eg. You can then create a data partition that’s visible to both OSes, ensuring you have the latest version of your documents and other files, whichever OS you happen to be running. It’s also possible to use partitions for other purposes, such as creating a dedicated partition for sharing over your network, or setting up independent partitions for temporary files, your swap file or even to run specific services like a web server. Whatever your needs, partitioning can play a crucial part in improving performance as well as protecting different parts of your drive from each other.
Partition types
Hard drives are partitioned according to a scheme. There are two main types of scheme: the older Master Boot Record (MBR), which is limited to the first 2TB of a hard disk, and the newer GUID Partition Table (GPT). In both cases, partition information is recorded physically in the first sector of the drive in a partition table – GPT drives also store this information at the other end of the drive in the last sector for redundancy purposes. One of the limitations of the older MBR scheme is that it only supports a maximum of four partitions, although one of these can be an extended partition, which in turn can house multiple logical volumes, bypassing this limit. GPT doesn’t differentiate between primary and extended (logical) partitions and supports up to 128 partitions per drive. GPT is designed in conjunction with modern UEFI firmware, but in some cases is backwards compatible with older legacy BIOS systems too. It’s compulsory if your drive is over 2TB in size, but ultimately which partition scheme you use typically depends on what you already have. For simplicity’s sake it’s best to stick with what’s already there, and if you’re planning to wipe the drive completely clean and install Linux from scratch, let your distro decide which scheme is best based on your current setup and stick with it post install. In this tutorial we’ll cover partitioning using MBR, but the process is a similar one for GPT drives. We’re using Ubuntu
The Disks utility provides a handy summary of how your partitions are currently set up.
Working with LVM
If you have set up your Linux install using Logical Volume Management then you’ll find yourself frustrated should you attempt to repartition it using GParted – it simply won’t work. That’s not a problem though, because all you need is the right tool. You can, of course, configure your partitions from the command line, but a better bet is to employ the user-friendly LVM utility. To do so, boot from your Live CD, then open Software Center and search for ‘LVM’. Select ‘Logical Volume Management’ and click More Info > Use This Source > Install. Once installed, launch LVM and you’ll find a more pleasant environment to work in.
Start by resizing your root partition to free up available space: expand Logical View, select ‘root’ and click ‘Edit Properties’. Use the controls to shrink the partition to its desired size and click ‘OK’, then wait while the partition is resized. You can then switch to Logical View and click ‘Create New Logical Volume’ to set up any additional partitions you wish to create, including giving them a friendly name (such as ‘home’ for your home partition). Don’t bother configuring a mount point at this stage – instead, follow steps four through to six of the walkthrough (see p81) once the partition is in place to finish configuring it.
The LVM utility (or as its package is called system-config-lvm) is the easiest partitioning tool to use on LVM-enabled setups.
partition described as Linux (Bootable) and a much larger extended partition inside which is the Linux LVM volume. The primary partition exists because extended partitions aren’t bootable, so the required boot files are placed on this small partition with the rest of your system left on the extended volume. (See Working with LVM box, above).
Partition structure
Remember to take a full drive image of your hard disk before you begin partitioning it. Try Clonezilla.
14.04.3 LTS, but you should be able to translate this to most Linux distros. How your computer is currently partitioned depends on your individual circumstances, of course, as well as the distro you use. To see how your drive has been partitioned, as well as see what type of partition scheme (MBR or GPT) has been implemented, open the Disks utility from the Dash. Select your drive from the left-hand pane and check the ‘Partitioning’ entry to see what scheme it’s been assigned. Beneath this is the Volumes chart, which provides a graphical representation of your disk, revealing what partitions have been set up. The default Ubuntu 14.04 setup, eg, sees your drive partitioned into two: a main Linux partition that’s bootable, with all your applications, data and settings in addition to your Linux distro. There’s also a much smaller extended partition, inside which is the swap file volume. Each partition on your hard drive is allocated a separate entry inside the /dev folder, which is where references to all the components that make up your PC are stored. Each physical hard drive is represented by three letters: hda, hdb and hdc etc for those drives attached to older IDE controllers, and sda, sdb and sdc etc for those attached to SCSI and newer SATA controllers. Each physical drive’s partitions are represented in numerical terms eg sda1 or sdb5. Look under Device to see which entry has been assigned to the currently selected volume in Disks. One complication: you may have been provided with an option to use LVM (Logical Volume Management) during install – if this is the case, then on traditional MBR-based setups, you’ll see a small primary
Partitions are accessed through the file system by mounting them at a specific level. Linux structures its files, folders and partitions as a tree. When you start Linux, your primary boot partition is mounted first at / at the root (or trunk) of the tree. Other partitions can then be mounted at specific folder points above it in the tree – the /mnt and /media folders are a good choice for a partition that’s specifically been created for sharing, eg. It’s also where external partitions – eg from another operating system – are automatically mounted if the partition’s file system is recognised by Linux. It’s also possible to mount partitions directly to key directories like /tmp (temporary files) or /home (the home directories of each user on your PC) when applicable – this provides a seamless and consistent experience regardless of whether you decide to set up dedicated partitions for key folders or not. Mount points are stored in the /etc/fstab file. The best time to partition your drive is when you first set up Linux on an empty drive. (Check out the Partition from Scratch box, p80, for details on how easy this is.) You’ll also encounter partitioning when setting up a dualboot system. In this event, things are made much easier if your existing OS is detected during the installation process – if it is the hard work of partitioning is done for you because the installer suggests a suitable partition layout, which you can tweak to your personal needs based on how much free space is currently available on the drive. That’s not a practical route for most people to take – and thankfully there are third-party tools that can repartition your drive without data loss, although it’s important to note there’s always an element of risk involved. That’s why it pays to take a full drive image of your current setup now, so if things go wrong you can reset and start again. Use a tool like Clonezilla (http://clonezilla.org) or create a snapshot if you’re running your installation in a virtual machine. It also pays to sit down and work out what your requirements are and how much space you have to play with. In the first instance, decide what additional partitions you wish to create and what you’re
Quick tip Another advantage of LVM over regular partitioning is the fact it allows you to add additional physical disks to your computer, then map the additional space on to existing volumes. This allows you to boost capacity without having to move or copy data between drives.
Quick tip It’s possible to add a new partition to fstab without editing it directly – open the Disks utility, select your partition and choose More Actions > Edit Mount Options… to do so. You can’t, however, add the ‘pass’ field using this method, preventing the partition being checked for errors at startup.
As long as Ubuntu recognises the OS that you’re trying to install it alongside, the partitioning process is easy.
creating them for. If it’s to protect them against a reinstall or possible corruption on the main drive, then they can be housed on the same partition; remember too that while this provides some protection against data loss, it’s not a substitute for backing up. If you’re looking to boost performance by moving key folders – your swap file, perhaps, or the usr folder where programs are stored – then these will need to be housed on a separate physical drive to the one Linux is running on. Other considerations include the size of each partition – this depends on its type, and how much spare space you have. Start by right-clicking the folder in question in your file manager and selecting Properties. This is technically the minimum amount of space you’ll need to allocate the partition, but think what else might need to be stored in the folder and add on the extra space accordingly. Note: you’ll be taking space from your primary partition for the new volume, which may limit how much space you can assign to it. Other considerations: should the partition be primary or extended? If you only plan to create a single additional partition then simply resize your system partition and create a new primary partition in its place (as outlined in the step-bystep guide, p81). If, however, you plan to create a number of partitions on your main drive, then using the extended partition is advised (see Partition from Scratch, below). You’ll also have to decide what file system to assign the new partition. If you’re creating dedicated partitions for folders like home or var, then you’ll want to make these the same as your primary Linux partition (typically ext3 or ext4). The same is true for partitions you plan to use as shared folders over your network. Indeed, the only time you’ll want to change the file type is if you’re creating a partition that you plan to share with a Windows installation on the same PC, in which case choose FAT32 or NTFS.
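If you prefer the terminal, the right-click > Properties approach described above has a command-line equivalent in du, which reports how much space a folder currently occupies (the paths here are only examples):
$ du -sh /home
$ du -sh /var /tmp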
When it comes to actually partitioning your drive, then the best tool to use already lives on your Ubuntu Live CD and it’s GParted, a user-friendly graphical front end that makes resizing partitions simple (while also providing a handy graphical overview of the physical layout of your hard drive). You’ll need to boot from this to resize your system partition anyway, so you might as well perform all your partitioning needs while outside your main Linux install.
Partitioning tools
You can also run GParted from its own standalone bootable install on CD or USB from www.gparted.org/livecd.php. Make sure you pick the right architecture, which is typically i586 for older, 32-bit computers, and amd64 for newer 64-bit machines with UEFI rather than legacy BIOS. Users of 32-bit can also experiment with the i686-PAE build if they find the i586 build is a bit sluggish. The step-by-step guide (see right) reveals how to shrink your main partition before creating a second primary partition alongside it using GParted from the Ubuntu live CD. If you’d rather create your new partition or partitions inside the extended partition, then the procedure is slightly different. First, resize the system partition as outlined in the walkthrough. Once done, you need to extend the small extended partition that contains your swap file to take up all the remaining free space. This can’t be done while the swap file is in use, so rightclick the swap partition in GParted and choose ‘Swap off’. Once done, you’ll be able to right-click the extended partition and choose Resize/Move. Type 0 (zero) into the ‘Free space preceding’ box and click ‘Resize/Move’ to quickly allocate all available space to the partition. You can now partition the free space as you wish without worrying about running out of available partitions – just make sure you calculate how much space each partition is likely to need and assign it accordingly, and don’t forget to click the ‘Apply’ button when you’re done. The tools we’ve mentioned are pretty smart on the whole, but potentially dangerous task, which is why taking a full system backup before you begin is essential. If you do run into problems, check out the GParted help pages at http:// gparted.org/help.php. You’ll find handy links to an FAQ with, among other things, a guide to fixing problems with Grub. You may run into problems when attempting to mount new partitions into key folders such as your home folder. If Linux throws up an error on startup, press m to manually recover from it, then type sudo nano /etc/fstab to examine the file and check there are no errors preventing the partition from mounting correctly. If you can’t find any problems, delete the line, save the file and restart before investigating.
Partition from scratch The best time to partition is when you’re setting up a fresh install. In Ubuntu, when prompted to erase the disk, select ‘Something else’ and click ‘Continue’. You’ll see a single device – /dev/sda – is set up. Click the ‘New Partition Table…’ button, read the warning and click ‘Continue’. Now select the free space and click the ‘+’ button next to Change to create a partition. Let’s begin with the main system partition. This can be as little as 15-20GB space, but make
it larger if necessary. Once you’ve calculated how much you need in gigabytes, multiply it by 1,024 and enter the figure in the Size box. Leave ‘Primary’, ‘Beginning of this space’ and ‘Ext4 journaling file system’ selected, choose / under Mount point and click ‘OK’. Select the remaining free space and click ‘+’ again. Click the ‘Use as’ drop-down menu and select ‘swap area’. Set the partition type as Logical, but set its location to ‘End of this space’.
What size (in MB) it should be depends – a rule of thumb is to make it the same size as the RAM installed in your PC (make it double the size if you have 1GB or less). Click 'OK'. Finally, select the remaining free space and click '+' again. Leave the size, type, location and 'Use as' settings as they are, and finally set the 'Mount point' to /home. Click 'Install Now' followed by 'Continue' to set up your partitions and install Ubuntu.
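As a quick worked example of that rule of thumb: a PC with 4GB of RAM would get a roughly 4GB swap partition, entered as 4 x 1,024 = 4,096 in the MB-based Size box, while a machine with 1GB of RAM or less would get double that, ie 2,048MB.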
Partitioning Tutorial Create a dedicated home partition with GParted
1 Launch GParted
Reboot from your Ubuntu Live CD or USB flash drive, opting to 'Try Ubuntu' when prompted. Once at the desktop, launch GParted by typing its name into the Dash. If you have more than one physical disk attached to your PC, make sure the correct one has been selected (typically /dev/sda) by clicking the button in the top right-hand corner of the GParted window.
2 Resize system partition
Right-click the main partition (/dev/sda1) graphic and choose 'Resize/Move'. Click and drag on the right-hand edge of the bar to reduce the partition's size roughly to where you want it to fall based on how much space you plan to allocate your home partition. If necessary, fine-tune the size using the 'Free space following (MiB)' box to set an exact amount. Click 'Resize/Move'.
3 Create home partition
Next, right-click the unallocated space and choose 'New'. Assuming this is the only additional partition you wish to create, leave the default settings as they are and click 'Add'. Nothing has yet been done to your drive, so review your changes and if you're happy click the green tick button followed by 'Apply'. Wait while GParted first resizes your main partition and then creates the new one.
4 Copy files to new partition
Reboot into Ubuntu proper. Open Files, and click the newly visible volume that appears under Devices. Select Go > Enter Location, then select the location in the Address bar, right-click it and choose 'Copy'. Open Terminal, type sudo cp -Rp /home/* before right-clicking and choosing 'Paste' to enter the location of your new partition. Hit Enter and ignore any errors.
5 Update fstab
Once done, you should see the partition is now populated with the files from your home folder. Next, type sudo blkid and press Enter. Make a note of the UUID, then type sudo nano /etc/fstab and hit Enter. Type the following beneath the first line marked UUID=, replacing the placeholder with your partition's UUID: UUID=<your-uuid> /home ext4 nodev,nosuid 0 2
6 Mount new partition
Once you've made your changes to the fstab file, go ahead and save the updated file and close it, then type the following series of commands into the Terminal and press Enter: cd / && sudo mv /home /home_old && sudo mkdir /home Finally, reboot your computer and open Disks to verify that your new partition is correctly mounted at /home. LXF
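For reference, the terminal side of steps 5 and 6 boils down to the short sequence below. The UUID is a placeholder, so substitute whatever blkid reports for your new partition:
$ sudo blkid
$ sudo nano /etc/fstab    # add: UUID=<your-uuid> /home ext4 nodev,nosuid 0 2
$ cd / && sudo mv /home /home_old && sudo mkdir /home
$ sudo reboot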
Encryption Create and place an encrypted storage container on a drive
ZuluCrypt: Encrypt drives Insulate your data another way, Mayank Sharma shows you how.
Our expert Mayank Sharma is a
very private person. He encrypts his cafe latte order every morning to his local barista. He may live in New Delhi – who knows.
While you can control access to the data on your computer using user accounts and file permissions, they aren't enough to prevent a determined intruder from gaining access to your private files. The only reliable way to keep your personal data to yourself is to use encryption. Sure, working with encrypted data is an involved process, but it'll go a long way in reinforcing your security and insulating your data. ZuluCrypt is a graphical encryption application that has an intuitive and easy-to-follow interface. Using the application you can create an encrypted disk within a file, a partition and even USB disks. It can also be used to encrypt individual files with GPG. To install ZuluCrypt head to http://mhogomchungu.github.io/zuluCrypt and scroll down the page to the binary packages section. The application is available as installable Deb package files for Debian and Ubuntu. Download the package for your distro and extract it with tar xf zuluCrypt*.tar.xz. Inside the extracted folder, switch to the folder corresponding to your architecture (i386 for older 32-bit machines and amd64 for new 64-bit ones). Both folders contain four binary packages that you can install in one go with the sudo dpkg -i *deb command. On other distros you will have to install ZuluCrypt manually. Download the application's tarball and follow the detailed steps in the included build-instructions file to fetch the dependencies from your distro's repos. One of the first things you should do after installing is to create encrypted versions of all files that you consider sensitive. Fire up the application and head to zC > Encrypt A File. In the dialog box that comes up press the button adjacent to the Source field and navigate to the file you wish to encrypt. ZuluCrypt will use this information to create a file with the same name and append the zC extension at the end – or save it elsewhere by clicking on the folder icon adjacent to the Destination field and navigating to a new location.
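On Debian or Ubuntu the whole install amounts to something like the following sketch; the archive and folder names depend on the exact version you download, so adjust them to match:
$ tar xf zuluCrypt-*.tar.xz
$ cd zuluCrypt*/amd64    # use the i386 folder on 32-bit systems
$ sudo dpkg -i *.deb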
ZuluCrypt also supports cascade encryption which is the process of encrypting an already encrypted message, either using the same or a different algorithm.
Next enter the password for encrypting the file in the key field. Make sure the password is a mix of characters and numbers to make it difficult to guess. Also remember that there’s no means of recovering the password if you ever forget it, and no possibility of decrypting the file – that’s sort of the point! Once you’ve confirmed the password press the ‘Create’ button to encrypt the file. This process might take some time depending on the type and size of the file you are encrypting. Once it’s done you’ll have the encrypted version with the .zC extension in the destination location you specified earlier. Once a file has been encrypted, make sure you delete its original version. You’ll now have to decrypt the file before you can read and make changes. For this, launch ZuluCrypt and head to zC > Decrypt A File. Point to the encrypted file in the Source field and alter the location of the unlocked file in the Destination field. Now enter the password with which you encrypted the file and click the ‘Create’ button. When it’s done, the decrypted file will be created in the specified destination. To lock the file again, encrypt it by following the previously outlined procedure.
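If you occasionally want a purely command-line way to protect a single file, plain GnuPG symmetric encryption does a similar encrypt/decrypt job; the filename here is only an example:
$ gpg -c secrets.odt      # writes secrets.odt.gpg after asking for a passphrase
$ gpg secrets.odt.gpg     # decrypts it again with the same passphrase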
Encrypted data silos
Individually encrypting files works adequately if you only need to protect a couple of files. Generally, it's a cumbersome process and is only suitable for files you don't need to read or modify regularly. If you need to protect a number of files that you access frequently, a better approach is to file them inside encrypted storage areas. ZuluCrypt can perform block device encryption, which means that it can encrypt everything written to a certain block device. The block device can be a whole disk, a partition or even a file mounted as a loopback device. With block device encryption, the user creates the file system on the block device, and the encryption layer transparently encrypts the data before writing it to the actual lower block device. While encrypted, the storage area just appears like a large blob of random data and doesn't even reveal its directory structure. To create an encrypted storage device within a file, fire up ZuluCrypt and head to Create > Encrypted Container In A File. In the window that pops up you'll have to enter the name and complete path of the directory under which you'll house your sensitive data. It's called a file, because it'll appear as a singular file when it's encrypted. You'll also have to specify the size of the directory depending on the size of the files it'll house and the space available on your disk. When you press the 'Create' button, ZuluCrypt pops up another window. First up you'll have to specify a password for encrypting the file. Next, you'll have to select a Volume Type.
You can’t unlock a volume without a header. In case the original gets corrupted, create a backup by right-clicking on a mounted volume and selecting the appropriate option.
The default option is LUKS, or Linux Unified Key Setup, which is a disk-encryption specification designed specifically for Linux. In addition to LUKS, ZuluCrypt can also create and open TrueCrypt, VeraCrypt and Plain volumes. Plain volumes are headerless encrypted volumes and the encryption information is provided by ZuluCrypt. Because of this, Plain volumes are application-dependent and not very portable. TrueCrypt or VeraCrypt volumes are better alternatives if the encrypted volume is to be shared between Linux, Windows and OS X computers. Once you've decided on the type of Volume, you'll have to pick a cipher, an algorithm that does the actual encryption and decryption. An associated attribute of the cipher is the size of its key. As the key size increases, so does the complexity of an exhaustive search, to the point where it becomes impracticable to crack the encryption directly. The most popular encryption cipher is the Advanced Encryption Standard (AES), which is based on the Rijndael cipher. AES with a key size of 256 bits is widely used as it offers the right balance of speed and security. This is the default cipher in ZuluCrypt. However, the application supports a large number of ciphers including the Twofish algorithm and Serpent. These two are considered by the US National Institute of Standards and Technology to have a higher security tolerance than AES, but are also slower. You can safely select the default values for each field, including the default filesystem for the volume (ext4) and press the 'Create' button. When the process completes, you'll notice a file with the name you specified for the encrypted container with illegible content and size equivalent to what you specified earlier. Before you can store files inside this encrypted volume you'll first have to decrypt and mount it. Head to Open > PLAIN,LUKS,TrueCrypt Container In A File. Use the file button in the pop-up window to navigate to the encrypted container file that you've just created. If you wish you can alter the mount name for the file, else just enter the password and press 'Open'. Toggle the checkbox if you only want to read the contents of the encrypted volume. Once your volume is mounted it'll appear in your file system like any other mounted file system. The main ZuluCrypt window will also list the volume along with the complete mount path. You can now create directories within this mounted location and create files just like you would on any regular mounted device. When you're done, right-click on the mounted volume in the ZuluCrypt interface and select the
‘Close’ option. This will unmount and encrypt the volume and all you’ll have, once again, is the single encrypted file with illegible content. Mount the file again following the procedure mentioned above to reveal its contents. If you have issues managing multiple passwords, ZuluCrypt gives you the option to create random keyfiles which you can then use to encrypt files and volumes. To generate a keyfile head to Create > Keyfile. Now enter the name for the keyfile and its storage path. From a security point of view, you should make sure the keyfiles are not stored on the same hard disk as the files or volumes it encrypts. In fact, it’s best to keep these on an external drive which ensures that your encrypted data remains secure even if someone grabs hold of your drive containing the encrypted files and volumes. To use a keyfile instead of a password, select the keyfile option using the drop-down menu when creating an encrypted volume or encrypting a file. After selecting this option you’ll also have to point the application to the keyfile, which will then be used to lock your data.
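For the curious, the LUKS plumbing that ZuluCrypt drives can also be exercised by hand with cryptsetup. This isn't ZuluCrypt's own command-line interface, just a minimal sketch of the same container-in-a-file idea with made-up file and mapper names:
$ fallocate -l 512M vault.img
$ sudo cryptsetup luksFormat vault.img
$ sudo cryptsetup open vault.img vault
$ sudo mkfs.ext4 /dev/mapper/vault
$ sudo mount /dev/mapper/vault /mnt
$ sudo umount /mnt && sudo cryptsetup close vault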
Scramble partitions and disks
If you want to encrypt large amounts of data, it's best to place the encrypted container inside a partition of its own or even on a removable USB drive. Note that when you create such a container ZuluCrypt takes over the entire partition or disk, so make sure you've backed up any existing data. Also, make sure that the destination partition or drive isn't mounted. Use the mount command to list all mounted partitions. If the partition you wish to use, say /dev/sdb1, is mounted, you'll first have to unmount it with sudo umount /dev/sdb1. Now launch ZuluCrypt and head to Create > Encrypted Container In A Hard Drive. In the window that pops up, ZuluCrypt will list all the available partitions that it can use to house the encrypted volume. Note that the devices are listed both by device name and by the associated UUID. If you are creating a container on a removable disk, make sure you toggle the Use UUID option. This will ensure that ZuluCrypt always correctly identifies the device. Now double-click on the drive/partition you wish to create the volume on. You can now create an encrypted volume on the drive using the same exact procedure we used earlier to create an encrypted volume inside a file. Although at first it might sound cumbersome to use, over time ZuluCrypt will grow on you as you get familiar with the application. There's no easier way for the privacy conscious to keep their data secure. LXF
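Before handing a whole partition or USB stick over to ZuluCrypt it's worth double-checking what's mounted where; lsblk and mount both do the job, and /dev/sdb1 below is only an example device:
$ lsblk -f
$ mount | grep sdb1
$ sudo umount /dev/sdb1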
ZuluCrypt includes the ZuluMount tool that can mount all encrypted volumes supported by ZuluCrypt and also doubles up as a general-purpose mounting tool.
Perl 6: Discover its new features Mihalis Tsoukalos explains the necessary things that you need to know to start taking advantage of the unique features of Perl 6.
Our expert Mihalis Tsoukalos
is a Unix admin, a programmer, a DBA and a mathematician who enjoys writing articles and learning new things.
The installation process for the Perl 6 compiler on an Ubuntu system. Your Linux distro will probably have a similar package that you can install.
Quick tip Perl 6 is here to stay, so the sooner you learn its new features the better it will be for you. Additionally, Perl 6 will definitely make you a better and more productive programmer so there’s no doubt that you should stick with it!
Perl 6 is the latest version and it supports object-oriented programming, including generics, roles and multiple dispatch, as well as functional programming primitives, including list evaluation, junctions, autothreading and hyperoperators. One major new feature is support for multi-cores, and it also supports definable grammars, which increases the pattern matching capabilities of Perl and enables developers to perform generalised string processing. We'll be using Rakudo, a compiler for Perl 6 code. You can install Perl 6 on an Ubuntu distro with sudo apt-get install rakudo. (The full install process is pictured, above.) Despite the name of the package, the compiler's executable is called perl6. Use $ perl6 -v to see the exact version, which will output something like This is perl6 version 2013.12 built on parrot 5.9.0 revision 0. You can execute a Perl 6 file with $ perl6 file.pl. Alternatively, you can create a script with:
$ cat hw.pl
#!/usr/bin/env perl6
use v6;
print "Hello World!\n";
If you execute perl6 without any additional arguments or options, you'll enter a REPL (read–eval–print loop) which is a
new feature. A REPL is also a shell; a simplistic and interactive programming environment that accepts single user inputs, evaluates them and immediately returns the results to the user. It’s also handy for learning the new features of Perl 6.
New changes
If your main program file contains a subroutine called MAIN this will be automatically executed first when the program is launched. This can be helpful for getting command-line arguments and options as it gives you a CLI parser for free. The code below (readWords.pl) demonstrates this:
use v6;
my $count = 0;
sub MAIN($file) {
    print "File: $file\n";
    for $file.IO.words -> $word { $count++; }
    print("Total number of words in $file is $count\n");
}
This code also shows a new way of reading words from a file. As you can see, you no longer need to open the text file
for reading, read it line-by-line and close it. The MAIN subroutine expects and requires one command-line parameter – if you give two command-line arguments or more it automatically generates the following error message:
$ perl6 readWords.pl readWords.pl readWords.pl
Usage: readWords.pl <file>
Additionally, you can customise the error message by defining a subroutine called USAGE, which will be automatically called in case there's a wrong number of command-line arguments according to the signature of the MAIN subroutine. Please note that you can still access a file using open() and close(). If you want to go even further, you can declare MAIN as multi which allows the declaration of various alternative syntaxes. The following illustrates this:
use v6;
multi MAIN() { print "No command line argument given.\n"; }
multi MAIN($x) { print "One command line argument given.\n"; }
multi MAIN($x, $y) { print "Two command line arguments given.\n"; }
multi MAIN($x, $y, $z) { print "Three command line arguments given.\n"; }
sub USAGE { print "Too many command line arguments given!\n"; }
All arguments are read as strings; you should convert them later into another format. Also, note that a custom error message will be used as defined in the USAGE subroutine. The next line of code demonstrates how you can read an entire file and put it into an array where each array element is a single line of the file:
my @lines = "myFile".IO.lines;
You can easily find the maximum value of any data type that supports ordering with the max built-in function:
say max -10, -10, -15, -2, -12;
say max ["a", "2", "aa", "aaa"];
If you use the say command, you don't need to put a newline character at the end of the command. However, if you use print then you should put a newline character at the end of the string.
New control structures and loops
This is the output of the perl6 --help command, which shows all the available command-line options of the perl6 executable.
The first control structure to learn is the given-when construct, which can elegantly replace a series of if-elsif-else statements. You can see it in action below (givenWhen.pl):
my $continue = 1;
while ( $continue ) {
    # Read a value
    my $input = prompt "Choose between 0 (exit), 1 and 2: ";
    # Parse it
    given $input {
        when "0" { print("Exiting.\n"); $continue = 0; }
        when "1" { print("1 was given!\n"); }
        when "2" { print("2 was given!\n"); }
        default { print("Wrong choice. Please try again!\n"); }
    }
}
Another interesting change is in the for loop, as it's no longer called for but loop:
loop (my $i = -5; $i <= 5; $i++) { print $i ~ " "; }
As you can see from the code above, string concatenation now uses the tilde ( ~ ) instead of a dot. If you try to use the famous for loop in Perl 6 you'll get this error message:
===SORRY!===
Unsupported use of C-style "for (;;)" loop; in Perl 6 please use "loop (;;)"
As you already know, the for loop is now an iterator that allows you to access all elements of an array or a list.
Quick tip What’s important is whether Perl 6 is better than Perl 5 or not. Although it is too soon to tell, Perl 6 looks like a much better and improved version. Perl 5 can do many of the same things but Perl 6 can do them more elegantly and with cleaner code.
About strings It’s now easier and simpler to convert a proper string to its numerical value. As both numbers and strings are objects, the conversion is done using a built-in object method. The base() method takes two arguments: the first is the base of the number and the second, which is optional, defines the number of digits that will be used when dealing with fractions. If the second parameter is omitted, then a default value is chosen which is 0 for
integers and bigger for other types of numbers. The .chr method turns an integer into a single unicode character. Objects of type Str, a built-in class, are immutable. You can define an immutable string as follows:
> my Str $str := "123";
123
> $str = "1234"; # Cannot be changed!
Cannot assign to an immutable value
Please note the use of the := operator. Perl 6 supports binding with the := operator, which means that $str directly points to the Str "123" and therefore you cannot change it any more. As you can understand, the := operator also works with other types of variables:
> my Int $anInt := 123;
123
> $anInt = 32;
Cannot assign to an immutable value.
Quick tip You can find more information about Perl 6 at http:// perl6.org but nothing can replace practice. If you don’t know where to start, begin by implementing simpler versions of existing Unix utilities in Perl 6.
However, there is a tricky issue, as explained below:
my @values = ["1", "2", "3", "4", "5", "6"];
for @values <-> $value { $value = $value ~ " euros"; print $value ~ " "; }
for @values -> $value { $value = $value ~ " euros"; print $value ~ " "; }
You'll notice that if you want to change the iteration variable in a for iteration you should use the <-> symbol. If you use the -> symbol, then the iteration variable will be read only. This change isn't necessarily a bad thing as it can save you from many troubles! The error message you are going to get after running for.pl is the following:
Cannot assign to a read only variable or a value in block at for.pl:7
Last, in Perl 6 the continue block is no longer supported and you should use a NEXT block within the body of the loop instead of the continue block. In Perl 5 you would write this as follows:
next if $line =~ /match/ ;
next if $line !~ /match/ ;
$line =~ s/xyz/123/;
But in Perl 6 you should write the following, respectively:
next if $line ~~ /match/ ;
next if $line !~~ /match/ ;
$line ~~ s/xyz/123/;
Alternatively, you can use the new .match and .subst methods in Perl 6 (which we haven't presented here).
Regular expressions
Perl 6 has support for named regular expressions and grammars. The main benefit of this alternative approach is not functionality, which remains the same, but better readability and fewer bugs, because complex regular expressions in Perl 5 used to be hard to read and difficult to understand. It's now time for an example grammar (regExp.pl) that matches signed and unsigned integers as well as numbers with a decimal point:
#!/usr/bin/env perl6
use v6;
# Define the Grammar
my grammar checkInteger {
    rule TOP { <integer> }
    token sign { <[+-]> }
    token decimal { \d+ }
    regex integer { <sign>? <decimal>+ }
    regex isNumber { <sign>? <decimal>+ "." <decimal>? }
    rule number { <isNumber> }
}
# Use the Grammar
my $input = "123.3";
if checkInteger.parse($input) { say "$input is an integer"; }
else { say "$input is not an integer!" }
if checkInteger.parse($input, :rule<number>) { say "$input is a number!"; }
As you can see, a grammar is now a group of rules. You first define it and then you use it. This might seem a little complex at first but in the long run it will help you write better and less buggy code. When you call .parse() the grammar will try to match the input string against a regex named TOP within the grammar. If no regex named TOP is found, then an error will be generated. As you can understand, TOP is considered the entry point to the grammar. Should you wish to use another entry point, you can do it with the following:
checkInteger.parse($input, :rule<number>)
What the previous command does is parse the input using a specific rule called number instead of TOP. Additionally, just like classes, grammars can inherit and override rules etc. They also allow you to execute other commands while you parse the input. Perl 6 grammars are so powerful that they can even parse an entire programming language, including Perl 6 itself!
Perl 6 differences
The Perl 6 REPL is a great place to experiment and try new things!
The strict mode is now on by default, and the same applies to warnings, which are displayed by default. The functions that were altered by autodie in Perl 5 to throw exceptions on error now throw exceptions by default, unless you test the return value explicitly. Both use base and use parent have been replaced in Perl 6 by the is keyword in the class declaration, as in the following example:
package aPackage; # Perl 5
use base qw(anotherName); # Perl 5
class aPackage is anotherName; # Perl 6
And constant variables are now declared as follows in Perl 6:
Lazy lists and ranges
Lazy lists are a unique characteristic of Perl 6 that might look strange the first time you see it. But before explaining more about lazy infinite lists, let's see some code that defines one that also does some calculations:
my @fib = 0, 1, *+* ... *;
say "Fibonacci number #5 is @fib[4]";
Lazy lists are like arrays with a few major differences. First, they don't necessarily have a predefined size; they can even be infinite. Second, they don't calculate their values in advance but on a need-to-calculate basis. Finally, once they calculate a value, the value can be stored for fast lookup. It should be made clear that infinite lists are supported because of their laziness property. The opposite of infinite lists are eager lists, which are like C arrays. Perl 6 supports both eager and infinite lists. However, as lazy lists are more memory efficient, Perl 6 tries to use lazy lists when possible. Ranges, which are lazy by default, are also supported by Perl 6. The following code defines a finite and an infinite range:
# Finite List
my @fList = 1..20000;
# Infinite List
my @iList = 1..Inf;
constant $VARIABLENAME = 0;
In Perl 5 you would have declared the constant variable in a slightly different way:
use constant VARIABLENAME => 0;
Additionally, pi, e and i are built-in constants in Perl 6 and no longer need to be defined as constants. You can now define the type of values that a variable can store. This can be done by adding the type name to the declaration of the variable, as the next example shows:
my Int $i = 3;
my Numeric $a = 2.3;
The Numeric role defines a number or an object that can act as a number, and this includes integers (Int), rationals (Rat) and floating point numbers (Num). Trying to put the wrong type of value into a variable will raise an error (as you can see, right). You can find the type of a variable by using the .WHAT method. Similarly, you can check if something is of a specific type as follows:
> $x = 123;
> if $x.WHAT === Int {say "It is an Integer!";}
> say "321".WHAT
(Str)
> if $x.isa(Int) {say "It is an Integer!";}
It is an Integer!
> if !($x.isa(Str)) { say "It is not a String!"; }
It is not a String!
>
Please note that when you compare the return value of the WHAT method you should use the === operator. Both the WHAT() and isa() methods are very handy for checking whether you have the kind of object you want or how to process an object according to its type.
Compatibility with older Perl
From what you have seen so far, it should be obvious that existing Perl 5 code will need some changes in order to work with the Perl 6 compiler. As far as regular expressions are concerned, if you have a complex Perl 5 regular expression that you want to use without changes in Perl 6, you can use the P5 modifier as in the next example:
# Perl 5 code
next if $line =~ m/[abc]/ ;
# Perl 6 code that uses the P5 modifier
next if $line ~~ m:P5/[abc]/ ;
# New Perl 6 code
next if $line ~~ m/ <[abc]> / ;
As you have already seen, the for loop is now only used to iterate over lists, so you will need to change your for loops to use them in Perl 6. Subroutines are also now defined using the sub keyword, and parameters in subroutines are read-only by default. The only way to change them is to mark them with the is rw trait, as the following example shows:
$ cat subs.pl
#!/usr/bin/env perl6
use v6;
sub changeMe($var is rw) { $var++; return $var; }
sub cannotBeChanged( $var ) { $var = 2; }
my $myVar = 12;
$myVar = changeMe($myVar);
say $myVar;
cannotBeChanged($myVar);
$ ./subs.pl
13
Cannot assign to a readonly variable or a value in sub cannotBeChanged at ./subs.pl:4 in block at ./subs.pl:8
Perl 6 subroutines also support slurpy parameters, which are used when you don't know in advance the exact number of parameters a subroutine will get:
#!/usr/bin/env perl6
use v6;
sub unknow($first, $second, *@remaining) {
    print "First = $first, Second = $second\n";
    say "Remaining parameters: @remaining[]";
}
unknow(1, 2, 3, 4.1, 5, 6, 7, "eight", ["a", "l", "i", "s", "t"]);
If you're not sure about a command or a function, you can always try it in the REPL and see if it works. (See an example pictured on bottom, p86.) As you can also see from the last command given, a lazy list doesn't return inside REPL and you should terminate it manually. Hopefully by now you'll see that Perl 6 supports more programming paradigms and has helpful and more informative error messages and warnings than Perl 5. Perl 6 will be the dominant version very soon, so it's worth learning more about the language and to start using it in your new projects is a no-brainer. We'd suggest, however, not to use it on the first big project that might come up, but begin with smaller ones first. LXF
If you define the type of a value, you cannot change its value afterwards. This is a great technique for reducing silly software bugs.
Python: Sunfish chess engine Jonni Bidwell analyses the innards of a small but perfectly formed chess engine that bests him with alarming regularity.
Our expert Jonni Bidwell
is rumoured to be a mechanical Turk, which would explain the rat-a-tat of gears as he produces words in exchange for bread and beer.
Legend tells of one Sissa ibn Dahir who invented the game of Chess for an Indian king. So impressed was that king that he offered Sissa anything he desired as a reward. Being of a calculating bent, Sissa replied "Then I wish that one grain of wheat shall be put on the first square of the chessboard, two on the second, and that the number of grains shall be doubled until the last square is reached:
Unicode generously provides chess piece icons which enhance the experience of playing from the terminal.
whatever the quantity this might be, I desire to receive it". The king soon realised that there was not enough wheat in the world to fulfil this demand, and once again was impressed. There are various endings to this story: in one Sissa is given a position within the king's court, in another he is executed for being a smart arse. Hopefully this tutorial's chess treatment will feature neither execution nor LXF towers being buried in mountains of wheat. Chess is a complicated game – all the pieces move differently depending on their circumstances, there are various extraordinary moves (eg en passant pawn capture, castling) and pawns get promoted if they make it all the way to the other side. As a result, a surfeit of pitfalls present themselves to the chess-programming dilettante, so rather than spending a whole tutorial falling into traps we're going to borrow the code from Thomas Ahle's Sunfish – a complete chess engine programmed in Python. There's no shortage of chess engines: from the classic GNU Chess to the Kasparov-beating Deep Blue (1997) to the pack-leading Stockfish. Chess engines on their own generally do not come with their own GUI, their code being mostly devoted to the not inconsiderable problem of finding the best move for a given position. Some (including Sunfish) allow you to play via a text console, but most will talk to an external GUI, such as xboard, via a protocol such as the Universal Chess Interface (UCI) or WinBoard. Besides providing nice pictures of the action, this enables us to play against different engines from a single program. Furthermore, we can pit engine against engine and enjoy chess as a spectator sport.
The Sunfish engine
We'll assume that you know how to play chess, but if you don't you can practice by playing against Thomas's Sunfish engine. You'll find the code on the LXFDVD in the Tutorials/Chess directory. Copy this directory to your home folder, and then run it with:
$ cd ~/Chess
$ python sunfish.py
The program uses Unicode glyphs to display the pieces in the terminal, making it look a little more chess-like than GNU Chess. Check the box (see Installing Xboard and Interfacing with Sunfish) to see how to enjoy graphical play. Moves are inputted by specifying the starting and ending coordinates, so the aggressive opening which moves the king's pawn to e4 would be inputted e2e4. Note that this is slightly longer than the more common algebraic notation (in which the previous move would be written e4), but makes it much easier for
The Mechanical Turk and other chess-playing machines In 1770 Baron Wolfgang von Kempelen wowed the Viennese court with ‘The Turk', a clockwork automaton sat before a chessboard. Kempelen claimed that his invention would best any human chess player. Indeed, the Baron and the Turk travelled around Europe and wowed onlookers with the latter’s prodigious talent. The Turk was a hoax, and its talent actually belonged to the poor person hiding under the table. However, it inspired people to think more about chess playing machines, and in 1950 Shannon and Turing both published papers on the subject. By the 1960s computers were
playing reasonable chess: John McCarthy (dubbed the father of AI) and Alan Kotok at MIT developed a program that would best most beginners. This program, running on an IBM 7090, played a correspondence match via telegraph against an M-2 machine run by Alexander Kronrod's team at ITEP in Moscow. This was the first machine versus machine match in history, and the Soviets won 3-1. Their program evolved into KAISSA, after the goddess of chess, which became the computer chess champion in 1974. By the early 80s the chess community began to speculate that sooner or later a computer would defeat a world champion. Indeed, in 1988 IBM's Deep Thought shared first place at the US Open, though reigning world champion Garry Kasparov resoundingly defeated it the following year. In 1996 Deep Blue stunned the world by winning its first game against Kasparov, although the reigning world champion went on to win the match 4-2. The machine was upgraded and succeeded in beating Kasparov the following year, though not without controversy. Since then computers regularly beat their inferior meatbag competition, although their prowess is driven by algorithmic advances.
machines to understand what you mean. If you wish to castle then just specify that you want to move your king two places sideways, the machine knows the rules and will move the relevant rook as well, provided that castling is a legal move at that stage in the game. Depending on your skills you will win, lose or draw. In LXF203 we used PyGame to implement the ancient board game Gomoku. For this tutorial we’ll see a slightly different approach. Have a look at the sunfish.py code: the shebang directive in the first line specifies that sunfish.py should be run with the Pypy compiler, rather than the standard Python interpreter. Installation of Pypy is trivial and will improve Sunfish’s search-performance drastically, but for our purposes it will be fine to proceed without it. We import the print_function syntax for backwards compatibility with Python 2, as well as the needed parts of other modules. Then we initialise three global variables, which we needn’t worry about here.
Chairman of the board
Now we begin to describe our chessboard. Its starting state is stored as a 120-character string, initial, which may seem a little odd, especially if you remember how nice it was to store the Gomoku board as a two-dimensional list. Be that as it may, this representation turns out to be much more efficient. Before defining initial we specify what will be the indices of the corner squares using the standard layout, so A1 is the lower-left corner and A8 the upper-left, and so on. We divide the string into rows of 10 characters, remembering that the newline \n counts as a single character. The actual board starts on the third row, where we represent black's major pieces with the standard lowercase abbreviations, which we'll list below for completeness:
p: Pawn
r: Rook
n: Knight
b: Bishop
q: Queen
k: King
We have characters padding the beginning (a space) and the end ( \n ) of each row so we know that moving a piece one square vertically will involve adding or subtracting 10 from its index in the string. Dually, moving one square along the horizontal axis will be done by adding or subtracting 1, and we know that if the resulting index ends with a 0 (ie is 0
modulo 10) or 9 (ie is 9 modulo 10) then that position is not on the board. The vertical ranks 1-9 can also be read directly from the second digit of the index, and the horizontal rows can be translated linearly from the first. Empty spaces on the board are represented by periods ( . ) to avoid confusion with the empty squares represented by spaces. Using the numerology (above) we describe unit movements in the compass directions with appropriately named variables, and then define the possible movements of each piece in the dictionary directions. Note that we only define the movements for white's material here (ie pawns go north); their opponents can be figured by a simple transposition. Note also that we describe all the possible directions they can move, even though this may not be permitted by the current position (eg pawns can only move diagonally when they are taking and can only move two squares on their first move). We don't take account of major pieces moving two or more squares in a straight line ('sliding') here, rather dealing with that instead in the move generation loop. Next, we define a lengthy dictionary pst. In a sense this is the data bank of the engine: it assigns a value to each piece for a given position on the board, so, eg, knights ( N ) tend to be more useful towards the centre of the board, whereas the queen is valuable anywhere. The king's values are
Quick tip Sunfish used to be limited by the lack of a quiescence search. This meant that moves at the depth limit were not analysed, which can lead to so-called horizon effects, in that the engine can’t see past blunders here. Thanks to a simple check, moves at this limit are analysed to ensure they result in quiescent positions.
This is how every chess game starts, but after just four moves we could be in one of nearly 320 million different positions.
disproportionately high so that the machine knows it can never be sacrificed. Now we move on to the chess logic section and subclass the namedtuple construction to describe a given chess position. Using this datatype enables us to have a tuple (a fixed-length list) with named keys rather than numerical indices. We store the current board arrangement together with the evaluation score for that position. Then we have four extra elements to take care of the exceptional moves – castling and en passant pawn capture. The gen_moves function iterates over each square on the board and every possible move for each piece on the board. The loop is commenced as follows:
for i, p in enumerate(self.board):
    if not p.isupper(): continue
    for d in directions[p]:
        for j in count(i+d, d):
            q = self.board[j]
            if self.board[j].isspace(): break
The enumerate() function (line 147) is a vital weapon in any Pythonista's arsenal, as it generates index-value pairs for a given list (or string in our case), useful when we are interested in list items' positions as well as their content. Because of the symmetry involved we only consider the moves of white's pieces, so we bail out if the relevant piece p in the string is not uppercase (line 148). Fortuitously, the .isupper method also returns False for spaces and periods, so empty squares are efficiently thrown away early on in the proceedings. The rotate() function transposes the colours when it's black's turn. We look at all possible directions that the piece can move and then (line 150) extend these moves to account for those pieces allowed
This is from a game Paul Morphy (white) played against the Duke of Brunswick and Count Isouard in 1858. It's a so-called zugzwang for black (to move) – most moves are detrimental.
to slide (ie Rooks, Bishops and Queens). Finally, we discard any move that takes us off the board. The next part of the function checks if castling is possible:
if i == A1 and q == 'K' and self.wc[0]: yield (j, j-2)
if i == H1 and q == 'K' and self.wc[1]: yield (j, j+2)
if q.isupper(): break
Castling rights for both rooks are stored as booleans in the list wc, which is part of our Position object. If we are considering either of white's corner squares and if white still has castling rights (so those squares are certainly occupied by rooks) then we yield the move which moves the king two spaces left or right. Our gen_moves() is an example of a generator function – it yields results which can be used in for or while loops. In our case, we generate a pair of indices – the piece's position before and after the move. Finally, we break if the destination square is occupied by one of white's pieces, since friendly captures aren't allowed.
A pawn in the game
Next we consider matters peculiar to pawns:
if p == 'P' and d in (N+W, N+E) and q == '.' and j not in (self.ep, self.kp): break
if p == 'P' and d in (N, 2*N) and q != '.': break
if p == 'P' and d == 2*N and (i < A1+N or self.board[i+N] != '.'): break
First, pawns cannot move diagonally into an empty square, unless an en passant capture can take place. Next, they can only move forwards (one or two squares; we check if the latter is allowed in the next line). The i < A1 + N comparison will return true for any pawn that has moved beyond the second row. Two-square rights are likewise denied if there's a piece in front of the pawn. The closing stanza of gen_moves() reads:
yield (i, j)
if p in ('P', 'N', 'K'): break
if q.islower(): break
Having got all the constraints, we can now pass on the move under consideration; it may yet turn out not to be valid (eg if it doesn't alleviate a check situation) but it's passed the first level of filtration. Pawns, knights and kings aren't allowed to slide, and those pieces that are have to stop doing so if they capture a piece (ie land on a lowercase entry in board). We've already discussed the rotate() function. But impressively the board can be transposed (which results in the equivalent game with the colours switched and the board rotated 180 degrees) just by reversing the board string and switching cases. We must take care of the other parts of our
Installing Xboard and interfacing with Sunfish There are a number of good chess graphical user interfaces for Linux, we’re using Xboard as it’s fairly ubiquitous amongst common distro repos, but be sure to check out PyChess as well. Installation will just be a matter of: $ sudo apt-get install xboard Now fire up Xboard and select Engine >Load New 1st Engine. Enter Sunfish in the Nickname field, for the Engine Directory use /home/user/ Chess (replacing user with your username –
Xboard doesn’t seem to understand the ~ shorthand) and for the Engine Command use python /home/user/Chess/xboard.py . Leave all the other settings as they are and select ‘OK’. The default XBoard setup gives the human player white pieces and if all has gone well the window title should now read ‘Sunfish’. Now you have your formidable opponent. If you click one of your chess pieces then Xboard generously shows you where the piece
can move to, which is not just useful for players who are starting out. GNU Chess will be equally as trivial to install, and Xboard already includes an Engine List entry for it. You can load GNU Chess as the second engine, and then select Two Machines from the Mode menu. Battle ought immediately to commence, with GNU Chess even showing you some of its crazy thought process in the status bar.
object though. Specifically, we need the negative of the position's score, since we are still in opposition to the previous player, even though we're pretending to have adopted their colour. Castling rights are already separated, so they need no further treatment. The en passant positions are easily figured out by counting backwards from the end of the board. The function move() deals with actually moving the pieces, when that time comes. We first get our beginning and end positions i and j and the occupants of those squares p and q. Line 178 defines a shorthand (lambda) function which replaces the piece at position i with the piece we are moving. We get and reset required class variables, to save us from typing self many times, and update the score by calling the valuation function. In line 184 we place the piece in its new position using our lambda function and then remove the piece from its original position with a further call. Beginning at line 187, we update the castling rights: if a rook is moved then castling on that side is no longer allowed; the value for the other, stationary, rook is preserved. Castling itself is instigated by the king:
if p == 'K':
    wc = (False, False)
    if abs(j-i) == 2:
        kp = (i+j)//2
        board = put(board, A1 if j < i else H1, '.')
        board = put(board, kp, 'R')
Once he has been moved, castling rights are cancelled, regardless of whether the player intends to castle or not. If they do then they are moving the king two places sideways, with the rook on that side ending up on the square horizontally adjacent to him. This is calculated by rounding down the midpoint of positions i and j. We use some more shorthand to delete the rook's old position and the final line puts it in its new one.
Manipulating the pawns
Next we deal with pawns. Sunfish doesn't do minor promotion, ie pawns are only promoted to queens if they reach row 8 (line 201). If a pawn moves two squares then we keep track of the square behind, in case an en passant capture is possible. If the pawn makes an en passant capture then the appropriate square is obliterated. We return a new Position object, remembering to rotate it to account for the next player's point of view. The valuation function value() calculates the relative value of a given move. We start by calculating the difference between the piece's value at its position before and after the move. If the player has captured a piece then that piece's value at its capture position is added. If castling results in the king being checked, then the value will go sky high (precluding that move). If castling did take place, then the value needs to be adjusted according to the rook's new position. Finally, we account for pawn promotion and en passant capture. We'll give an overview of the search logic section at the end, but it's worth having a brief look at the user interface section (which starts at line 338). The parse() function converts from a two-digit co-ordinate string (such as a4) to the relevant list index (61 in this case). The render() function does the opposite. The print_pos() function nicely prints the board, complete with Unicode characters in lieu of actual graphics and labelled axes for the ranks and files. The final function main() sets up the initial layout and defines the main game loop. Each iteration starts by displaying the board and asking for a move. We use the
Deep Blue v Kasparov (1997 - Round 2). Kasparov resigned after the machine shocked him with this move, throwing his performance for the rest of the match. It turns out he could have forced a draw from here, d’oh.
regular expression ‘([a-h][1-8])'*2 (line 369) to check that the move is of the correct form, ie a pair of co-ordinates. If it is, and that move is legal for the current position (it’s generated by pos.gen_moves() ) then we proceed, otherwise we ask again. Then we reverse the board for the computers turn and use the engine’s search function to find the best move. If this move results in a checkmate, or fails to resolve a checkmate then the game is done, otherwise the move is printed and the board updated. The code we’ve discussed so far can easily be adapted to a two-player game like we saw in the Gomoku tutorial. However, what is much more interesting is the code that figures out the machine’s next move. Amazingly, the engine itself (ignoring the lengthy pst dictionary and all the code we’ve already covered) occupies less than a hundred lines. Sunfish is based on the MTD(f) minimax search algorithm introduced in 1994, adapted to use binary trees. MTD uses so-called alpha-beta pruning for evaluating the game tree, so we build up a tree of possible moves and discard any which we can show lead to positions that are provably worse off than others we have evaluated. A technique called iterative deepening is used to temper the depth of the search, so that we don’t go too far down one particular rabbit hole. Calculating a move begins with a call to the search() function. We limit both the depth (line 305) and the breadth (initially using the NODES_SEARCHED variable) of the search to stop things getting out of control. The real magic happens in the bound() function. The move tree is stored in the ordered dictionary tp , which is indexed by our position string pos and so previously calculated positions can be looked up efficiently. When we come to analysing all possible moves (line 270), we sort the generated positions in reverse order by their value, which ensures copacetic positions get the attention they deserve. We use bound() recursively to construct a game tree from each possible move, adding appropriate moves to tp (line 293). Alas, the end of the page approaches so here is where we sign off. You’ll find some helpful comments to aid your understanding of the engine code, so why not experiment by tweaking parameters and seeing what breaks? If you want to learn more about the intricacies of programming chess, make sure and check out the Chess Programming wiki (https://chessprogramming.wikispaces.com), it will prove a valuable resource. LXF
Got a question about open source? Whatever your level, email it to [email protected] for a solution.
This month we answer questions on:
1 Blank boot screen issues
2 Installing to RAID
3 OpenVPN
4 Multiple monitors
5 Sending to root files
6 Virus risks using Linux
+ How to boot Windows 10 from Grub
1
Blank screen of nothing
I’m having trouble with Ubuntu 15.04 on LXF198. It boots to a blank screen on my Dell Dimension E521 with Nvidia adaptor. I don’t see a safe boot option. Is there a work around? If I can get the DVD to boot, how can I make sure the same doesn’t happen after I install Ubuntu? Wandersman57 A blank screen during boot is often caused by kernel modesetting not getting on with certain combinations of Nvidia (and Radeon) graphics cards and
driver combinations. Kernel modesetting is where the kernel determines the best resolution to use for the console, as X does for the graphical desktop. Occasionally it gets it wrong and picks a mode that the monitor can't handle. To avoid this, when you get to the boot menu option 'Try Ubuntu…', press e to edit the options. Remove "quiet splash" from the kernel options and substitute them with "nomodeset", before pressing Ctrl+x or F10 to continue the boot. That turns off modesetting and boots with a standard 640x480 text console. Note: this only affects the console used for booting; it has no bearing on the resolution of the graphical desktop. If the same problem occurs after booting, hold down the Shift key when you reboot to bring up the Ubuntu boot menu; it's usually hidden from sight. Then you can perform the same trick to add nomodeset to the kernel options. Once you've booted it's fairly simple to make this the default. You need to edit the file /etc/default/grub as root and change the options in the line marked GRUB_CMDLINE_LINUX_DEFAULT. Then you need to run $ sudo update-grub to rebuild the boot menu with the new options. Doing it this way, rather than editing the menu file directly, means that your changes will be applied to any new menu entries that may be created for kernel updates. You can leave "quiet splash" in there but the splash screen won't be too exciting at 640x480. You can change it by setting GFX_MODE in /etc/default/grub to what you want to use and remove the # at the start of the line to enable it. You can only use modes that your display and BIOS support. After changing this, run update-grub again. There is a graphical program for changing Grub settings and options, called Grub Customizer. However, it's not in the standard Ubuntu repositories (repos) and by the time you have added a PPA and installed it, you could have edited the config file several times over. If you want to try it out you can get it from https://launchpad.net/grub-customizer.
If you don't want to delve into GRUB's configuration file, you can install Grub Customizer.
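To make the change permanent, the relevant line in /etc/default/grub ends up looking something like the sketch below (keep or drop quiet splash as you prefer), after which update-grub regenerates the menu:
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"
$ sudo update-grub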
Win! Enter our competition
Linux Format is proud to produce the biggest and best magazine that we can. A rough word count of LXF193 showed it had 55,242 words. That's a few thousand more than Animal Farm and Kafka's The Metamorphosis combined, but with way more Linux, coding and free software (but hopefully less bugs). That's as much as the competition, and as for the best, well… that's a subjective claim, but we do sell way more copies than any other Linux mag in the UK. As we like giving things to our readers, each issue the Star Question will win a copy of one of our amazing Guru Guides or Made Simple books – discover the full range at: http://bit.ly/LXFspecials. For a chance to win, email a question to [email protected], or post it at www.linuxformat.com/forums to seek help from our very lively community. See page 94 for our star question.
2
Removable RAID?
I have been trying to make a new installation of OpenSUSE 13.2 on an IBM X3400 server using the first drive which is a RAID 1 array. For reasons I don’t understand the downloaded install
Answers Terminals and superusers We often give a solution as commands to type in a terminal. While it is usually possible to do the same with a distro’s graphical tools, the differences between these mean that such solutions are very specific. The terminal commands are more flexible and, most importantly, can be used with all distributions. System configuration commands often have to be run as the superuser, often called root. There are two main ways of doing this, depending on your distro. Many, especially Ubuntu and its derivatives, prefix the command with sudo , which asks for the user password and sets up root privileges for the duration of the command only. Other distros use su , which requires the root password and gives full root access until you type logout. If your distro uses su , run this once and then run any given commands without the preceding sudo .
DVD failed at the point where the installation screen should appear. I then tried a Live installation from LXF203. This booted and ran correctly so I then ran the installation from the live DVD version. This completed and I have updated as usual, so all is well except that Dolphin doesn't show the hard drive on which the system is installed as primary, as is normal, but as a removable device. It even behaves like this: if I right-click I get an option to safely remove the drive. Needless to say I have not tried this. What I would like to do is change the configuration to correct this anomaly. Is this possible? Where are the properties of the drive stored or are they in the kernel? Budgie Normally devices tell the OS whether they are removable or not, and you can check this by reading a file in /sys. For /dev/sda that file is /sys/block/sda/
removable and it contains a 1 if the device is removable and 0 if it isn’t. However, the situation is slightly different here. The IBM RAID controller uses the aacraid driver in the kernel and it’s that driver that marks the drive as removable. This was a design decision taken in order to avoid possible partition corruption. This means that, unless you want to patch the source for aacraid and recompile the kernel, you will have to live with this somewhat odd behaviour. There is no configuration option to change, you cannot simply write a 0 to the file in /sys, that particular file is read-only. This is really a cosmetic problem, there may be a ‘Safely remove’ option but it will not do anything because the drive is still mounted. The installation DVD’s failure may be connected, although unlikely. If you had booted without the splash screen, you may have seen the reason for the failure, but being unable to mount your disk may be the cause. It may even have failed because it appeared your computer had no fixed disks, but you would expect an error message in that case.
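Incidentally, you can see at a glance which of your drives the kernel flags as removable by reading the /sys/block/*/removable files mentioned in the answer to Budgie's question. A minimal sketch that works on any distro:
for f in /sys/block/*/removable; do
  echo "$f: $(cat "$f")"
done
A 1 against a device such as sda indicates the same behaviour as the aacraid array above.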
3
OpenVPN to ClosedVPN
I set up a Virtual Private Network (VPN) connection when I was using Ubuntu 14.04 LTS and when I did a clean installation of 14.10, and later 15.04, I was able to recreate the VPN connection. I recently did a clean installation of 15.10 and recreated the VPN connection using the same parameters as before. This time, however, it will not run and gives the error message: '…failed because the connection attempt timed out'. How can I find out why the connection attempt is timing out? Is there a way of tracking what happens
after I click on the VPN connection in the Network Manager icon?
Chris
The first step is to carefully check the details you set up. More often than not this type of problem is caused by a minor error that you don't notice despite repeated views of the settings. The way to avoid this in future is to export your connection before upgrading, save it to a removable drive and then import it into your new system (or do an upgrade installation instead of a wipe and reinstall each time).
As far as seeing what is happening, the really useful information is on the server. If you have access to that, you can look at the logs to see why the connection is being rejected. It's normal not to give a reason when rejecting a client connection – you don't want to give clues to someone who is trying to break into your network. If you don't have access, you may be able to ask someone who does, eg the sysadmin if you are trying to connect to a network at work. The problem is unlikely to be something as simple as an incorrect password or certificate, as that would fail immediately rather than time out, so it could be something else at your end, such as failing to unlock the certificate with your passphrase.
Any local problems should appear in the system logs. Ubuntu 15.10 uses Systemd, so all of this will be in the journal. You can view relevant entries in the journal in real time by running this command:
$ journalctl --lines=0 --follow /usr/sbin/NetworkManager
Then try to set up and establish the connection. All new messages from Network Manager will appear in the terminal, hopefully giving you a clue as to the cause. On some distros you may need to prefix this command with sudo, but Ubuntu allows access to the system journal by the admin user. You can also see all messages logged by Network Manager since the computer booted with:
$ journalctl --boot /usr/sbin/NetworkManager
Instead of recreating all your network and other settings from scratch, export them before reinstalling, or use a separate home partition.
A quick reference to… RAMming speed
Processors these days are fast, really fast. Unless you are gaming, rendering video or compiling large programs, you are unlikely to use more than a fraction of your available horsepower. More time is lost on I/O these days, transferring large amounts of data to and from storage devices. One way of greatly speeding up this process is to switch from spinning disks to SSDs, which are much faster. However, they are also far more expensive, especially if you have a lot of data. There is often a cheaper, although not as effective, method of improving the situation by adding more memory.
If you have looked at a system monitor, or the output from free, you may have wondered why Linux uses so much of your memory and why the memory consumption increases steadily as you use it, until you have very little free. The reason is that Linux uses memory that would otherwise be sitting idle as disk caches. When you save a file to your hard disk, it's actually saved to the disk cache and then written to the disk in the background as soon as the system load permits it. You can see this most clearly when writing to a USB flash drive with an LED. After the GUI appears to complete the copy, the LED on the drive may flash for some time as data is actually written to it. This gives you a more responsive system, but also explains why it's important to shut down properly and unmount all filesystems to ensure everything in the caches is flushed to the disks.
Reading benefits similarly, with files held in RAM for faster reading. All of this happens in the background, you don't need to do anything, but you can see how effective it is with the free command:
$ free -h
              total   used   free   shared   buff/cache   available
Mem:           7.7G   1.6G   1.0G     245M         5.1G        5.8G
Swap:          8.0G    52M   7.9G
Here you can see that 1.6GB of my 8GB RAM is in use by programs, but another 5.1GB is being used for file caches. Clearly, if I had 4GB in this computer, less would be cached and performance could suffer. The upshot of all of this is that if you want to improve the performance of your computer in the most cost-effective way, adding extra RAM is often the best solution.
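You can watch the cache at work for yourself: run free, ask the kernel to drop its clean caches as root (harmless, although disk access will be a little slower while they refill), then compare the buff/cache column afterwards. A quick sketch, using the same tee-as-root trick covered elsewhere on these pages:
$ free -h
$ sync
$ echo 3 | sudo tee /proc/sys/vm/drop_caches
$ free -h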
4
3 monitors, 1 card
I would like to have three monitors using the same graphics card. I have researched this topic for some time now but I don't seem to be able to find an answer. I don't play games and I don't need any 3D support. I use my computer only for development. My motherboard is an Asus P8H77-M/Pro, which has three PCIe expansion slots: 1x PCIe 3.0/2.0 x16 (blue), 1x PCIe 2.0 x16 (x4 mode, black) and 1x PCIe 2.0 x1. I use Linux Mint and Xfce – if that has anything to do with anything. I prefer HDMI output but I think DVI output is also sufficient, or a combination of these two.
Antti-Pekka Meronen
While many graphics cards have three or more outputs, they don't all let you use all three at once. Your motherboard is an example of this: it has DisplayPort, HDMI, DVI-D and VGA. Forget about the VGA as it will not output at the higher resolutions the graphics chip is capable of, but you can use any two of the other three. That leaves you with two choices. You can buy a high-end card with support for three or more monitors and drive them all from that card. Alternatively, you can buy a less expensive card that provides two outputs and connect two monitors to the motherboard and one to the card (or vice versa). As you only need plenty of screen space and not maximum performance, a high-end card would seem to be an unnecessary expense.
As far as the connectors are concerned, the three digital connections are largely equivalent, so it doesn't matter which you use. You can buy adaptors to convert between the various layouts or you can get cables that have different connectors on each end. A number of us here use the latter approach on a dual monitor setup, with one pure HDMI cable and one DVI to HDMI cable. One thing to bear in mind when using HDMI is that it can also carry audio. As a result, connecting an HDMI cable to your motherboard will usually route audio over that cable and not through the speaker ports. If you want to continue to use separate speakers, you will probably need to change your mixer settings to direct the audio through the separate outputs.
Star Question Winner!
Wrong Windows
This month's winner is John Warburton. Get in touch with us to claim your glittering prize!
One of my computers dual-boots Ubuntu and Windows 7 (my wife has been using it and hasn't noticed). I tried the free upgrade but on reboot it's still looking for Windows 7, not 10. How can I restore the MBR so it can't see Linux? I can then upgrade and restore Grub.
John Warburton
There is no need to restore the MBR; that will make your Linux installation unbootable. You can continue to dual-boot with Windows 10. You are currently using Grub to boot, but it will not know that you have switched to Windows 10, so you need to tell it. Boot into Ubuntu, open a terminal and run
$ sudo update-grub
This will scan your system for installed operating systems and pick up on Windows 10, so now you should see it in the boot menu. You may run into issues booting Windows 10 without UEFI, while Grub has no problems either way. As a result you may need to turn UEFI back on, but you will need to leave Secure Boot disabled in order to boot Ubuntu.
It appears that you want this computer to default to booting to Windows, which is your choice we suppose, in which case you may need to change the setting for GRUB_DEFAULT in /etc/default/grub. This specifies which of the menu options Grub uses as the default and can be either the number of the entry – counted from zero – or the title as shown in the boot menu. So if Windows 10 is the third option in the menu, you could use either of:
GRUB_DEFAULT=2
GRUB_DEFAULT='Windows 10'
After making any changes to /etc/default/grub, you need to run update-grub again to apply them to the menu.
5
Strange sudo behaviour
I am trying to change my laptop's backlight brightness from the command line (I want to do it from a script eventually) like this:
$ sudo echo 500 >/sys/class/backlight/intel_backlight/brightness
When I try this I get a 'permission denied' error, but if I use su instead of using sudo it works:
$ su
$ echo 500 >/sys/class/backlight/intel_backlight/brightness
$ exit
I don't want to use su in my script, because that means it will ask for the root password, so why does one way work but not the other?
Jason Brown
It's all about how the shell parses the commands that you give it. When you use redirection, the shell runs the command and then redirects its output, and all of that is done as the user running the shell. So first the script runs $ sudo echo 500 – this is run as your user but it calls the sudo command to get elevated privileges. Then it redirects the output, but this part is also running as your user, which is why you get the permission error message. In effect, you are running the wrong half of the command with root privileges. When you used su, you opened a subshell running as root so permissions were never involved.
So how do you send text to a file as root from a user terminal? The tee command sends its input to standard output and to a named file, and you can run it with sudo to deal with the permissions, like this:
$ echo 500 | sudo tee /sys/class/backlight/intel_backlight/brightness
Note that tee overwrites any contents the file may already have, as with > . That is what you want here, but if you want to add to a file, as with >> , you must use tee -a. You really only want the output to go to the file and not be repeated in the terminal, so you can redirect that to /dev/null:
$ echo 500 | sudo tee /sys/class/backlight/intel_backlight/brightness >/dev/null
Now you are using sudo and not su, there is no need to use the root password. If you want, you can go a little further and enable your user to run this command without using their password, which is probably wise if you want to script it, by adding this line to /etc/sudoers:
USERNAME ALL = NOPASSWD: /usr/bin/tee /sys/class/backlight/intel_backlight/brightness
Replace USERNAME with your user and you will not be asked for the password when running this particular command. You must not edit /etc/sudoers directly but use:
$ sudo visudo
Otherwise you run the risk of locking yourself out of the system entirely with a typo.
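Putting those pieces together, the eventual script can be tiny. This is only a sketch, assuming the same intel_backlight path as above and the sudoers rule already described; the directory under /sys/class/backlight and the maximum brightness value vary between laptops:
#!/bin/sh
# set-backlight.sh – write the requested brightness, defaulting to 500
FILE=/sys/class/backlight/intel_backlight/brightness
echo "${1:-500}" | sudo tee "$FILE" >/dev/null
Save it, make it executable with chmod +x, then run it as, say, ./set-backlight.sh 300.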
6
Virus risks
With the increase in the number of hackers getting their teeth into Linux, not to mention that the London Stock Exchange and Virgin Bank have both dropped Windows like a hot potato, and with the advent of sneakier viruses such as 'Hand of Thief' and 'Turla', do you have any ideas on how we can best scan for these pests and rid the system of them in the event of our computers becoming infected?
Helgi
While ClamAV can be run from the command line or automatically from Cron, ClamTk provides a graphical interface.
Although it may not seem it with the recent media coverage, viruses are uncommon on Linux at the moment. There have been some proof of concept viruses, written to show that it can be done rather than being aimed at causing any actual damage. There are a number of reasons for this, not least of which is that Linux appeals to the more tech-savvy users in the main. While installing Linux is not a difficult task these days, it does provide a certain level of entry barrier to newcomers.
The repository (repos) system is also a great help in protecting our systems. By only installing software from recognised sources, verified against known GPG keys, we greatly reduce the chances of picking up any malware. Software from a project's own website is often provided as source code too, with nowhere for a virus to hide. Incidentally, software packages on the Linux Format DVDs are scanned for viruses.
Another factor is the diversity of Linux systems. With Windows you basically have a choice of targeting Windows 7 or Windows 10, and the various sub-versions differ mainly in the amount of software that's included. Linux distros are all different, with varying package choices, packages compiled with different settings on each distro and a multitude of software permutations. There's enough variety when you consider standard desktop distros, but the organisations you mention use specialised forms of Linux with little chance of being infected by a virus written to attack Ubuntu desktops.
The Linux security model also helps, but not as much as many claim. It's still possible to do a lot without root permissions, so a virus or trojan installed by a user can still do damage. This happened some years ago when a screensaver package uploaded to gnome-look.org included malware to be used in denial of service attacks. As it was supplied as a Deb file, any user would have been prompted for a password to install it with root privileges, but such things can be installed from a tarball into a user's home directory without any privileges.
Linux is resistant to widespread virus infection, but not immune, so precautions should be taken. The obvious precaution is not to install from untrusted sources. The next is to regularly scan your system for viruses and rootkits. The two programs we'd suggest using are ClamAV for viruses and Rootkit Hunter for other malware. These are only truly reliable if installed and first used on a system you know to be clean, ideally a fresh install. ClamAV also scans for Windows viruses, which is important if you share files with Windows users. If you run a mail server, ClamAV can check attachments for infection; it can also work with normal mail clients to make sure mails you forward to others are not infected. Rootkit Hunter is a command-line program that should be run regularly from Cron. ClamAV can be run the same way, but it also has a GUI called ClamTk for a more interactive check. LXF
Help us to help you
We receive several questions each month that we are unable to answer, because they give insufficient detail about the problem. In order to give the best answers to your questions, we need to know as much as possible. If you get an error message, please tell us the exact message and precisely what you did to invoke it. If you have a hardware problem, let us know about the hardware. If Linux is already running, you can use the Hardinfo program (http://hardinfo.berlios.de) that gives a full report on your hardware and system as an HTML file you can send us. Alternatively, the output from lshw is just as useful (http://ezix.org/project/wiki/HardwareLiSter). One or both of these should be in your distro's repositories. If you are unwilling, or unable, to install these, run the following commands in a root terminal and attach the system.txt file to your email. This will still be a great help in diagnosing your problem.
uname -a >system.txt
lspci >>system.txt
lspci -vv >>system.txt
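As a rough illustration, a manual scan with ClamAV and Rootkit Hunter looks something like this from a terminal; the exact package names and options can vary a little between distros:
$ sudo freshclam                      # update ClamAV's signature database
$ sudo clamscan -r --infected /home   # scan home directories, listing only infected files
$ sudo rkhunter --update              # refresh Rootkit Hunter's data files
$ sudo rkhunter --check --skip-keypress
The same commands can be dropped into a Cron job once you are happy with the output.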
Frequently asked questions…
Webmin
We often recommend using the shell to run commands on a Linux system. While many distros provide graphical configuration tools for some tasks, they often work differently, while the underlying shell commands are consistent across all distros. By showing you the shell commands to use to perform a task, the information we give applies to all distros and can even be used on a remote computer using SSH.
However, we understand that not everyone is particularly comfortable with using the shell, even though it is a valuable skill to learn, but there is a more universal graphical alternative. It’s called Webmin (www.webmin.com) and is almost certainly in your distro’s software repos. As the name implies, this is a web-based administration tool. It runs on your computer, there is no cloud involved, and has its own built-in web server. That means you don’t have to worry about installing
something like Apache, or concern yourself with how it may interact with any existing web server you run. Once installed and started (it is usually started as a system service) you can point your browser at http://localhost:10000 and start exploring the many options available. You may need to use https://localhost:10000, depending on how Webmin is configured by your distro. Webmin will ask for a user and password, which normally needs to be your root user to be able to change system settings. If you do not have a root password, for example on an Ubuntu system, you can set one by opening a terminal and running $ sudo passwd .
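For instance, on a systemd-based distro the first run usually amounts to something like this. This is only a sketch and assumes your package installs a service named webmin, which may differ:
$ sudo systemctl enable --now webmin   # start Webmin now and on every boot
$ sudo passwd                          # set a root password if you don't already have one
Then point your browser at https://localhost:10000 and log in as root.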
On the disc Distros, apps, games, books, miscellany and more…
The best of the internet, crammed into a phantom-zone like 4GB DVD.
Distros
Here in the UK, we have yet another piece of surveillance legislation going through Parliament, the Investigatory Powers Bill. This will make mass surveillance legal and even require ISPs to keep records of their customers' online activities for the past year. The only good thing about this is that at least it's now out in the open, after the revelations of recent years regarding government mass surveillance, and is being discussed in an open forum. Not everyone lives in a society where basic liberties are acknowledged or even discussed, so we should take care to preserve those rights.
It is quite timely that we have the latest release of Tails on this month's DVD. This uses open source's ability to take pieces from various projects and combine them in a different way to achieve a specific goal. Tails is a Debian system at its heart, but the core technology used is Tor, which was originally developed by the US Navy but then open sourced, so now anyone can have secure and anonymous communications. Of course, anonymity can be used to hide illicit activities, but it can also be used to preserve basic rights to privacy.
Mature distro
Fedora 23
64-bit
Fedora has been around for a long time in one form or another. It was first released in 2003, under the name Fedora Core, although the Core part was later dropped (hardly anyone used it anyway). But it goes back further than that, as Red Hat Linux (as opposed to Red Hat Enterprise Linux, the commercial product). With such a long pedigree, you would expect a mature and polished distribution (distro), and that is what you get.
If you haven't tried Fedora before, don't get the idea that this is a staid, commercial type of distro. Fedora is used by Red Hat as a proving ground for new technologies, so it's generally very bleeding edge – eg Fedora was the first distro to use Systemd, and the next release may well be the first major distro to default to Wayland instead of X for the graphical desktop. It was also the first major distro to use Gnome 3, although Fedora supports many different desktop choices and its massive installation DVD covers them all.
This is a live version of the distro that includes only Fedora's preferred desktop, Gnome 3. You can still install it from this DVD to get a Gnome Fedora system – there is an install icon in the Activities bar. Once you have Fedora running from your hard drive, you can install further desktops in the Software Manager if Gnome isn't to your liking.
Red Hat (and its derivatives such as CentOS) is one of the key distros in the corporate sector, and while Fedora is not the same, it is far more forward looking. It shares much with its commercial cousin, so this distro can not only be fun to use but may help you if you are looking for a career in Linux.
The boot process can appear to hang with some older graphics cards; we experienced it with an old Nvidia card. It looks like nothing is happening, but the system is actually trying various options and will complete booting after a delay. The boot splash screen hides this activity, making it look like Fedora has hung or crashed, but if you press the Esc key you can see that things are still happening. Once it gets past this delay, Fedora will boot as normal, and this only affects the live system, not an installed distro.
Fedora comes with the Gnome 3 desktop by default, but you can install an alternative in the Software Manager.
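If you prefer the terminal, the same job can be done with dnf. This is only a sketch and assumes the Xfce group ID shown hasn't changed in your release; the first command lists what's actually available:
$ dnf group list -v                           # list desktop environments and their group IDs
$ sudo dnf install @xfce-desktop-environment  # install the Xfce desktop, for example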
Important
NOTICE! Defective discs
In the unlikely event of your Linux Format coverdisc being in any way defective, please visit our support site at www.linuxformat.com/dvdsupport for further assistance. If you would prefer to talk to a member of our reader support team, email us at [email protected] or telephone +44 (0) 1225 687826.
New to Linux? Start here
What is Linux? How do I install it? Is there an equivalent of MS Office? What’s this command line all about? Are you reading this on a tablet? How do I install software?
Open Index.html on the disc to find out.
(Probably) the most popular distro
64-bit
Ubuntu 15.10
Despite occasionally making significant user interface changes that upset some users, Ubuntu has remained one of the most popular Linux distros of recent years. It is difficult to measure distro usage with any great accuracy because of the very nature of free software, but there's no doubt that Ubuntu is still highly regarded, and often used as the base for tutorials in Linux Format. So we make no apologies for including 15.10 on the DVD again, this time in its unexpurgated 64-bit form as, love it or otherwise, this is the go-to distro for so many.
Anonymity distro
32-bit & 64-bit
Tails 1.7
Privacy is important, although until recently we tended to take it for granted. There are many legitimate reasons you may want to keep your online activities private, after all we all use SSL when sending financial information. Tor is a technology designed to anonymise all your online activity by routing your traffic through multiple computers to hide your IP address. It was initially designed by the US Navy and DARPA to secure intelligence communications.
Tails is a live distro built around this. As it is a live distro, Tails not only leaves no trace of your activities online, but also saves nothing to the computer running it. All cookies and temporary data are stored in memory and wiped when the distro is shut down. This means it's also useful for doing things like online banking on someone else's computer. It means you don't have to worry about any malware being installed on the computer, or leaving any potentially sensitive information behind.
And more!
System tools
Essentials
Checkinstall Install tarballs with your package manager.
Coreutils The basic utilities that should exist on every operating system.
HardInfo A system benchmarking tool.
Kernel Source code for the latest stable kernel release, should you need it.
Memtest86+ Check for faulty memory.
Plop A simple manager for booting OSes, from CD, DVD and USB.
RawWrite Create boot floppy disks under MS-DOS in Windows.
Smart Boot Manager An OS-agnostic manager with an easy-to-use interface.
WvDial Connect with a dial-up modem.
Reading matter
Bookshelf
Advanced Bash-Scripting Guide Go further with shell scripting.
Bash Guide for Beginners Get to grips with Bash scripting.
Bourne Shell Scripting Guide Get started with shell scripting.
The Cathedral and the Bazaar Eric S Raymond's classic text explaining the advantages of open development.
The Debian Administrator's Handbook An essential guide for sysadmins.
Introduction to Linux A handy guide full of pointers for new Linux users.
Linux Dictionary The A-Z of everything to do with Linux.
Linux Kernel in a Nutshell An introduction to the kernel written by master hacker Greg Kroah-Hartman.
The Linux System Administrator's Guide Take control of your system.
Tools Summary A complete overview of GNU tools.
Download your DVD from www.linuxformat.com
Get into Linux today!
Future Publishing, Quay House, The Ambury, Bath, BA1 1UA
Tel 01225 442244 Email [email protected]
Audited circulation: 19,000, January – December 2014. A member of the Audit Bureau of Circulations.
EDITORIAL
Editor Neil Mohr [email protected] Technical editor Jonni Bidwell [email protected] Operations editor Chris Thornett [email protected] Art editor Efrain Hernandez-Mendoza [email protected] Editorial contributors Neil Bothwick, Jolyon Brown, Matthew Hanson, Alastair Jennings, Nick Peers, Les Pounder, Lily Prasuethsut, Mayank Sharma, Shashank Sharma, Matt Swider, Alexander Tolstoy, Mihalis Tsoukalos Illustrations Shane Collinge, Magic Torch
ADVERTISING
Advertising manager Michael Pyatt [email protected] Advertising director Richard Hemmings [email protected] Commercial sales director Clare Dove [email protected]
MARKETING
Marketing manager Richard Stephens [email protected]
LXF 207 will be on sale Tuesday 19 Jan 2016
Escape from Windows 10!
Looking to flee the walled garden of Microsoft or want to help trapped friends? We can help…
PRODUCTION AND DISTRIBUTION
Production controller Marie Quilter Production manager Mark Constance Distributed by Seymour Distribution Ltd, 2 East Poultry Avenue, London EC1A 9PT Tel 020 7429 4000 Overseas distribution by Seymour International
LICENSING
Senior Licensing & Syndication Manager Matt Ellis [email protected] Tel + 44 (0)1225 442244
CIRCULATION
Trade marketing manager Juliette Winyard Tel 07551 150 984
SUBSCRIPTIONS & BACK ISSUES
UK reader order line & enquiries 0844 848 2852 Overseas reader order line & enquiries +44 (0)1604 251045 Online enquiries www.myfavouritemagazines.co.uk Email [email protected]
THE MANAGEMENT
Managing director, Magazines Joe McEvoy Group editor-in-chief Paul Newman Group art director Steve Gotobed Editor-in-chief, Computing Brands Graham Barlow LINUX is a trademark of Linus Torvalds, GNU/Linux is abbreviated to Linux throughout for brevity. All other trademarks are the property of their respective owners. Where applicable code printed in this magazine is licensed under the GNU GPL v2 or later. See www.gnu.org/copyleft/gpl.html.
Open source in art Discover how open source software is driving a new generation of artists, designers and creatives.
Copyright © 2015 Future Publishing Ltd. No part of this publication may be reproduced without written permission from our publisher. We assume all letters sent – by email, fax or post – are for publication unless otherwise stated, and reserve the right to edit contributions. All contributions to Linux Format are submitted and accepted on the basis of non-exclusive worldwide licence to publish or license others to do so unless otherwise agreed in advance in writing. Linux Format recognises all copyrights in this issue. Where possible, we have acknowledged the copyright holder. Contact us if we haven’t credited your copyright and we will always correct any oversight. We cannot be held responsible for mistakes or misprints. All DVD demos and reader submissions are supplied to us on the assumption they can be incorporated into a future covermounted DVD, unless stated to the contrary.
Encode this!
Want to encode some high-quality HD video but without the hassle? We look into FLOSS options.
Backups made easy
We test the tools that make backing up easy peasy, so you don't need to sweat it again.
Disclaimer
All tips in this magazine are used at your own risk. We accept no liability for any loss of data or damage to your computer, peripherals or software through the use of any tips or advice.
Printed in the UK by William Gibbons on behalf of Future.
Future is an award-winning international media group and leading digital business. We reach more than 49 million international consumers a month and create world-class content and advertising solutions for passionate consumers online, on tablet & smartphone and in print. Future plc is a public company quoted on the London Stock Exchange (symbol: FUTR). www.futureplc.com
Chief executive officer Zillah Byng-Thorne
Chairman Peter Allen
Chief financial officer Penny Ladkin-Brand
Tel +44 (0)207 042 4000 (London) Tel +44 (0)1225 442 244 (Bath)
We are committed to only using magazine paper which is derived from well managed, certified forestry and chlorine free manufacture. Future Publishing and its paper suppliers have been independently certified in accordance with the rules of the FSC (Forest Stewardship Council).
Contents of future issues subject to change – we might be trapped in a tunnel under castle Microsoft.