Announcement

56 articles, 2016-07-18 18:00

1 In the wake of UK Brexit vote, ARM Holdings is to be bought by Softbank for $32 billion (3.15/4) The technology industry in the UK was rocked by the historic Brexit vote in the referendum about membership of the EU just a few weeks ago. Concerns were voiced that tech companies would scramble to leave the UK, and with Japan's Softbank Group due to buy UK... 2016-07-18 08:53 1KB feeds.betanews.com
2 US Army Will Miss Windows 10 Upgrade Deadline (1.02/4) Migration to complete in second quarter of 2017. 2016-07-18 11:21 1KB news.softpedia.com
3 Samsung "stands behind" Galaxy S7 active IP68 rating, despite failing Consumer Reports tests (1.02/4) Earlier this month, Consumer Reports said two Galaxy S7 active handsets had failed its water immersion test, despite the device being marketed as water-resistant - and Samsung has since responded. 2016-07-18 11:10 1KB feedproxy.google.com
4 Samsung Galaxy S7 edge Olympic Games Edition Available for Purchase (1.02/4) The smartphone was listed on multiple retail websites. 2016-07-18 08:56 2KB news.softpedia.com

5 Advanced Concepts of Java Garbage Collection (0.02/4) Explore some of the areas of memory management, along with the APIs related to garbage collection. 2016-07-18 00:00 8KB www.developer.com
6 Exploring the Java String Tokenizer (0.01/4) Gain a comprehensive understanding of the background concepts of tokenization and its implementation in Java. 2016-07-18 00:00 5KB www.developer.com
7 Understanding Mapping Apps on the Android Platform (0.01/4) Learn how to get started building mobile applications on the Android platform using Google Maps. 2016-07-18 00:00 5KB www.developer.com
8 Ransomware: How Should IT Respond? How should IT respond to increased ransomware attacks? John Pironti, President of IP Architects, spoke with Senior Editor Sara Peters at this year's Interop about the issues. 2016-07-18 14:46 1KB www.informationweek.com
9 Japan companies seek hipness through teens posting to Vine What's helping turn Japanese youngsters into stars on Vine, the Twitter-owned social network devoted to looping, six-second video clips, is the stodginess of this nation's business world. 2016-07-18 14:49 6KB phys.org
10 Raspberry Pi Compute Module To Be Upgraded To 3 Programming book reviews, programming tutorials, programming news, C#, Ruby, Python, C, C++, PHP, Visual Basic, computer book reviews, computer history, programming history, joomla, theory, spreadsheets and more. 2016-07-18 14:47 3KB www.i-programmer.info
11 Microsoft says Windows 10 is a hit but many users disagree Microsoft says Windows 10 is a hit with its customers. But many Windows users beg to differ. 2016-07-18 14:46 6KB phys.org
12 Experts split on what popularity of Pokemon Go means for future of gaming and entertainment Normally, it's a short 10-minute walk from the office to the Arts Building on the University of Alberta's north campus. But on this day, there are several not-so-real-world distractions that drag things to a slow-Poke crawl. 2016-07-18 14:46 6KB phys.org
13 Core Tor Contributor Leaves Project, Plans to Shut Down Crucial Server "Ethics" mentioned in his decision announcement. 2016-07-18 11:40 2KB news.softpedia.com
14 Website of Remote Admin App Compromised Over and Over Again to Spread Malware It may be a good idea to stay away from this software. 2016-07-18 11:10 2KB news.softpedia.com
15 Xiaomi’s Working on a Windows 10 Laptop to Make the MacBook Air Irrelevant New model seems to be inspired by the MacBook Air. 2016-07-18 10:52 1KB www.softpedia.com
16 Top 10 most read: V3 Technology Awards, Linus Torvalds' epic rant and Cat S60 review Top stories of the past week on V3: Mobile Phones, Operating Systems, Security, Cloud Computing, Amazon Web Services, Linux, Digital Leaders. 2016-07-18 10:40 2KB www.v3.co.uk
17 Try FocusWriter - the perfect tool for creative writing Block out distractions and give your imagination space to bloom. 2016-07-18 10:30 3KB www.techradar.com
18 Opera falls into Chinese hands Key components of Opera Software are to be taken over by a Chinese business consortium. A planned $1.24 billion takeover of the entire operation fell through after failing to gain regulatory approval, but a new deal has been struck in its place. 2016-07-18 10:26 1KB feeds.betanews.com
19 Older Brits like to shop on tablets Tablets might have a rough time ahead of them, but if you ask UK’s consumers, aged 55 and above, they’re quite nice to use for shopping. 2016-07-18 10:24 2KB feeds.betanews.com
20 Microsoft Band 2 now on sale for $144.99 Last week, Amazon offered the Band 2 exclusively to those with a Prime subscription for $144.99. Now, Microsoft's wearable device is available at that price for all buyers - but only in one size. 2016-07-18 10:24 1KB feedproxy.google.com
21 Opera's Chinese Takeover Fails as Purchase Price Gets Slashed in Half Opera's price tag falls from $1.2 billion to $600 million. 2016-07-18 10:15 2KB news.softpedia.com
22 The best free antivirus software 2016 Download the best free antivirus tools to secure your privacy and your files. 2016-07-18 10:13 7KB www.techradar.com
23 Samsung Galaxy Note 7 with Always-On Display Option Pops Up in Picture Images of the smartphone have been leaking for weeks. 2016-07-18 10:05 2KB news.softpedia.com

24 80,000 Users (and Counting) Want Pokemon Go on Windows Phone The petition reaches new record number of signatures. 2016-07-18 09:47 1KB www.softpedia.com
25 Stampedo ransomware available for just $39 A new variant of ransomware has been found for sale on the dark web for an incredibly low price that allows its victims 96 hours to pay a fee. 2016-07-18 09:31 2KB feeds.betanews.com
26 Microsoft Releases Patch to Block Linux from Running on the Original Surface RT Vulnerability in bootloader allowed other OSes on Surface. 2016-07-18 09:18 2KB news.softpedia.com
27 Android 6.0.1 Marshmallow now available for Samsung's Galaxy Tab S2 on AT&T AT&T is now upgrading Samsung's Galaxy Tab S2 to Android 6.0.1, and in addition to the usual Marshmallow improvements, the carrier's update also brings support for NumberSync to the device. 2016-07-18 09:14 1KB feedproxy.google.com
28 Acer Windows 10 Mobile flagship goes on sale in US for $649, including dock, mouse, keyboard Ten and a half months after it was first announced, Acer's Liquid Jade Primo has finally gone on sale in the US, priced at $649, which includes a desktop dock for Continuum, plus a mouse and keyboard. 2016-07-18 08:58 2KB feedproxy.google.com
29 Microsoft Brings iOS Exclusive App to Android, Windows Phone Version Uncertain Beta version of Flow now available on Android. 2016-07-18 08:45 1KB news.softpedia.com
30 Moto Z Play Leaks in Image Showing USB Type-C Port The images also show a 3.5mm headphone jack. 2016-07-18 08:25 2KB news.softpedia.com
31 EU Data Protection Law May End The Unknowable Algorithm Slated to take effect as law across the EU in 2018, the General Data Protection Regulation could require companies to explain their algorithms to avoid unlawful discrimination. 2016-07-18 08:06 6KB www.informationweek.com
32 Watch Ben Heck tear down the ultra-rare Nintendo PlayStation prototype A little over a year ago, the retro gaming community was treated to the discovery of an ultra-rare Nintendo PlayStation. The unreleased machine, the result of a partnership between Nintendo and Sony that unfortunately went sour, was proven to be… 2016-07-18 07:30 1KB www.techspot.com
33 Microsoft Develops Intelligent Camera App for Apple’s iPhone Microsoft Pix is not yet available for download, though. 2016-07-18 07:10 2KB news.softpedia.com
34 Nokia to Release Two Android Smartphones with Snapdragon 820 The smartphones will reportedly come with 2K display. 2016-07-18 07:05 2KB news.softpedia.com
35 Mandelbrot Fractal is a pure JavaScript fractal explorer Mandelbrot Fractal is an open-source fractal generator with a difference: its spectacular images are produced using pure JavaScript, no external libraries or other oddball dependencies involved. 2016-07-18 06:53 1KB feeds.betanews.com
36 Windows 10 Redstone 1 to Launch in Waves, Not Everyone Will Get It on August 2 Windows Insider head explains the release schedule. 2016-07-18 06:03 2KB news.softpedia.com
37 Success of Pokemon GO adds impetus for change at Nintendo The phenomenal success of "Pokemon Go" and the surge in Nintendo's market value has been seized upon by one of its most vocal investors. 2016-07-18 03:02 5KB www.cnbc.com
38 Health startup Lifesum raises $10M round led by Nokia Growth Partners What do you get if you combine the broad trends of smartphones, wearables, Internet of Things, an individual desire for control and healthcare costs for.. 2016-07-18 00:00 2KB feedproxy.google.com

39 UK government warned to act fast on Brexit risks A UK parliamentary committee looking at issues pertaining to the digital economy has warned of multiple risks in the wake of last month's referendum vote to.. 2016-07-18 00:00 7KB feedproxy.google.com
40 MarketInvoice, the UK invoice finance platform, raises another £7.2M MarketInvoice, which plays in the peer-to-peer lending space by enabling U.K. businesses to raise money from institutional investors and high net worth.. 2016-07-18 00:00 2KB feedproxy.google.com
41 Opera renegotiates its $1.2B sale down to $600M for its browsers, privacy apps, Chinese JV Some more developments over at Opera, the browser company based out of Norway. The company announced that an offer to acquire the company for $1.2.. 2016-07-18 00:00 5KB feedproxy.google.com
42 Can we protect against computers being fingerprinted? Imagine that every time a person goes out in public, they leave behind a track for all to see, so that their behaviour can be easily analysed, revealing their identity. 2016-07-18 00:00 2KB phys.org
43 What Is Maven? Study the aspects of Maven, Apache's new software that provides comprehensive software project management. 2016-07-18 00:00 20KB www.developer.com
44 Streamline Your Understanding of the Java I/O Stream Learn to streamline your understanding of I/O streams APIs in Java. 2016-07-18 00:00 9KB www.developer.com
45 Using the Executor Framework to Deal with Java Threads Examine the Java core framework and its uses with a little background idea to begin with. 2016-07-18 00:00 6KB www.developer.com

46 What Is Full Stack Development? Being a full stack engineer represents being able to acquire a vast knowledge of many pieces of your Web application. Learn those skills here. 2016-07-18 00:00 6KB www.developer.com
47 Using Angular Typeahead Learn to populate a typeahead dynamically from a Web service; the data displayed will be in tabular format with headers. This is implemented using Angular JS. 2016-07-18 00:00 3KB www.developer.com
48 Working with Java Optional Classes The Optional class in Java is not essential, yet it provides invaluable help in some instances. Master its use here. 2016-07-18 00:00 5KB www.developer.com
49 Introducing ASP.NET Core Dependency Injection Become proficient with the DI features of ASP.NET Core 1.0 so that you can quickly use them in your applications. 2016-07-18 00:00 5KB www.developer.com
50 A Deeper Look: Java Thread Example Become more familiar with some concepts that would aid in better understanding Java threads, eventually leading to better programming. 2016-07-18 00:00 9KB www.developer.com
51 The Key to AI Automation: Human-Machine Interfaces Machine intelligence is enabling banks to automate more tasks than ever before. The key to making these technologies successful is effective human interfaces. 2016-07-18 00:00 9KB www.developer.com
52 The Value of Doing APIs Right: A Look at the SiriKit API Demoware is getting its own API, and this might open up new vistas for its use. 2016-07-18 00:00 6KB www.developer.com
53 What Is Jenkins? Leap into Jenkins, an open source project written in Java and dedicated to sustaining continuous integration practices. 2016-07-18 00:00 16KB www.developer.com
54 Top 10 Reasons to Get Started with React.JS Study some reasons why you should choose the React.JS framework for your next project. 2016-07-18 00:00 7KB www.developer.com

55 Tips for MongoDB WiredTiger Performance Tuning Learn about some of the parameters you can tune to optimize the performance of WiredTiger on your server. 2016-07-18 00:00 4KB www.developer.com
56 Serverless Architectures on AWS: Monitoring Costs Monitoring your costs is always a big concern. Become better equipped to do so. 2016-07-18 00:00 7KB www.developer.com

Articles


1 In the wake of UK Brexit vote, ARM Holdings is to be bought by Softbank for $32 billion (3.15/4) The technology industry in the UK was rocked by the historic Brexit vote in the referendum about membership of the EU just a few weeks ago. Concerns were voiced that tech companies would scramble to leave the UK, and with Japan's Softbank Group due to buy UK chip-maker ARM Holdings for $32 billion (£24 billion), this could just be the start of things. ARM chips are found in mobile devices produced by Apple and Samsung, and more recently the company has branched out into the Internet of Things. But while some will be unhappy with the change of ownership, Softbank says that ARM will not only remain headquartered in Cambridge, UK, but that it will look to at least double the company's UK workforce. News of the acquisition saw ARM's share price leap by 45 percent, sending the company's value up by $10 billion (£7.56 billion). The boards of both ARM and Softbank are recommending that the all-cash deal go ahead, but regulatory approval will still be required before this can happen. Stuart Chambers, Chairman of ARM, said: Softbank CEO Masayoshi Son commented on the acquisition, saying: While ARM may be remaining in the UK, the change of ownership will remain a concern for many. The BBC suggests that "Britain's best hope of building a global technology giant now appears to have gone". 2016-07-18 08:53 By Mark

2 US Army Will Miss Windows 10 Upgrade Deadline (1.02/4) Army Chief Information Officer Lt. Gen. Robert Ferrell has revealed that the Windows 10 upgrade process is now scheduled to complete in the second quarter of 2017 instead of January 2017, so the Army needs approximately six more months to move to Windows 10. “In Europe now we are focused on the early adopters. I think we are testing about 13 instruments now and in [the United States] about the same. I think by… the second quarter of next year we’ll have Europe completed with the transition and then we will focus on [the United States],” Ferrell is quoted as saying by Federal News Radio. In the case of legacy systems where more work needs to be done, the Army cannot provide a specific deadline for the upgrade and says that the final costs of the transition to Windows 10 are very likely to increase. No specifics or estimates have been provided, though. “As we begin our controlled rollout of Windows 10, we will collect and analyze these cost drivers, which will inform our budget program objective memorandum cycle and ensure we have the resources needed to complete the transition,” an Army spokesman has said. Microsoft engineers worked together with Army experts to ensure a smooth upgrade process for a number of PCs and to find ways to migrate PCs with software that might not be supported, but the transition is still hitting roadblocks that could push the deadline back half a year. 2016-07-18 11:21 Bogdan Popa

3 Samsung "stands behind" Galaxy S7 active IP68 rating, despite failing Consumer Reports tests (1.02/4) Samsung has issued an official statement, responding to concerns surrounding the water-resistance capabilities of one of its newest devices. Last month, Samsung launched the Galaxy S7 active , a 'ruggedized' version of its Galaxy S7 flagship with IP68 certification, making it both dust- proof and water-resistant to a depth of five feet for up to 30 minutes. But earlier this month, independent product review journal Consumer Reports said that two Galaxy S7 active handsets had failed its water immersion test. Both units showed clear signs of water ingress, along with malfunctioning displays. In its statement, Samsung said : After Consumer Reports' findings were published earlier this month, Samsung said that "there may be an off-chance that a defective device is not as watertight as it should be". Curiously, while the 'rugged' Galaxy S7 active failed the Consumer Reports test, its non-rugged siblings - the Galaxy S7 and Galaxy S7 edge - both passed. You can get a quick overview of Consumer Reports' assessment of the Galaxy S7 active in the video below: Source: Samsung via Android Central 2016-07-18 11:10 Andy Weir

4 Samsung Galaxy S7 edge Olympic Games Edition Available for Purchase (1.02/4) Samsung announced that it would deliver 12,500 Galaxy S7 edge Olympic Games Edition units together with Gear IconX earbuds to all athletes who are participating in the Rio 2016 Olympic Games. Specifically, 2,016 Galaxy S7 edge Olympic Games Edition smartphones will be made available in Brazil, China, Germany, South Korea, and the United States starting today, July 18. Prices for the device seem to differ depending on the market, with Best Buy in the US selling it for $849.99 while the same handset costs about $943 in South Korea. Customers will also get a Gear VR headset upon purchasing the device, and 100 randomly selected users will receive Gear IconX earbuds for free. Samsung also provided us with a video showing the unboxing of the Galaxy S7 edge Olympic Games Edition and took to its website to offer some additional information on the smartphone. Apparently, the unit comes with a custom-made colorway pattern designed in blue, red, green, yellow, and black, the five colors of the Olympic Rings. In addition, the design features a blue theme on the rear camera, sensors, and flash, with green on the power button and yellow on the home key. The user interface also uses these colors, reminding customers of the Olympic Games. Moreover, the smartphone features a selection of Rio 2016-themed wallpapers that users can check out. Samsung implemented the 3D Flag application on the Olympic Games edition smartphone, which displays a 3D flag of the user's choice on the screen. 2016-07-18 08:56 Alexandra Vaidos

5 Advanced Concepts of Java Garbage Collection (0.02/4) Garbage collection (GC) is a background process run by the JVM (Java Virtual Machine) to housekeep memory automatically while a Java application runs in the foreground. The presence of a garbage collector relieves the programmer of the responsibility of writing an explicit memory deallocation routine in every application they develop. This boosts productivity while coding: a programmer can focus exclusively on solving the problem at hand and let the JVM handle memory management issues. Garbage collection is a complicated procedure in its own right. Once you dive deeper into the arena, you realize that there is more to it than meets the eye. This article is an attempt to explore some of those areas, along with the APIs related to garbage collection. Reclaiming unused memory is a complex procedure, and doing it explicitly through code can be error-prone, leading to unexpected behavior of the program. Memory management in Java takes an alternative approach. Like most modern object-oriented languages, it uses an automatic memory manager called a garbage collector. The Java memory manager segments memory into three categories: young generation, old generation, and permanent generation. Before going further, imagine GC reclaiming memory in waves. Fresh, new objects are allocated in the young generation. The old generation contains those objects that have stayed in the young generation for some time; also, some large objects are directly allocated in the old generation segment. Permanent generation objects are those that GC finds easy to manage, such as objects that describe classes and methods. The young generation contains an area called Eden and two smaller Survivor Spaces. Eden holds newly created objects; those that survive at least one garbage collection wave are given an opportunity to die before being moved to the Survivor Spaces and, ultimately, to old generation status. Typically, when the young generation fills up, a minor collection (an algorithm) wave pops up and either does the cleaning or moves objects to the next status. When the old generation fills up, a major collection (another algorithm) wave does the job, which means practically all generations are collected/cleaned. This is the rudimentary idea behind GC design. Refer to "Memory Management in the Java HotSpot Virtual Machine" (Sun Microsystems) for a detailed explanation. There are four garbage collection (GC) algorithms available in the Java HotSpot VM. Refer to Java Garbage Collection by N. Salnikov-Tarnovski and G. Smirnov for a detailed analysis of each algorithm. The JVM runs the garbage collector as soon as the system is low on memory. Can we run the garbage collector from our code? Yes, but there is no guarantee that the garbage collector will listen. The gc() method in java.lang.Runtime can be used to "suggest" that the JVM run the garbage collector, which it may totally ignore, so there is no point in being sentimental here. The Java API Documentation describes the gc() method as follows: "Runs the garbage collector. Calling this method suggests that the Java virtual machine expends effort toward recycling unused objects in order to make the memory they currently occupy available for quick reuse. When control returns from the method call, the virtual machine has made its best effort to recycle all discarded objects. The name gc stands for 'garbage collector.' The virtual machine performs this recycling process automatically as needed, in a separate thread, even if the gc method is not invoked explicitly. The method System.gc() is the conventional and convenient means of invoking this method." The finalize() method states a set of actions to be performed on an object just before garbage collection reclaims its memory. finalize() is a member of the Object class, which is the parent class of all classes in Java. It is declared protected, which means any class can override this method. Objects, when they go out of scope, are marked for finalization and placed on a queue before the actual garbage collector reclaims the memory. If, however, you want to finalize all objects waiting to be finalized by the Java runtime, you may use the runFinalization() method, declared as a member of the Runtime class as well as the System class. It may be invoked either as Runtime.getRuntime().runFinalization() or as System.runFinalization(). An interesting aspect is that there is a quantum of time between an object being marked for finalization and the object actually being effaced. The garbage collector comes as the next phase after finalization and checks again whether the object to be effaced is still out of scope. Though not a good idea, we can try to resurrect the object between the invocation of the finalize() method and actual garbage collection. This is a very bad idea and a bad programming practice as well; no harm in experimenting, but never do it in real life. Calling the finalize() method explicitly is discouraged because this method is automatically called by the GC to perform a cleanup operation on an object just before GC reclaims the object's memory. GC does not guarantee any specific time of execution; it may never execute before the program terminates, so it is highly improbable to know if or when the method finalize() will be called. As was said earlier, Java does not give exclusive control over the time when the garbage collector will execute. Every method related to garbage collection is just a suggestion to the JVM that it may reclaim the memory now. Similarly, the try...finally clause simply states the release of resources used in the try block. The finally block is guaranteed to execute. This ensures that the garbage collector can reclaim the memory used by the object of this class. Java offers different types of references that can be used to designate a reference object class with a new meaning. A program may use one of these types of reference objects to refer to some other object in such a manner that the object gets collected by GC depending upon its reference type. If you are not aware of any reference types in Java, that means you have been using only the strong reference type, the ordinary reference. Apart from the strong reference, there are three other types of reference, namely soft, weak, and phantom, denoted by the classes SoftReference, WeakReference, and PhantomReference defined in the java.lang.ref package. The reference object types implemented by these classes are all subclasses of the abstract base class called Reference. According to the Java API Documentation: "Soft references are for implementing memory-sensitive caches." "Weak references are for implementing canonicalizing mappings that do not prevent their keys (or values) from being reclaimed." "Phantom references are for scheduling pre-mortem cleanup actions in a more flexible way than is possible with the Java finalization mechanism." Each of these types corresponds to a different level of reachability. A complete explanation of reference types requires an exclusive focus on it; let's set it aside for now.
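To make the discussion concrete, here is a minimal, illustrative sketch (not taken from the original article; the class and variable names are invented) that suggests a GC run with System.gc(), overrides finalize(), and watches a weak reference get cleared. The output will vary from run to run and from JVM to JVM, because gc() and runFinalization() are only suggestions:

    import java.lang.ref.WeakReference;

    public class GcDemo {
        @Override
        protected void finalize() throws Throwable {
            // Invoked by the GC at some unspecified time before reclamation.
            System.out.println("finalize() invoked");
        }

        public static void main(String[] args) {
            Runtime runtime = Runtime.getRuntime();

            GcDemo demo = new GcDemo();
            WeakReference<GcDemo> weakRef = new WeakReference<>(demo);
            demo = null; // the object is now only weakly reachable

            long before = runtime.freeMemory();
            System.gc();              // a suggestion, not a command
            System.runFinalization(); // likewise only a suggestion

            System.out.println("Free memory before gc(): " + before);
            System.out.println("Free memory after gc():  " + runtime.freeMemory());
            // A weak reference is typically (though not provably) cleared by now.
            System.out.println("Weak reference cleared?  " + (weakRef.get() == null));
        }
    }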
" " Phantom references are for scheduling pre-mortem cleanup actions in a more flexible way than is possible with the Java finalization mechanism. " Each of these types corresponds to a different level of reachability. According to Java API Documentation , A complete explanation of reference types require a exclusive focus on it. Let's set it aside for now. Garbage collection is a complex subsystem under the aegis of JVM where the most crucial managerial process is taken care of. It may not be perfect, but nonetheless gives a sense of freedom to the programmer from a critical responsibility. This article is an attempt to surface the intricate story behind GC on how it works. Above all, this will give you another reason to appreciate and thank all who shaped GC in its current form. GC is undoubtedly a reason to make Java a prime language, with many improvements yet to be implemented to still prod on.… 2016-07-18 00:00 Manoj Debnath

6 Exploring the Java String Tokenizer (0.01/4) String tokenization is a process where a string is broken into several parts. Each part is called a token. For example, if "I am going" is a string, the discrete parts, such as "I", "am", and "going", are the tokens. Java provides ready-made classes and methods to implement the tokenization process. They are quite handy for conveying a specific semantics or contextual meaning to several individual parts of a string. This is particularly useful for text processing, where you need to break a string into several parts and use each part as an element for individual processing. In a nutshell, tokenization is useful in any situation where you need to break a string down into individual parts, applying the part-for-the-whole and whole-for-the-part concept. This article provides the information needed for a comprehensive understanding of the background concepts and their implementation in Java. A token, or an individual element of a string, can be filtered during extraction, meaning we can define the semantics of a token when extracting discrete elements from a string. For example, in a string such as "Hi! I am good. How about you?", sometimes we may need to treat each word as a token or, at other times, a set of words collectively as a token. So, a token is basically a flexible term and is not necessarily an atomic part, although it may be atomic according to the discretion of the context. For example, the keywords of a language are atomic according to the lexical analysis of the language, but they may typically be non-atomic and convey a different meaning under a different context. Tokenizing the example string with the default StringTokenizer constructor splits it at whitespace; changing the code to supply "." as a delimiter yields sentence-sized tokens instead. Observe that the StringTokenizer class contains three constructors (refer to the Java API Documentation). When we create a StringTokenizer object with the second constructor, we can define a delimiter to split the tokens as per our need. If we do not provide any, space is taken as the default delimiter. In the preceding example, we used "." (the dot/stop character) as a delimiter. Note that the delimiting character itself is not taken into account as a token. It is simply used as a token separator without itself being a part of the token; this is why "." is not printed among the tokens. So, in a situation where we want to control whether or not the delimiting character is also counted as a token, we may use the third constructor. This constructor takes a boolean argument to enable/disable the delimiting character as a part of the tokens. We can also provide a delimiting character later, while extracting tokens, with the nextToken(String delim) method. Whitespace characters such as space, newline, carriage return, and line feed may also be used as delimiters. Accessing individual tokens is no big deal. StringTokenizer contains six methods to cover the tokens. They are quite simple; refer to the Java API Documentation for details about each of them. The split method defined in the String class is more versatile in the tokenization process. Here, we can use a Regular Expression to break up strings into basic tokens. According to the Java API Documentation: "StringTokenizer is a legacy class that is retained for compatibility reasons although its use is discouraged in new code. It is recommended that anyone seeking this functionality use the split method of String or the java.util.regex package instead." The preceding StringTokenizer example can be rewritten with the String split method, and with a Regular Expression the code can even extract only the numeric values from a string. As we can see, the strength of the split method of the String class lies in its ability to use Regular Expressions. We can use wildcards and quantifiers to match a particular pattern, and this pattern then becomes the delimitation basis of token extraction. Java has a dedicated package, called java.util.regex, to deal with Regular Expressions. This package consists of two classes, Matcher and Pattern, an interface, MatchResult, and an exception called PatternSyntaxException. Regular Expressions are quite an extensive topic in themselves; let's not deal with them here and instead focus only on tokenization through the Matcher and Pattern classes. These classes provide supreme flexibility in the process of tokenization, with a complexity that could become a topic in itself. A Pattern object represents a compiled regular expression that the Matcher object uses to perform match operations. For tokenization, the Matcher and Pattern classes may be used as in the sketch below.
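The following is a minimal, self-contained sketch (not the article's original listing; the sample string and class name are invented) contrasting the three approaches discussed above: StringTokenizer, String.split(), and Pattern/Matcher.

    import java.util.StringTokenizer;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class TokenizeDemo {
        public static void main(String[] args) {
            String text = "Hi! I am good. How about you?";

            // 1. Legacy StringTokenizer with "." as the delimiter;
            //    the delimiter itself is not returned as a token.
            StringTokenizer st = new StringTokenizer(text, ".");
            while (st.hasMoreTokens()) {
                System.out.println("StringTokenizer: " + st.nextToken().trim());
            }

            // 2. String.split(), which takes a regular expression;
            //    here the string is split at runs of whitespace.
            for (String token : text.split("\\s+")) {
                System.out.println("split: " + token);
            }

            // 3. Pattern/Matcher, extracting word tokens explicitly.
            Matcher matcher = Pattern.compile("\\w+").matcher(text);
            while (matcher.find()) {
                System.out.println("regex: " + matcher.group());
            }
        }
    }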
" The preceding example with StringTokenizer can be rewritten with the string split method as follows: Output: To extract the numeric value from the string below, we may change the code as follows with regular expression. As we can see, the strength of the split method of the String class is in its ability to use Regular Expression. We can use wild cards and quantifiers to match a particular pattern in a Regular Expression. This pattern then can be used as the delimitation basis of token extraction. Java has a dedicated package, called java.util.regex , to deal with Regular Expression. This package consists of two classes, Matcher and Pattern , an interface MatchResult , and an exception called PatternSyntaxException. Regular Expression is quite an extensive topic in itself. Let's not deal with is here; instead, let's focus only on the tokenization preliminary through the Matcher and Pattern classes. These classes provide supreme flexibility in the process of tokenization with a complexity to become a topic in itself. A pattern object represents a compiled regular expression that is used by the Matcher object to perform three functions, such as: For tokenization, the Matcher and Pattern classes may be used as follows: Output: String tokenization is a way to break a string into several parts. StringTokenizer is a utility class to extract tokens from a string. However, the Java API documentation discourages its use, and instead recommends the split method of the String class to serve similar needs. The split method uses Regular Expression. There are a classes in the java.util.regex package specifically dedicated to Regular Expression, called Pattern and Matcher. The split method, though, uses Regular Expression; it is convenient to use the Pattern and Matcher classes when dealing with complex expressions. Otherwise, in a very simple circumstance, the split method is quite convenient. 2016-07-18 00:00 Manoj Debnath

7 Understanding Mapping Apps on the Android Platform (0.01/4) Modern applications are increasingly becoming location sensitive. If you open almost any mobile Web site or application, you will invariably notice that you are asked to grant permission for access to your location. Many of these applications rely on mapping technologies to provide you with this location-sensitive information. In this article, we will learn what is needed to get started building mapping applications on the Android platform with Google Maps. To build Android applications, you will need the latest version of the Java SDK and Android Studio, a Visual Studio equivalent, to build Android applications. You can download the latest version of Android Studio from http://developer.android.com/sdk/index.html . When you visit that page, you should have a link to download Android Studio. Installation of Android Studio is straightforward. However, installation will not begin if you do not have the JDK installed on your machine. You can get the latest version of the JDK from http://www.oracle.com/technetwork/java/javase/downloads/index.html . At the time of writing the article, there were specific hardware requirements for Android Studio on Windows machines. Once you have installed Android Studio, open the SDK Manager from the Tools > Android > SDK Manager menu. Figure 1: Installing the Google Play Services library Once the SDK Manager opens, select "Google Play Services" and "Google USB driver," and click "Install xx packages." You will need to agree to any prompts about license agreements before you can finish the installation. Figure 2: Accepting the license agreements Once you have done the above, the rest of the steps are conducted inside our application. Open Android Studio and create an app (the name of my app is "com.example.vipulp.myapplication"). Next, we will add Google Play Services as an Android Library project and reference it. Your Android Studio, after you have created the default app, will look as follows: Figure 3: The default Android Studio app Now, double-click AndroidManifest.xml in the "Project" window under the "app" > manifests folder. Figure 4: Selecting the AndroidManifest.xml file We need to add a declaration within the application element. Next, we need to get the Google Maps API key. The Maps API key is based on your application's digital certificate, known as its SHA-1 fingerprint. To get the key for the "debug" application, we will need to use the Debug certificate in our development environment. To get the key for the "release" application, make sure you use the Release certificate. We can get the SHA-1 information for our application by executing a keytool command against the keystore, which will output the key. To get the Google Maps API key, we need to register the app with the Google Maps Android API v2 service. To register, you need the name of your application as well as the SHA-1 key listed above. Go to https://console.developers.google.com and create a new project. Figure 5: Creating a new project Once the project is created, you will be navigated to the project dashboard. On the left bar, click "APIs & auth," followed by "APIs." You will see all the APIs for that project. Figure 6: Viewing the APIs We now find the Google Maps Android v2 API and enable it. Figure 7: Enabling the Google Maps Android v2 API Click "Off" to turn it on. This will enable Google Maps Android API access. Next, click "Credentials" under "APIs & auth." Figure 8: Clicking "Credentials" Click "Create new Key" under "Public API access." Figure 9: Creating a new key Select Android Key. You will be prompted with a screen that needs your SHA-1 key as well as the application name in a specific format (SHA1Key;ApplicationName). Enter them and click Create. Figure 10: Entering the application name Your Android Maps API key will be generated. Figure 11: Generating your Android Maps API Key We will now copy this API key into the AndroidManifest.xml file, creating another meta-data node under the application element. Next, we go to Project Settings via File > Project Structure and add the Google Play services library as a reference. Figure 12: Adding the Google Play services library as a reference Click the + sign and select Library Dependency. Figure 13: Clicking the + sign Figure 14: Selecting Library Dependency Click OK and you should now be able to build the project. Next, we need to declare a few permissions. Open AndroidManifest.xml and provide the permissions that are at least needed by any mapping application (you might need more). Because Google Maps depends on OpenGL, we also need to declare that requirement as well. Your application manifest should look roughly as follows.
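The sketch below is a representative manifest rather than the article's exact listing: the API key value is a placeholder, and the permission set is the commonly documented minimum for Google Maps Android API v2 apps of that era.

    <!-- A representative sketch; replace YOUR_API_KEY with the key
         generated in the Google Developers Console. -->
    <manifest xmlns:android="http://schemas.android.com/apk/res/android"
        package="com.example.vipulp.myapplication">

        <!-- Permissions commonly required by a Google Maps v2 app -->
        <uses-permission android:name="android.permission.INTERNET" />
        <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
        <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />
        <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />

        <!-- Google Maps v2 renders using OpenGL ES 2.0 -->
        <uses-feature
            android:glEsVersion="0x00020000"
            android:required="true" />

        <application>
            <!-- Version of the referenced Google Play services library -->
            <meta-data
                android:name="com.google.android.gms.version"
                android:value="@integer/google_play_services_version" />
            <!-- The Maps API key generated in the Google Developers Console -->
            <meta-data
                android:name="com.google.android.maps.v2.API_KEY"
                android:value="YOUR_API_KEY" />
        </application>
    </manifest>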
" Figure 9: Creating a new key Select Android Key. You will be prompted to a screen that needs your SHA1 key as well as the application name in a specific format (SHA1Key;ApplicationName). Enter them and click Create. Figure 10: Entering the application name Your Android Maps API Key will be generated. Figure 11: Generating your Android Maps API Key We will now copy this APIKEY in the ApplicationManifest.xml file. Create another node under and enter your key as shown next: For my case, my ApplicationManifest.xml looked as under (changed highlighted): Next, we go to Project Settings via File -> Project Structure and add the Google Play Library Services as a reference. Figure 12: Adding the Google Play Library Services as a reference Click the + sign and select Library Dependency. Figure 13: Clicking the + sign Figure 14: Selecting Library Dependency Click OK and you should now be able to build the project. Next, we need to declare a few permissions. Open ApplicationManifest.xml and provide the following permissions which are at least needed to any mapping application (you might need more). Because Google Maps depend on OpenGL, we also need to declare that: Your application manifest should look as follows: Our application is now ready to be a mapping application. In the next article, we will learn how to add a map to an Android Application and use it. In this article, we learned how to get started with the development tools and setup needed to build mobile Android mapping applications. I hope you have found this information useful. Vipul Patel is a technology geek based in Seattle. He can be reached at [email protected]. You can visit his LinkedIn profile at https://www.linkedin.com/pub/vipul-patel/6/675/508 . 2016-07-18 00:00 Vipul Patel

8 Ransomware: How Should IT Respond? In the past year, the number of ransomware attacks has surged. Even though these types of security threats have been around for years, new technology has allowed cyber-criminals to repackage these attacks against enterprises as well as small businesses. How should IT respond? John Pironti, President of IP Architects, believes that in addition to raising technology and security concerns, ransomware also brings up moral and ethical questions. Pironti spoke with Senior Editor Sara Peters at this year's Interop about the issues raised by this recent rash of attacks. 2016-07-18 14:46 www.informationweek

9 Japan companies seek hipness through teens posting to Vine Japan Inc. companies, both big and small, are generally so clueless about appealing to youngsters—especially young women and especially on social networks—that they need all the help they can get from teenage Viners for marketing. Reika Oozeki, 19, became a sensation overnight on Vine when she was just 17, offering snarky sketches of life. "I was studying for tests, and I was bored," says Oozeki, who started out using her cellphone to shoot videos of herself in pajamas or at school. "I was so surprised it caught on." Now she has more than 730,000 followers and her videos have looped over viewers' screens nearly 850 million times. Most of her clips are close-ups of her face. She might coo pretending to be with a date, and then suddenly switch to a growl when she is supposedly with girlfriends. She has appeared on TV shows, got cast in a movie and is signed with a production company. She is also training to become a swimming coach for children, who adore her because she is famous on Vine. When companies approach her to make Vine clips, Oozeki is often given free rein. She is sometimes not even required to say the company name. In the clip she made for Intel Japan, she merely snarls, "Interru haitteru," the Japanese for "Intel Inside." Vine is unique as a social network in that people post video exclusively, much of it taken on cellphones. Each clip is a six-second loop. There are 200 million people who watch Vine videos every month, and, although Vine does not break down viewers by country, Japan is one of Vine's largest markets outside the U. S. Kota Furukoshi, chief executive of Tokyo-based Web marketing consulting startup Ninoya, says Japanese companies, which still tend to be dominated by old men, are generally resigned to their lack of online savvy. Instead of trying to acquire and build such skills in-house, they tend to turn to outside help for online marketing, he said. Popular Vine creators in Japan represent a break from old-style Japanese who tend to be shy, inhibited and inept at self-expression, Furukoshi said. "They're very creative. They're stylish. They're sharp," Furukoshi said. "They know how to build their personalities online." Vine translated well in Japan, unlike other companies that had a culture clash. LinkedIn, for instance, failed, and was even frowned upon in this culture where job-hopping is not as common as in the U. S. and is seen as betrayal by employers, said Furukoshi. Vine is at a disadvantage compared to YouTube or Facebook as a moneymaker because most Vine users are too young to be big spenders. But some companies—like the Japan unit of Intel and Japanese candy maker Morinaga & Co.—are using Vine, seeing it as a worthwhile investment for brand recognition. There are signs that the Vine craze may have peaked in Japan. Nobi Hayashi, who consults and writes about technology in Japan, believes Vine's trademark brevity is proving to be its weakness. "It becomes just one gag after the other," Hayashi said. Last month, Vine added a "watch more" option, allowing an attachment of longer video of up to 140 seconds, and up to 10 minutes for some partners. Vine is also starting to support opportunities to make money through the clips. But Japanese, like Americans, are often turning to rivals like Snapchat. And other social networks, such as Instagram and Facebook, also offer video.
Oozeki says she is expanding to other platforms, especially YouTube, for self-expression. That reflects the sentiments of many of the Japanese Vine stars, who see their influence on Vine as a springboard for other online or film careers. Hokuto Ikura quit his job at a major company and moved to Tokyo from Fukuoka to become a planner at Tokyo-based Grove Inc., which recruits and supports Viners and other online creators. Vine changed Ikura's life in a personal way, too: Oozeki is now his girlfriend. He says they complement each other well because Oozeki is inspired and creative, while he is more organized and analytical. Hayatto Noguchi, with about 23,000 followers and 16 million loops, or views, on Vine, is hoping to leverage Vine as a springboard for his livelihood. Noguchi uses animation (youtu.be/H8mnxbiNaTQ) as well as the stop-motion technique of manipulating real-life objects, a few frames at a time, to create the illusion of movement. In one, colorful origami-like buildings pop up on a desk. In another, a likeness of Noguchi appears on top of a cake to wish a happy birthday. He has already been tapped by Intel Japan, Tic Tac mints and other companies to create Vine videos, although the pay is relatively small at a few hundred dollars (tens of thousands of yen) per post. It's a tricky process to fine-tune the looping and craft an eye-catching concept. An overly polished look can backfire because most people are tired of the slickness of TV ads and Hollywood movies, he said. Noguchi recently quit his job at a cellphone company and is devoting himself full time to Vine. He hasn't told his parents about Vine, dreaming of the day they'll find out on their own. But he has no illusions about how fleeting the Vine craze might be, and shrugs that time might be running out for him to become a self-sustaining videographer. "I think this year is it," he said. More information: Reika Oozeki on Vine: vine.co/u/996673190115913728 Hokuto Ikura on Vine: vine.co/hokuto Hayatto Noguchi on Vine: vine.co/hayatto 2016-07-18 14:49 phys.org

10 Raspberry Pi Compute Module To Be Upgraded To 3 The Raspberry Pi Compute Module is aimed at the "professional" user and is a traditional embedded system, but it is looking a little low on specification. Now we have the news that it is going to be upgraded to the Pi 3's more capable design. The first thing to say is that this news is based on a report of an IDG News Service interview with Eben Upton, founder of the Raspberry Pi Foundation and CEO of Raspberry Pi (Trading) Ltd, that has been picked up by a number of other news services. All of the reports seem to originate in a news item in PC World, and there is no trace of the actual interview. However, it all seems very reasonable. The Raspberry Pi Compute Module is a SODIMM form-factor package that needs a development kit to work with, and some custom hardware has to be created if you are actually going to incorporate it into a product. The idea is that the DIMM format allows the device to slot into a mainboard that provides additional interfaces to the outside world - think dishwasher or smart refrigerator. If you are developing an embedded industrial system, the Compute Module makes sense as a choice, if only because you don't have to admit to using the same Raspberry Pi hardware that every hobby developer is using! The real question is how successful the module has been. It was introduced two years ago and hasn't been updated, even though the main Pi models have. The most recent update, to the Pi 3, makes the original hardware on the current Compute Module look particularly underpowered. The Pi 3 has lots of extras that won't make it into the Compute Module. This is about to change, as Eben Upton remarked that a new Compute Module should be available in a few months and that it will be based on the hardware used in the Pi 3. The bad news is that it almost certainly won't have WiFi, which is good for power consumption but bad for versatility. As well as working with the usual Linux-based operating systems, it will also support the very reduced Windows 10 that works on the Pi 3. It will be interesting to see if the new Compute Module is worth considering in the light of the Pi Zero. This is about the same size as the Compute Module and, while not up to the speed of the Pi 3, is as fast as a Pi 2, albeit with only one core. For many embedded applications it is very attractive, if only because of its very low price for the performance delivered. The Pi Zero also doesn't have WiFi, but it doesn't need a development kit and special hardware to get a prototype off the ground. There is no news as to the expected cost of the new Compute Module and no information on whether or not it will work with the existing development kit. 2016-07-18 14:47 Written by

11 Microsoft says Windows 10 is a hit but many users disagree Microsoft announced that less than a year after it launched, Windows 10 is already running on 350 million devices. Additionally, customer satisfaction with it is higher than for any previous version of the operating system, the software giant said. That may be, but it's not hard to find customers who aren't happy with the new software. A Sausalito, Calif., woman made headlines recently after she sued Microsoft and won a $10,000 judgment because an unauthorized Windows 10 upgrade basically made her PC unusable. Meanwhile, a simple Google search will turn up numerous other disgruntled users. I've heard from plenty of upset Windows customers recently after writing several columns about Windows 10. Some lost crucial data or features when their computers made the jump to the new Windows version. Some paid hundreds of dollars to computer support companies to restore their computers to earlier versions of Windows. Many were simply disgusted at the tactics Microsoft has used to push users to upgrade to Windows 10 or frustrated that their computers were updated without their consent. Chris Wood, 61, was among those upset with Microsoft after his 91-year-old father was "tricked" by the company into upgrading and couldn't figure out how to use his computer afterward. "It is very disturbing Microsoft would treat paying customers with such disrespect and disregard for what they want," Wood, a software salesman who lives in Pleasanton, Calif., said in an email. "Not giving customers a way to say no is in fact deceitful and an inappropriate business practice." Microsoft has said it would soon change the upgrade process, making it easier for Windows 7 and 8 users to opt out of Windows 10 permanently. "Our most important priority for Windows 10 is for everyone to love Windows," Terry Myerson, executive vice president, Windows and Devices Group, said recently in a statement. "We'll continue to be led by your feedback and always, earning and maintaining your trust is our commitment and priority." Microsoft released Windows 10 last summer. I gave it a positive review because it addresses many of the complaints that I and others had with Windows 8. The operating system is available as a free upgrade for Windows 7 and Windows 8 users until July 29. Microsoft has been heavily promoting it as the most secure version of Windows ever. It has also aggressively attempted to get it installed on consumers' machines, making it difficult for them to decline the upgrade and even going so far as to upgrade their computers without their explicit permission. But many users have been reluctant to make the jump to Windows 10 or have seen things in it they didn't like. For example, many Windows programs haven't been updated to support Windows 10. And drivers for many peripherals like printers aren't yet available for the new operating system - and may never be. That's what Hartmut Wiesenthal found when he decided to accept the upgrade offer. Although Microsoft told the 54-year-old engineer his computer was 100 percent compatible with Windows 10, he quickly found that some of his equipment wasn't. Wiesenthal couldn't print documents on his several-year-old printer after he upgraded. And his Kensington docking station no longer worked with his computer. In both cases, the Fremont, Calif., resident found there were no drivers - the software used by the computer to communicate with particular peripherals - for the devices and none were in development.
What's more, when he returned his PC to Windows 7, he found that the old drivers had been wiped out in the upgrade and he needed to find and download them again. "I searched (for) them online and was lucky to find them," Wiesenthal said in an email. "I share all of this as a warning." Windows 10 also ditched some features that some users relied on. Most notably, it doesn't include Media Center. So Windows no longer has the built-in ability to play DVDs or tune in or record television programs. Microsoft is offering a separate app to play DVDs for free to some users who upgrade to Windows 10, but others have to pay $15 for it. Being forced to pay for a feature previously included with his computer - along with all the pop-up messages pushing him to upgrade - irked Terry Grant, 74, who often watches DVDs on his computer. "It just irritated me," said the Cupertino, Calif., resident who retired from NASA after a long career there. "It seems to me, as per usual, I can expect Windows will not be putting their customers first." Some people have faced even bigger problems after upgrading to Windows 10. After her company upgraded her computer to Windows 10, one reader said that some 600 files in her My Documents folder had been deleted. Unfortunately, her computer hadn't been set to back up the files and the tech support person at her company couldn't restore the files even after downloading a file recovery program. Microsoft does offer a relatively easy way to restore PCs that have been upgraded to Windows 10 to the previous version they were running. But users have to take advantage of that method within 30 days of being upgraded or they lose that easy option. When his computer was upgraded to Windows 10, Tony Daniels found that it no longer worked with his printer and had other, more minor glitches. The 65-year-old retired sales manager ended up paying $150 to a computer repair company to rid his machine of Windows 10. "Microsoft was doing every underhanded trick possible to get me to accept (Windows) 10," the Antioch, Calif., resident said in an email. "I don't remember where I screwed up and let these guys do this to me." Daniels added: "If they get a class-action suit against them, count me in." 2016-07-18 14:46 phys.org

12 Experts split on what popularity of Pokemon Go means for future of gaming and entertainment First, Jigglypuff—a plump pink ball with catlike ears—guards the elevator and proves frustratingly good at breaking free from Poké balls. Three escapes? Then, a fluttering Zubat is waiting at the bottom of the elevator. Gotcha! A few steps later, it's time to check in at a PokéStop and replenish supplies. There's another Zubat at the west end of Quad, followed by a Pidgey on the east side and then another PokéStop. A wild Shellder appears out of nowhere near Triffo Hall. Inside the Arts Building, a quick glance at the clock shows there are still 10 minutes before an interview with U of A gaming guru Sean Gouglas—time enough to bag a wild Spearow. The impulse to "catch 'em all" in Pokémon Go's blending of the virtual gaming space and real world is omnipresent, and mirrors the cultural phenomenon it's become. "They played it either with cards or TV shows or the games themselves. It's this incredibly robust game system that adults grew up with and kids are now growing up with," says Gouglas, an associate professor of humanities computing and lead instructor of the U of A's Understanding Video Games massive open online course. Couple that nostalgic appeal with the ubiquity of smartphones, the game's "amazing GPS database" by developer Niantic and favourable midsummer timing, and you have the makings of a hit, Gouglas explains. It already has more users in the United States than Twitter, and though it isn't officially available here in Canada, it's been pretty easy to spot players through their seemingly aimless wandering, phones in hand, across campus. Pokémon Go has managed to deliver where other augmented reality games have failed. Augmented reality uses technology—sensors and cameras—to blur the lines between the virtual and real world. It's been around for years, including in gaming (Gouglas' students even designed their own game as far back as seven years ago), but such titles are still a niche market. "AR games have been interesting but they've never married technology and amazing intellectual property quite like Pokémon Go," adds Gouglas, senior director of interdisciplinary studies in the Faculty of Arts. The blurring of virtual and reality: Despite its popularity, Pokémon Go is a poor example of augmented reality because players are almost solely immersed in a virtual gaming space on their phones, with little interaction with the real world, contends Pierre Boulanger, Cisco Research Chair in Healthcare Solutions at the U of A. Augmented reality is being used by companies like Boeing and BMW for mechanical training. In the case of BMW, the user wears a head-mounted display (tricked-out goggles) that registers sensors on a vehicle engine and displays virtual images that show which parts to remove and when. "There has to be an alignment between the virtual world and the real world, and that's what a sensor does. You have to find those reference points," explains Boulanger, a professor of computing science in the Faculty of Science. Boulanger's own lab is developing "projected augmented reality," a technique that combines CT scans and MRIs and projects that information onto surgical patients, to assist with medical training. "It's like X-ray vision, but with no goggles or glasses or cellphone," Boulanger says. But some of the most impressive augmented reality innovations have military applications—the F-35 jet fighter being the most advanced.
Each jet is outfitted with six high-resolution infrared cameras that send information to a $400,000 head-mounted display on the pilot's helmet. Instead of relying solely on eyesight, pilots can augment their perspective using the camera's lenses to see at night—or even make the floor of the cockpit disappear to see the airspace below. (Think Wonder Woman's invisible jet.) "You have 20/20 vision as if you're flying in the air by yourself." Advances in 3-D sensor technology and the availability of high-speed video and graphic cards mean augmented reality is going to become more common, says Boulanger, who sees it as "the future" of entertainment and new media. Imagine, he says, a cinematic experience that plays out like Police Story, where you're in a remote area outfitted with a head-mounted display or cellphone and start to uncover a crime and the resulting narrative. "They are truly the new media," says Boulanger, who points to big players such as Google, which is investing heavily in innovations such as its augmented reality tablet Tango. The success of Pokémon Go is good news for the gaming industry and rights holder Nintendo, which was desperate for a hit amid flagging stock values, says Gouglas, who hopes it signals a renewed vibrancy and willingness to take risks at the longtime gaming brand. But overall, he isn't convinced Pokémon Go will lead to an influx of augmented reality games or virtual entertainment, given the time investment and space required. "I'm universally skeptical that one game is going to change people's habits forever. There have been lots of promises about augmented reality for years. We'll see," he says. "It's a hard sell when some people just want to sit and be inactive." It's likely just a matter of time before Pokémon Go's popularity fizzles or gives way to the next new gaming or digital phenomenon. But until then, it's game on for millions of fans. Just remember to look up once in a while—and plan your time accordingly! 2016-07-18 14:46 phys.org

13 Core Tor Contributor Leaves Project, Plans to Shut Down Crucial Server Lucky Green is one of the people who were part of the Tor Project before the network was even known as Tor. He also ran one of the first five nodes ever introduced in Tor and, over the years, has gained enough of the Tor Project's trust to manage special nodes inside the network. These nodes, called Bridge Authorities, have their IPs hard-coded in the Tor applications and allow the Tor network to work around various bans and blocking attempts at the ISP level. They also hold precious information regarding other Tor nodes, since all Tor bridges added to the network report back to a Bridge Authority. Green is giving the Tor Project until August 31, 2016, to issue a new update of the Tor apps and remove the IP of his Bridge Authority, known internally as the Tonga node. Besides the highly important Tonga node, Green also plans to shut down the five other Tor servers he is running. Green didn't give too many clues about the reasons he decided to do this. "Given recent events, it is no longer appropriate for me to materially contribute to the Tor Project either financially, as I have so generously throughout the years, nor by providing computing resources," Green wrote. "Nonetheless, I feel that I have no reasonable choice left within the bounds of ethics, but to announce the discontinuation of all Tor-related services hosted on every system under my control." It is unclear what these recent events may be referring to, but the Tor Project has been rocked by a huge sexual misconduct scandal in the past two months. As a result, last week, the Tor Project Board of Directors decided to replace itself. Green may not trust the new Board of Directors, or he may be siding with Jacob Appelbaum, an important figure who was forced to leave the Tor Project following some serious accusations. 2016-07-18 11:40 Catalin Cimpanu
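Since Bridge Authority addresses ship inside the Tor client itself, retiring Tonga requires a new release of the apps rather than a server-side change, which is exactly why Green set an update deadline. The Python sketch below illustrates the idea only; the entry format and addresses are invented (Tor's real list lives as hard-coded string literals in its C source).

    # Conceptual sketch: authority addresses baked into the client at build
    # time. Entries and addresses are invented for illustration; Tor's real
    # list is hard-coded in its C source.
    DEFAULT_AUTHORITIES = [
        {"nickname": "ExampleDirAuth", "role": "directory", "address": "192.0.2.1:9131"},
        {"nickname": "Tonga",          "role": "bridge",    "address": "192.0.2.7:443"},
    ]

    def usable_authorities(retired=("Tonga",)):
        # Clients only stop contacting a retired authority once a new build
        # ships without it -- hence the August 31 deadline.
        return [a for a in DEFAULT_AUTHORITIES if a["nickname"] not in retired]

    for auth in usable_authorities():
        print(auth["nickname"], auth["address"])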

14 Website of Remote Admin App Compromised Over and Over Again to Spread Malware The first signs that something was wrong came to light last November, when ESET discovered that, in October and November 2015, crooks had compromised the website and infected the Ammyy Admin installer with five different malware variants, not all at once, but at different intervals. They first distributed the Lurk malware dropper, then the CoreBot infostealer, the Buhtrap banking trojan, the Ranbyus banking trojan, and the NetWire RAT. ESET informed the website's owners, who responded by saying they had cleaned the website and removed the malicious versions of the Ammyy Admin installer. According to a new report released by Kaspersky today, the incident repeated in February 2016, when the company's experts detected the same website spreading malware-laced installers once again. This time around, the crooks used the Lurk trojan, a malware dropper that infects victims and then downloads other types of malware, at the crooks' behest. Kaspersky informed the Ammyy Admin creators of the issue, and they said they fixed the compromised website. Kaspersky explained this happened three times in that month alone. The scenario repeated in April, when the website was once again compromised. The crooks used the Lurk trojan again, but this time the trojan activated only if the infected computer was part of a corporate network. Again, Kaspersky notified the website owners of the issue, and they moved to clean the website for the fourth time this year. Nevertheless, the same site kept getting compromised in the following months. On June 1, the very day Russian authorities announced they had managed to arrest the hackers behind the Lurk trojan, the Ammyy Admin website switched from distributing the Lurk trojan to the Fareit infostealer. Again, Kaspersky notified the Ammyy Admin creators of the issue. At this point, seeing that the Ammyy Admin webmasters cannot secure their website even if their lives depended on it, it may be a good idea to find an alternative to their software and stay away from their website. 2016-07-18 11:10 Catalin Cimpanu
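Until the vendor demonstrably secures its site, a standard precaution against trojanized downloads is to verify an installer's checksum against a digest published through a separate, trusted channel before running it. A minimal Python sketch of that general practice follows; the expected digest below is a placeholder, not a real Ammyy Admin value.

    # Verify a downloaded installer against a digest published out-of-band.
    # EXPECTED_SHA256 is a placeholder -- obtain the real value from a
    # trusted channel, never from the (possibly compromised) download site.
    import hashlib
    import sys

    EXPECTED_SHA256 = "0" * 64  # placeholder digest

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        if sha256_of(sys.argv[1]) != EXPECTED_SHA256:
            sys.exit("Digest mismatch -- do not run this installer.")
        print("Digest matches the published value.")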

15 Xiaomi’s Working on a Windows 10 Laptop to Make the MacBook Air Irrelevant Although it currently doesn’t have a name, the upcoming Xiaomi laptop will run Windows 10, and just by looking at it, it becomes obvious that it’s designed to compete against the MacBook Air. It will be offered in two versions, with either an 11- or a 13-inch screen, according to WCCFTech, and will be powered by an Intel Core i7-6500U processor with a base speed of 2.50 GHz and a turbo speed of 3.10 GHz. Furthermore, the laptop will feature 8 GB of RAM, but it won’t come with a touch screen, so it won’t take advantage of Windows 10’s touch capabilities. This is one of the main drawbacks of the device, but since it’s positioned as a rival to the MacBook Air, such a feature isn’t entirely critical. A USB Type-C port will also be offered for quick charging and data transfers, but it’s not yet known whether Xiaomi plans to go the same way as Apple and offer just one port on the whole device, or whether other ports will be available too. Right now, there’s still no release date for this new Xiaomi laptop, but it’s expected to debut later this year, with full information and pricing details to be announced soon. 2016-07-18 10:52 Bogdan Popa

16 Top 10 most read: V3 Technology Awards, Linus Torvalds' epic rant and Cat S60 review Summer may be here but the tech sector continues to churn out the news, and several stories hit the headlines on V3 last week. Top story of the week was Linus Torvalds' epic rant over the use of comments in the Linux developer community, in which he called for some semblance of common sense to ensure that people can keep track of changes being made. Meanwhile, V3 was in the headlines with the announcement that the V3 Technology Awards 2016 are now open for entries, with a closing date of 19 August. Entry is free and easy, so make sure you get your product or projects in for consideration before the deadline. Finally, we put the Cat S60 rugged handset through its paces to see how the specialist device stacks up and whether those working in tough environments may want to consider the device over a more precious iPhone or Samsung handset. The full top 10:
Linus Torvalds goes on epic rant over Linux devs' comment syntax - Not happy with current situation
V3 Technology Awards 2016 open for entries - Awards take place on 25 November
Sainsbury's DevOps 'revolution' cuts upfront capital costs and lead times for new services - Benefits to be had, but they take planning
Cat S60 review - Just how tough is this rugged handset?
HPE looking to sell software assets, according to reports - More streamlining even after split
NHS England announces top-level shake-up, including CIO appointment - All change in top roles
Rolls-Royce to use Microsoft IoT and analytics tools for jet engine predictive maintenance - Another example of IoT in use at a major organisation
Leaked chats show Microsoft staff celebrating departure of Kevin Turner - Staff not sorry to see executive leave
Amazon previews Python microframework for 'serverless' applications on AWS - Aims to meet evolving needs of customers
Satana 'ransomware from hell' prevents Windows PCs starting up - Another threat to your machine
2016-07-18 10:40 www.v3

17 Try FocusWriter - the perfect tool for creative writing Distraction-free writing software is increasingly popular - searches have increased dramatically over the last year - and it's easy to see why. Social media notifications, email notifications and rolling news updates all compete for our attention. Unread message counts, often highlighted in red, are difficult to ignore and it's all too easy to spend an hour or more attending to other people's requests before getting down to actual work. FocusWriter is specifically designed to help creative writers put their ideas to paper - more a notebook than a text editor. It's not intended for the second and third drafts, when sections need to be moved or cut, paragraphs refined and chapters cross-referenced, which explains the omission of some features we've come to expect from word processing software. The program's interface is very clean, with only a central blank page eagerly awaiting your words. You can change the theme (including text colours and wallpaper) to something more inspiring than the default cheesy woodgrain, but make sure it's something you really like - there's no way to change the size of the paper in the centre, or make it occupy the whole screen, so you'll be looking at that background a lot. Menus and settings are accessed by moving your cursor to the edges of the screen. The top menu features a pared-back version of the usual text-editing options (alignment, special characters, bold/italic/underline, find and replace), as well as a few special tools to help keep your wordsmithing on track. You can set alarms to trigger after a certain period has elapsed, or at a particular time, which is very useful because your PC's clock is one of the many distractions FocusWriter blocks out. You can also set yourself targets - either by time, or by word count. By meeting your targets, you can start a 'streak' - a simple but effective way to ensure you push yourself by gamifying the writing process. FocusWriter's killer tool is Focused Text, which fades out everything except the section you're currently typing - a whole paragraph, a block of three lines, or just the current line. As mentioned earlier, this is no use for editors, but for simply putting ideas to paper, it's brilliant. The optional typewriter sound effects are a less essential feature, but can be satisfying - particularly if your keyboard lacks the pleasant clack of mechanical switches. Alternatively, you might prefer to dig out your comfiest headphones, load Rainy Mood in the background and forget the physical action of typing entirely. The final ace up FocusWriter's sleeve is the portable option. Simply save the program to a USB stick and create a folder called 'Data' in the same folder as the EXE file (see the short sketch below). FocusWriter will store your documents and settings here. To start storing data on your PC again, simply rename the Data folder. It's a thoughtful addition, ideal if you use a desktop for prolonged writing sessions at home, and a laptop for cafe-based typing. FocusWriter can save your work in TXT, RTF or ODT formats, so it's ready to open in another program for editing.
If you have an idea blossoming in your mind, FocusWriter gives it the room it needs to grow. 2016-07-18 10:30 By Cat
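For reference, the portable setup described above amounts to creating a single folder next to the executable. A tiny Python sketch of those steps (the drive path is an assumption; any location on the USB stick works):

    # Set up FocusWriter's portable mode: a 'Data' folder beside the EXE
    # makes the program keep documents and settings there.
    from pathlib import Path

    app_dir = Path("E:/FocusWriter")   # assumed location of the copied program
    (app_dir / "Data").mkdir(parents=True, exist_ok=True)
    print("Portable data folder created at", app_dir / "Data")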

18 Opera falls into Chinese hands Key components of Opera Software are to be taken over by a Chinese business consortium. A planned $1.24 billion takeover of the entire operation fell through after failing to gain regulatory approval, but a new deal has been struck in its place. Instead, the consortium -- comprising Qihoo 360 Technology Co, Beijing Kunlun Tech Co and others -- will take over just a portion of Opera Software's consumer business for $600 million. With the desktop and mobile versions of the Opera web browser now falling into Chinese hands, there will no doubt be concerns about potential privacy issues based on China's history. It is not entirely clear why the original deal fell through. Approval from both US and Chinese regulators was needed and Opera has only said that this was not won, without revealing which side failed to agree to it. As reported by Reuters, in addition to Opera's browsing arm, the consortium will also acquire the company's privacy and performance apps, its stake in nHorizon, and its technology licensing business. In a statement, Opera said the deal, which has already been approved by Opera's board of directors, leaves the company's advertising and marketing business, its TV operations, and game-related apps untouched. 2016-07-18 10:26 By Mark

19 Older Brits like to shop on tablets Tablets might have a rough time ahead of them, but ask UK consumers aged 55 and above and you'll hear they're quite nice to use for shopping. That's according to a new report by Bronto Software, which says that twice as many people in this age group (22 percent) use tablets for shopping, compared to their US (11 percent) and Australian (11 percent) peers. The UK has more tablets (60 percent) than the US (57 percent) or Australia (54 percent), and Brits use them for shopping more frequently (34 percent) than these two countries (25 percent and 19 percent, respectively). What's interesting, though, is that this is the only age group where tablets beat other devices, such as smartphones or laptops. "As the UK and US share a lot of history, culture and language, there is a misconception that consumer behavior across the Atlantic is similar to the UK", says Saima Alibhai, practice manager, Professional Services at Bronto Software. Men and women are equal when it comes to shopping via smartphone in the UK, whereas in the US, men are more likely to use these devices than women. More British women shop on their smartphones than those in the US or Australia. "However, UK retailers looking to engage consumers in the US need to be mindful that device preferences and ownership change as much between countries as between various demographics. Understanding the specifics and adjusting strategies accordingly will ensure that the shopping experience -- from browse to buy -- is tailored to the audience and drives results". The full report, entitled "How Consumers Across the Globe Use Multiple Devices to Shop and Buy", can be found at this link. 2016-07-18 10:24 By Sead

20 Microsoft Band 2 now on sale for $144.99 Last week, Microsoft's Band 2 was available for as little as $144.99 from Amazon in the United States - but that deal was exclusively offered to those with an Amazon Prime subscription. Now, that requirement has been dropped, and the Band 2 is available at that price for all buyers, with or without Prime. However, as with last week's offer, only the small model is being sold at that price; the medium and large versions are both on sale for $174.99. At $144.99, the health- and fitness-focused wearable is discounted by $105 compared with its full price of $249.99 - although it's rarely been sold at that price in recent months, due to Microsoft's near-constant stream of 'limited-time offers' on the device. The Microsoft Store has been selling the Band 2 for $174.99 since May; when that deal began, it was the fourth 30% discount on the device in just two months. It's a similar situation over in the UK, where Microsoft is offering the Band 2 with 25% off its full price, in another 'special offer' lasting over five months. The increasing availability of big discounts on the device has fuelled suspicions that the company is clearing stocks ahead of the unveiling of its successor. Source: Amazon 2016-07-18 10:24 Andy Weir

21 Opera's Chinese Takeover Fails as Purchase Price Gets Slashed in Half This new deal is worth only $600 million, half the initial price, and the Chinese group will be taking over the desktop browser division, the mobile browser division (including Operator Co-brand solutions), and the Opera Performance and Privacy Apps divisions. The group will receive control over Opera's 29.09 percent stake in the Chinese company nHorizon, in which Opera Software has invested, and it will also take control of Opera's technology licensing business outside of Opera TV. Not included in the deal are operations such as Opera Mediaworks, Opera Apps & Games (including Bemobi), and Opera TV, which will remain under Opera Software's control. According to Opera, the remaining assets generated revenue of $467 million in 2015, with a profit of $74 million before taxes. On the other side, the sold assets generated revenue of $149 million in 2015 and profits of $34 million. The two parties expect to close the transaction and make the sale final in the second half of Q3 2016, pending further regulatory approval. The original deal, which included a total sale of the company, did not receive this regulatory approval. It is unknown whether Chinese or US authorities were against the sale. Opera's board has already approved the new deal. Opera's CEO, Lars Boilesen, will serve as CEO for both companies until December 31, 2016. After that, he'll remain the CEO of the leftover Opera divisions. The "Opera" brand is included in the sale, so the leftover Opera company will have to find a new name within 18 months after the deal closes. 2016-07-18 10:15 Catalin Cimpanu

22 The best free antivirus software 2016 The internet is a dangerous place. Leave your PC unprotected and, no matter how careful you are, it's only a matter of time before a malicious program slips through your defences. All it takes is one accidental visit to a compromised page, and you're in for a world of trouble. There's a wealth of free antivirus software out there to protect you from such disasters, but not all such security packages are created equal. We've rounded up the best free downloads to keep you safe online, looking at their effectiveness, speed and ease of use. Have we missed your favourite? Let us know in the comments below.

Avira is fast, effective, and doesn't bombard you with pop-ups and alerts. Fancy benefiting from the experience of 477 million people? That's the number of times Avira Free Antivirus has been installed, and its cloud-based security pools their history of virus exposure: if a user on the other side of the planet encounters a new nasty, all of Avira's users get the ability to detect it. It's fast, friendly, doesn't give you too many prompts and alerts, can identify unwanted apps inside legitimate applications and, most importantly of all, achieved 100 per cent malware detection in tests - a score shared with BitDefender and Kaspersky, but significantly ahead of many of its rivals. It also comes with an optional browser extension that can prevent sites from tracking you online. This impressive set of functions makes Avira Free Antivirus our top-rated free security program. Read on to discover eight more of the best free antivirus tools available right now.

Bitdefender Antivirus Free Edition works quietly and unobtrusively in the background. Some antivirus programs want to know all about you and pop up excitedly to tell you about new updates or every potential threat they've stopped. Not BitDefender Antivirus Free Edition. It's the ninja of antivirus software, lurking silently until it needs to unleash its skills. It's ad-free and nag-free, maintenance-free and lightweight in operation: it doesn't slow down your PC or interrupt your working day for no good reason. If you like to fiddle then you might find other apps more to your liking, but for those of us who prefer a little peace and quiet BitDefender is hard to beat.

AVG Antivirus Free lets you perform virus scans on your PC from your phone. We've been using AVG AntiVirus Free for years, and it's a very good antivirus program - albeit a little too keen to inform us about things we don't really need to know. The dashboard is user-friendly, the scanner checks links as well as downloaded files, and there's a clever tool that enables you to scan your PC from your mobile phone if you think it's become infected. The paid-for versions attempt to lure you with promises of even more robust download protection, data encryption and firewalls to make online threats redundant, but the standard free edition of AVG offers a great deal of protection for no money.

Avast Free Anti-Virus 2016 offers some great online security tools in addition to virus-scanning, and is an impressive offering. It protects against malware and viruses, can identify weaknesses in your home Wi-Fi and includes a well-implemented password manager to keep your complex login details safe and easy to access.
As with most free products you get even more features if you're willing to pay - there's no anti-phishing protection or spam filtering in Avast Free - but it's nonetheless perfectly respectable and keeps you safe from the most common online attacks.

Ad-Aware Free Antivirus+ has evolved from a malware protection tool into a full security suite. As the name suggests, it started life as an anti-adware program, and it was a very good one. Over the years it's evolved to address more threats, so today it's a fully-fledged security suite that scans downloaded files, keeps an eye out for spyware, enables you to compare web addresses against known offenders and won't interrupt you when you're playing a game. It doesn't have the features of its paid-for siblings - if you want anti-phishing email protection, parental controls and file destruction you'll need to pay for it - but the core product covers the essentials very well.

360 Total Security 2016 combines four security tools in one handy free package. Why take one virus detection engine onto the internet when you can take four? 360 Total Security 2016 from Qihoo combines Avira, BitDefender, 360 Cloud Scan Engine and 360 QVMII AI Engine, so in theory it should probably detect threats before they've even been invented, let alone released. But can you trust it? In some circles Qihoo is seen as the Volkswagen of security software, tweaking its programs to gain artificially inflated scores in tests - and the BitDefender engine is switched off by default, so you aren't getting quite the protection you might think until you adjust some settings. It's powerful if a little slow, but lingering trust issues aren't great when you're deciding which firm can keep your stuff safe.

Panda 2016 Free Antivirus uses cloud servers to take the load off your PC. Panda Free Antivirus has headed for the clouds with this version, which does all the heavy lifting on Panda's servers to reduce the load on your PC. As we've come to expect from free programs there are lots of optional features that you only get if you pay - Wi-Fi scanning, password management, file encryption and parental controls - but the basics are covered well and Panda scores well in independent virus detection tests. It isn't quite up there with Avast, AVG or BitDefender, but it's still very good.

Windows Defender is built into Windows 8 and 10. Who better to protect your PC than the company whose operating system facilitated all the security issues in the first place? We kid, of course - Windows is more secure than it's ever been, and Microsoft has invested massive sums in system protection in the form of Microsoft Security Essentials for Windows 7 and Vista, and Windows Defender, which is built into Windows 8 and 10. The result, as you'd expect, integrates very well with Windows, but it has consistently lagged behind the likes of BitDefender in independent testing. While the 2015 version was pretty hopeless in detection tests, the 2016 one is a vast improvement - but Microsoft is still on the back foot and rival programs offer better protection. It's much, much better than nothing, but it's not the best.

Many banks offer online security tools for account-holders. To keep your account details safe online, many banks offer free licences for antivirus software you'd normally expect to pay for.
McAfee subscriptions are free to MBNA card and HSBC bank customers, saving you £40 (about US$57, AU$79), while users of Barclays online banking can get a free 12-month subscription to Kaspersky Internet Security worth £30 (about US$43, AU$59). 2016-07-18 10:13 By Gary

23 Samsung Galaxy Note 7 with Always-On Display Option Pops Up in Picture The Galaxy Note 7 has leaked in many images, with one of the most recent showing all color variants of the upcoming smartphone, courtesy of Weibo. That picture didn't reveal anything new, as we already knew from leaked press renders that the device would come in several color variants, including Blue Coral, Silver Titanium, and Black Onyx. Now, the latest image shows the Always-on display option on the Note 7, according to TechTastic. The feature seems similar to the one found on the Galaxy S7 and S7 edge, but it might come with some redesigned icons, considering that the Note 7 will have a new version of TouchWiz UI, Grace UX. Out of all the leaked images, only the press renders provided a view of the phone's back, which is expected to be covered by a glass panel. In addition, the smartphone is said to come with a USB Type-C port, an iris scanner, and some really powerful specs. Many rumors on the phone's specifications have surfaced, and some of them are even contradictory. Samsung is said to feature 6GB of RAM on the smartphone, but benchmark tests and listings on various websites have pointed in the direction of 4GB of RAM. In addition, the handset could come with a Snapdragon 821 processor, but it's a bit unclear whether the screen size will be 5.7 or 5.8 inches. Still, the phone is said to come with a redesigned S Pen stylus. Samsung will unveil the Galaxy Note 7 on August 2, about two weeks from now, and pre-orders are said to start immediately after the device's reveal. 2016-07-18 10:05 Alexandra Vaidos

24 80,000 Users (and Counting) Want Pokemon Go on Windows Phone Despite rumors that Microsoft itself might be looking into ways to bring Pokemon Go to Windows Phone, users on the platform can’t do much about it, and their only way to show how much they want the game on their devices is to sign this petition. The number of Windows Phone users signing the petition is growing at an impressive pace and is very close to reaching 80,000, even though by July 7, only 11 days ago, just 4,000 people had shown their support for a WP version of the game. Unfortunately, despite the immense support the petition is gaining and the huge number of users asking for Pokemon Go to launch on Windows Phone, Niantic still doesn’t seem to be interested in porting the game to Microsoft’s mobile operating system. 2016-07-18 09:47 Bogdan Popa

25 Stampado ransomware available for just $39 A new variant of ransomware has been found for sale on the dark web for an incredibly low price; it allows its victims 96 hours to pay the fee. This new piece of ransomware is called Stampado, and it is available for only $39, which includes a lifetime license. Once it has infected a user’s system, a fee must be paid within the allotted time in order to regain access. If a user fails to pay the fee, Stampado begins to delete random files on their computer at six-hour intervals. What sets this malware apart, and makes it more threatening, is that it actually deletes files, as opposed to just encrypting them and making them inaccessible on an infected device. Stampado is also able to spread without the need for administrator rights. Ransomware has become a big problem globally in recent years and has affected businesses and individuals alike. The cost of unlocking an infected device varies: individuals could end up spending a few hundred dollars, while businesses are not so lucky and often have to pay fees in the thousands of dollars. Heimdal Security further explains the intricacies of Stampado, saying: "Cryptoware is such a big segment of the malware economy, malware creators have to constantly release new products to keep their clients engaged and the money flowing". Ransomware attacks have doubled in the past year, according to the FBI, which estimates losses associated with this form of malware were around $24 million during that time period. 2016-07-18 09:31 By Anthony

26 Microsoft Releases Patch to Block Linux from Running on the Original Surface RT Windows RT failed to become a hit, and Microsoft was one of the few companies that actually supported it; despite the obviously low adoption, Redmond even launched a second Surface running the same operating system one year later. Windows RT devices have nevertheless remained rather outdated, even though Microsoft keeps rolling out security patches for these tablets. One of the security patches released earlier this month fixes a vulnerability that nobody ever knew about. The Register claims that MS16-094 fixes a loophole that allowed users to install other operating systems on Windows RT devices, including Linux. With Windows RT becoming an OS with no future, many have looked into ways to install a different operating system on the Surface RT, but most attempts failed because of the locked bootloader and the other security systems that Microsoft put in place. It turns out that there was actually a way to do it, only nobody knew about it. And with this recent patch, Microsoft has closed it, so there’s practically no method of installing Linux or a different OS on the Surface RT anymore. According to Microsoft’s official bulletin page, this is what the new patch does on your Windows RT device: “A security feature bypass vulnerability exists when Windows Secure Boot improperly applies an affected policy. An attacker who successfully exploited this vulnerability could disable code integrity checks, allowing test-signed executables and drivers to be loaded on a target device. In addition, an attacker could bypass the Secure Boot Integrity Validation for BitLocker and the Device Encryption security features.” As a Surface RT owner, I must admit that a different OS might actually bring the device back to life, especially because Microsoft is no longer improving the platform, but only patching security issues. Linux seems to be the only choice for the time being, and we’re pretty sure that the dev community will once again start looking into this now that it’s known that a security vulnerability allowed the installation of different operating systems. 2016-07-18 09:18 Bogdan Popa

27 Android 6.0.1 Marshmallow now available for Samsung's Galaxy Tab S2 on AT&T It's been nine months since Google originally released the Marshmallow update, but so far, it's made its way to just 13.3% of active Android devices - and the next major version of the OS is fast approaching. But Marshmallow continues to slowly roll out to more devices around the world, including many of those from Samsung in recent weeks. Now, it's available for the Galaxy Tab S2 (T817A) on AT&T. As Android Police notes, it seems the carrier actually began the rollout on July 8, but didn't get around to updating its support site with information about the update until this past weekend. Weighing in at 1074MB, the update bumps the OS version up to Android 6.0.1, and along with Google's Marshmallow improvements and Samsung's various additions - such as its TouchWiz interface - AT&T has also included NumberSync support. AT&T's Marshmallow update for the Galaxy Tab S2 follows that of Verizon, which began upgrading the device over six weeks ago. Source: AT&T via Android Police 2016-07-18 09:14 Andy Weir

28 Acer Windows 10 Mobile flagship goes on sale in US for $649, including dock, mouse, keyboard At the beginning of September 2015, Acer unveiled the Liquid Jade Primo, its new Windows 10 Mobile flagship phone. It referred to the device as 'your pocket PC', since the high-end handset supports the PC-like Continuum experience, allowing owners to use a mouse and keyboard with the device when connected to a larger display. Ten and a half months later, the Liquid Jade Primo is finally available to buy in the US, via the Microsoft Store. The unlocked device is priced at $649 - that's $100 more than Microsoft's Lumia 950, with which it shares many similar specs - but that price includes more than just the handset itself: the bundle also comes with a desktop dock, mouse and keyboard. In April, Acer also announced the Liquid Jade Primo Premium Pack, which includes the dock, mouse and keyboard, along with a monitor. It also unveiled the Liquid Extend, a notebook-style device to which the Jade Primo can be connected, offering a Continuum experience on the go. It's not yet clear if either of these options will be offered in the US. For buyers with more modest budgets, Acer also launched a new entry-level Windows 10 Mobile handset in the US back in April. The Liquid M330 is priced at just $99 unlocked. Be sure to check out our hands-on with the Liquid Jade Primo from earlier this year. Source: Microsoft Store via @Nokiapoweruser 2016-07-18 08:58 Andy Weir

29 Microsoft Brings iOS Exclusive App to Android, Windows Phone Version Uncertain Microsoft Flow, an automated workflow management solution, was released on iOS a few weeks ago, and now the company has published the very first beta version for Android users, planning to test it before releasing it publicly in the Google Play Store. For the moment, however, Flow is still in its early days on Android, so you won’t get the same functionality as on iOS. You can't start new Flows from the app; the only workaround for the time being is to head to the official website to do it. On the other hand, you can keep an eye on all your Flows and get notifications whenever new information is available. The beta version of the Flow app is already published in the Google Play Store, but you need to register as a beta tester with Microsoft in order to download it. There’s still no word on when it could go live for all users, so for the moment, the beta flavor is all we get. As far as users with Windows phones are concerned, Microsoft doesn’t seem to be too interested in bringing Flow to its very own mobile operating system. The company’s Research and Garage units have been focusing more on Android and iOS apps lately, and although this might not be the best way to support its own platform, Microsoft seems very keen on investing in non-Windows services. 2016-07-18 08:45 Bogdan Popa

30 Moto Z Play Leaks in Image Showing USB Type-C Port The Moto Z Play will be less powerful than the other two, but it will still support MotoMods and thus get additional functionality from the modular accessories. The smartphone has been spotted in renders, on import/export listing website Zauba, and even in a benchmark test on AnTuTu. While previous leaks did provide information on the upcoming handset, an image has recently surfaced on +hellomotoHK and confirmed that the Moto Z Play will be coming with a 3.5mm headphone jack, as well as a USB Type-C port. It's worth noting that the Moto Z and Moto Z Force don't have 3.5mm headphone jacks, which would make the Z Play the only model in the series to keep one. The USB Type-C port, meanwhile, fits the direction many upcoming devices are taking, including Samsung's Galaxy Note 7. The Moto Z Play is believed to come with a 5.5-inch Full HD AMOLED display and run an octa-core Snapdragon 625 processor coupled with an Adreno 506 graphics processing unit. By comparison, the high-end smartphone in the Moto Z series, the Moto Z Force, boasts a Snapdragon 820 and a Quad HD display of the same size. The Moto Z Play could come in two variants, one with 3GB of RAM and 32GB of storage, while the other should have 2GB of RAM and 16GB of storage. In addition, the smartphone will have a fingerprint sensor and a 16MP rear camera with dual-tone LED flash, PDAF, and OIS. Moreover, it will draw power from a 3,500mAh battery and run Android 6.0 Marshmallow. At this point, there's no word on when Motorola will finally unveil the Moto Z Play, but more information should surface soon. 2016-07-18 08:25 Alexandra Vaidos

31 EU Data Protection Law May End The Unknowable Algorithm Europe's data protection rules have established a "right to be forgotten," to the consternation of technology companies like Google that have built businesses on computational memory. The rules also outline a "right to explanation," by which people can seek clarification about algorithmic decisions that affect them. In a paper published last month, Bryce Goodman, Clarendon Scholar at the Oxford Internet Institute, and Seth Flaxman, a post-doctoral researcher in Oxford's Department of Statistics, describe the challenges this right poses to businesses and the opportunities it presents to machine learning researchers to design algorithms that are open to evaluation and scrutiny. The rationale for requiring companies to explain their algorithms is to avoid unlawful discrimination. In his 2015 book The Black Box Society, University of Maryland law professor Frank Pasquale describes the problem with opaque programming. "Credit raters, search engines, major banks, and the TSA take in data about us and convert it into scores, rankings, risk calculations, and watch lists with vitally important consequences," Pasquale wrote. "But the proprietary algorithms by which they do so are immune from scrutiny." Several academic studies have already explored the potential for algorithmic discrimination. A 2015 study by researchers at Carnegie Mellon University, for example, found that Google showed ads for high-income jobs to men more frequently than to women. That's not to say Google did so intentionally. But as other researchers have suggested, algorithmic discrimination can be an unintended consequence of reliance on inaccurate or biased data. Google did not immediately respond to a request to discuss whether it changed its advertising algorithm in response to the research findings. A 2014 paper from the Data & Society Research Institute echoes the finding that inappropriate algorithmic bias tends to be inadvertent. It states, "Although most companies do not intentionally engage in discriminatory hiring practices (particularly on the basis of protected classes), their reliance on automated systems, algorithms, and existing networks systematically benefits some at the expense of others, often without employers even recognizing the biases of such mechanisms." Between Europe's General Data Protection Regulation (GDPR), which is scheduled to take effect in 2018, and existing regulations, companies would do well to pay more attention to the way they implement algorithms and machine learning. But adhering to the rules won't necessarily be easy, according to Goodman and Flaxman. They note that excluding sensitive data having to do with race or religion, for example, doesn't necessarily mean algorithms will return non-biased results. That's because other, non-sensitive data points, like geographic area of residence, may have some correlation with sensitive data. What's more, the researchers observe that many large data sets are the product of multiple smaller data sets, making it difficult if not impossible for organizations to vouch for the integrity, accuracy, and neutrality of their data. "The GDPR thus presents us with a dilemma with two horns: Under one interpretation the non-discrimination requirement is ineffective, under the other it is infeasible," write Goodman and Flaxman. In a phone interview, Lokke Moerel, senior of counsel at Morrison & Foerster, said the provision on automated decision making is not new.
Also under the current Directive (the data protection rules in force until the GDPR takes effect), companies have to inform individuals about the underlying logic involved in their automated decisions. Moerel acknowledged the difficulties of the rules, noting that in an era where algorithms are dynamic and self-learning, it's very difficult to know how an algorithm made a decision at any point in time, let alone communicate this to an individual in a meaningful manner. If the logic is incomprehensible to the vast majority of people, the question becomes: what is the added value of providing this information in the first place? Moerel said she found it troubling that algorithms can end up being discriminatory through data correlation. As an example, she noted that an insurance company charging higher premiums in a certain region because of higher accident rates could end up discriminating against a specific ethnic group that happens to live in that area. She also suggested there's a risk that companies may try to hide such discriminatory correlations by performing further analytics and finding other non-sensitive correlations that they know are correlated with the sensitive data. Requiring the disclosure of algorithmic logic guards against such action, she said. In order to avoid being questioned about algorithmic logic, Moerel suggested companies give individuals affected by their decisions more control over the implications of how data are used (e.g., by giving them control over their ad preferences, whereby they can view and adjust the indicators that triggered a given advertisement). "It will help to avoid individuals questioning your logic if you give them control of the triggers that matter to them," she said. "If people are looking at a black box, it won't be acceptable for European regulators." Goodman and Flaxman say that work is already underway to make algorithms more easily subject to inspection. And they remain optimistic that technical code can coexist with the legal code. "We believe that, properly applied, algorithms can not only make more accurate predictions, but offer increased transparency and fairness over their human counterparts," they conclude. 2016-07-18 08:06 Thomas Claburn
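To make the proxy-variable problem concrete, here is a minimal synthetic illustration in Python: the decision rule never sees the sensitive attribute, yet a correlated feature (postcode) reproduces the disparity anyway. All names and numbers below are invented for illustration.

    # Synthetic illustration of proxy discrimination. The approval rule uses
    # only a non-sensitive feature (postcode), but because postcode is
    # correlated with group membership, outcomes still differ by group.
    import random

    random.seed(0)

    population = []
    for _ in range(10000):
        group = random.choice(["A", "B"])  # sensitive attribute, never used below
        # Residence correlates with group membership -- the proxy:
        postcode = "X1" if random.random() < (0.8 if group == "A" else 0.2) else "Y2"
        population.append((group, postcode))

    def approve(postcode):
        # A "neutral" rule that consults only the non-sensitive feature.
        return postcode == "X1"

    for g in ("A", "B"):
        members = [pc for grp, pc in population if grp == g]
        rate = sum(approve(pc) for pc in members) / len(members)
        print("group", g, "approval rate: {:.0%}".format(rate))
    # Prints roughly 80% for group A and 20% for group B -- disparate
    # outcomes despite never consulting the sensitive attribute.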

32 Watch Ben Heck tear down the ultra-rare Nintendo PlayStation prototype A little over a year ago, the retro gaming community was treated to the discovery of an ultra-rare Nintendo PlayStation. The unreleased machine, the result of a partnership between Nintendo and Sony that unfortunately went sour, was proven to be legitimate a few months later, although the optical disc drive wasn’t functional. That didn’t stop the homebrew community from creating the first-ever game for the prototype – Super Boss Gaiden – but the mystery surrounding the optical drive remains, as it would be amazing to see everything up and running as intended. That’s where console modder Ben Heck comes in. The owners of the prototype recently got in touch with Heck and arranged to have him conduct a teardown. At some point between late last year and now, something broke, as the machine is no longer putting out a video signal or sound. Found is a TechSpot feature where we share clever, funny or otherwise interesting stuff from around the web. 2016-07-18 07:30 Shawn Knight

33 Microsoft Develops Intelligent Camera App for Apple’s iPhone The latest such app is called Microsoft Pix and is described as an intelligent camera app for the iPhone, offering a set of features that are supposed to help you take the best shot at any given moment. It has happened to all of us: you try to take a photo of something but miss the right moment because it took too long to unlock the phone, open the camera app and focus on the subject. Microsoft’s new Pix app for the iPhone aims to eliminate all of these delays. Microsoft Pix automatically shoots photos in burst both before and after you press the shutter button, in an attempt to help you capture the perfect moment. Furthermore, the app adjusts ISO and exposure from the second it launches, and the focus is always on the faces of the people who are part of the photo. The app can also create a live image automatically, and its description explains how this works: “When Microsoft Pix detects interesting motion in your shot, it automatically stitches together the burst frames into a short looping video called a Live Image. When people are detected, it optimizes the Live Image for their faces, stabilizing the video around them. Unlike Apple Live Photos, it only creates Live Images when it senses interesting movement, so it won’t waste storage and doesn’t need to be turned on/off in advance.” The app is created by Microsoft Research, and the download link doesn’t seem to be up just yet, but iPhone users should be able to get it from the store in the coming days. 2016-07-18 07:10 Bogdan Popa
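The "burst before you press the button" behaviour described above is typically built on a rolling pre-capture buffer: the camera pipeline keeps the most recent frames in memory, and the shutter press freezes that window plus a few frames after it. The Python sketch below illustrates the general technique, not Microsoft's actual implementation; the frame counts are arbitrary.

    # Conceptual pre/post shutter buffering; frames are represented by index.
    from collections import deque

    PRE_FRAMES = 5    # frames retained from before the shutter press
    POST_FRAMES = 3   # frames captured after it

    def capture_burst(frame_stream, shutter_at):
        pre_buffer = deque(maxlen=PRE_FRAMES)  # rolling window of recent frames
        post = []
        for i, frame in enumerate(frame_stream):
            if i < shutter_at:
                pre_buffer.append(frame)       # keeps only the newest PRE_FRAMES
            else:
                post.append(frame)
                if len(post) == POST_FRAMES:
                    break
        return list(pre_buffer) + post         # candidates for best-shot selection

    # Shutter pressed at frame 100: the burst covers frames 95 through 102.
    print(capture_burst(iter(range(1000)), shutter_at=100))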

34 Nokia to Release Two Android Smartphones with Snapdragon 820 The report shows that the two devices will have the Nokia brand on the case but will be developed by HMD Global. Back in May, HMD Global announced that it had bought part of the smartphone division from Microsoft Mobile, the company that integrated Nokia's phone business after the 2014 acquisition. The purchase made it possible for HMD Global to develop smartphones under the Nokia brand. The two smartphones leaked in pictures, and it seems that they will come in different sizes, according to Nokiapoweruser. The smaller model will have a 5.2-inch display, while the larger one will sport a 5.5-inch screen. Both handsets will have a 2K display resolution and come with the Z Launcher System UI, which is based on the latest Android 7.0 Nougat. The latest Android OS version should be released in the coming months. Regarding specs, the Snapdragon 820 chipset will be featured on both smartphones, although the latest processor model from Qualcomm, the Snapdragon 821, has been released just recently. Nevertheless, the Android-based Nokia smartphones are expected to come with a full-metal body and have IP68 certification, which means the devices are protected against water and dust. Rumors also point in the direction of a 22.6MP rear camera on one of the smartphones, which could actually be called the Nokia P1. The two devices could launch in late 2016 or early next year, so more details on the upcoming Nokia smartphones running Android OS are bound to surface in the coming weeks. 2016-07-18 07:05 Alexandra Vaidos

35 Mandelbrot Fractal is a pure JavaScript fractal explorer Mandelbrot Fractal is an open-source fractal generator with a difference: its spectacular images are produced using pure JavaScript, no external libraries or other oddball dependencies involved. This makes for a very simple structure, essentially just an index.html with a supporting app.css and two .js files. The controls are straightforward, too, working much like any other Mandelbrot explorer you’ve ever used: click Launch to generate the starting image, then click any point on the screen to zoom in and redraw. If you’ve found something interesting, right-clicking displays an option to save the current view as an image. There are also hotkeys to reset the starting view, or close the program entirely. Check out the code and you’ll find the package is smarter than it looks, for example detecting and adapting its controls to a touch interface. This is still a very basic fractal explorer, of course, but it looks good, is easy to use, and you can freely amend and use it on your own website (check out the online version here). Mandelbrot Fractal is an open-source application which should run in any modern browser on any platform. 2016-07-18 06:53 By Mike
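For readers curious what sits under the hood of any such explorer, the core is the escape-time iteration z = z*z + c evaluated per pixel. A compact sketch of that algorithm follows; the app itself is JavaScript, so this Python version with an ASCII rendering is purely illustrative.

    # Escape-time Mandelbrot sketch: iterate z = z*z + c per pixel and colour
    # by how quickly |z| exceeds 2 (beyond which divergence is guaranteed).
    def mandelbrot(width=60, height=24, max_iter=40,
                   re=(-2.2, 0.8), im=(-1.2, 1.2)):
        palette = " .:-=+*#%@"
        rows = []
        for y in range(height):
            c_im = im[0] + (im[1] - im[0]) * y / (height - 1)
            row = ""
            for x in range(width):
                c_re = re[0] + (re[1] - re[0]) * x / (width - 1)
                z, c = complex(0, 0), complex(c_re, c_im)
                for n in range(max_iter):
                    z = z * z + c
                    if abs(z) > 2:
                        row += palette[min(n, len(palette) - 1)]
                        break
                else:
                    row += "@"  # never escaped: treated as inside the set
            rows.append(row)
        return "\n".join(rows)

    print(mandelbrot())

Zooming in, as the app does on a click, amounts to shrinking the re/im ranges around the chosen point and redrawing.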

36 Windows 10 Redstone 1 to Launch in Waves, Not Everyone Will Get It on August 2 The rollout of the Windows 10 Anniversary Update will take place in stages, so not everyone will receive the final build on August 2. Dona Sarkar, the head of the Windows Insider program, explained in the latest edition of Windows Weekly (via Neowin) that Microsoft expects some users to be disappointed that they’re not getting the Anniversary Update on August 2, but everyone needs to know that the whole process takes place gradually around the world. So while some devices will receive the Anniversary Update on August 2, others won’t, and the only option in this case is simply to wait. “It's going to take some time. We'll start with PC and phone, and it's going to be a global rollout. It's going to take time. Everyone's going to freak out wondering ‘where's my update?,’ ‘is it time yet?’ and ‘it didn't come.’ So it's going to take a little while to roll out to everybody,” Dona says. Insiders will be the first to receive the Windows 10 Anniversary Update, but for the moment, Microsoft hasn’t said anything about the final build version or the date when it will be released. What we know, however, is that we’re in the final development stages of the Anniversary Update, and if everything goes according to plan, the RTM build should be compiled in the next couple of weeks. Without a doubt, some users will be disappointed that the rollout takes place in stages, but this kind of makes sense for Microsoft, given that more than 350 million devices are already running the operating system. The Anniversary Update will be released for PCs and mobile phones powered by Windows 10. 2016-07-18 06:03 Bogdan Popa
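Staged rollouts of this kind are commonly implemented by deterministically bucketing each device into a percentage group, so a device's eligibility stays stable between checks and each wave simply raises the threshold. The Python sketch below shows that general technique; it is illustrative only, not Microsoft's actual mechanism.

    # Generic staged-rollout gating: hash each device ID into a stable
    # bucket 0-99; raising the percentage admits more devices per wave.
    import hashlib

    def bucket(device_id):
        digest = hashlib.sha256(device_id.encode()).hexdigest()
        return int(digest, 16) % 100

    def update_available(device_id, rollout_percent):
        return bucket(device_id) < rollout_percent

    # Wave 1 at 10%, later raised to 50%; earlier devices stay admitted.
    for pct in (10, 50):
        admitted = sum(update_available("device-%d" % i, pct) for i in range(1000))
        print("%d%% wave: %d/1000 devices offered the update" % (pct, admitted))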

37 Success of Pokemon GO adds impetus for change at Nintendo The phenomenal success of Pokemon GO and the surge in Nintendo Co's market value by $17 billion in just over a week have been seized upon by one of its most vocal investors to press for a change of strategy at the company. Until Pokemon GO, a mobile game, was launched just over a week ago, Nintendo had taken every opportunity to say its main focus was still gaming consoles, and games for smartphones were just a means to lure more people to them. But the success of Pokemon GO - unforeseen even by its creators - has shown the potential for augmented reality and for Nintendo to capitalise on a line-up of popular characters ranging from Zelda to Super Mario. Seth Fischer, founder and chief investment officer at Oasis Management, is one of Asia's best known hedge fund managers and has long been a small but loud shareholder. Encouraged by the success of mobile games like "Candy Crush", he has campaigned for years for the Japanese console maker to develop and sell games for platforms run by Apple and Google. "I hope they will now understand the power of smartphones," Fischer told Reuters. "And as a result, I hope this means there is a whole change in strategy." "My next focus with Nintendo is for them to focus on monetizing the rest of their 4,000 patents for mobile gaming, multi-player gaming, et cetera. I think they could be making 30 to 60 billion yen ($290 million to $570 million) annually from licensing." Fischer has described Oasis as both an adviser to entities that own Nintendo shares and a shareholder itself. The fund's direct holding is not listed among the company's largest investors. The late Nintendo president Satoru Iwata cautioned last year against hoping for too much change at the company. The expansion into smartphone games was "not because we have lost our enthusiasm or prospects for the console business", he said at the time. A Nintendo spokesman, asked about its mobile strategy, said last week there were three main objectives: "To maximize exposure of Nintendo's intellectual properties to consumers, to make profits on mobile devices, and to create synergies with the console business." He did not comment further on Pokemon GO. Serkan Toto, founder of Tokyo-based game industry consultancy Kantan Games, said Nintendo still saw itself as a console maker. "When you sell $400 dedicated devices and you sell the gamer boxed software for $60 a piece - for them this is the gold standard," he said. "For them, mobile is the junk food: enjoy while you wait for the bus. It's not something that Nintendo sees for itself." Pokemon GO, however, has been a runaway success, marrying a classic 20-year-old franchise with augmented reality. Players walk around their neighborhoods in real life, searching out and capturing Pokemon cartoon characters on their smartphones. The game was created by Nintendo, Google spinoff Niantic, and Pokemon Company. Nintendo owns a third of Pokemon Company and both have undisclosed stakes in Niantic. Nintendo has not commented on next steps, which many speculate could now involve other favorite characters. The hardware-focused group had planned to introduce a device called Pokemon GO Plus, which could allow it to piggyback on the success of the mobile game. The device vibrates when a Pokemon character is nearby, enabling players to catch them without constant monitoring.
Pokemon GO is on track to be the first mobile game to break the $4 billion-per-year wall, beating out Candy Crush Saga and Supercell's Clash of Clans, according to Macquarie Research. But the impact on Nintendo's bottom line could be minimal because of shared ownership - as little as 3 percent of net profit in the year to next March. Niantic declined to comment on the future of its relationship with Nintendo, although it credited Pokemon's unique appeal for the game's success. "It's been wonderful to be able to combine our philosophy for these kinds of games with the powerful affinity that people have for Pokemon," Niantic CEO John Hanke told Reuters. But analysts say the craze signals the vast money-making opportunities available for Kyoto-based Nintendo - when it eventually brings out more serious hits. "Over the last decade they never compromised on the software side. That's why they'll blow everybody out of the water once they start taking iOS and Android more seriously than they do now," Toto said. "The successes of Pokemon Go will open the eyes of executives in Kyoto. This is unprecedented." There are no signs, however, that this will happen soon. Of the four mobile games that Nintendo has promised to launch this financial year through March, two are set to be Animal Crossing and Fire Emblem - no sign of Mario or Donkey Kong, at least not yet. 2016-07-18 03:02 CNBC

38 Health startup Lifesum raises $10M round led by Nokia Growth Partners What do you get if you combine the broad trends of smartphones, wearables, the Internet of Things, an individual desire for control and healthcare costs for society? You get VCs investing in health-tech startups, that’s what. And the latest evidence of this is Stockholm-based Lifesum raising a $10M funding round led by Nokia Growth Partners (NGP), with participation from Draper Esprit, Bauer Media Group and SparkLabs Global Ventures. Lifesum, which tracks what you eat and your exercise, says it now has 15 million users. That’s less than the 80 million users MyFitnessPal had when it was acquired by athletic apparel maker Under Armour in February 2015. But Lifesum is aiming at doing more than tracking what you had for breakfast. It is now seeking partnerships with organisations in other sectors, including food, fitness, healthcare, DNA and pharmaceuticals. For instance, this year Lifesum launched a partnership with food and juice bar Crussh in the UK, providing user data that revealed nutritional deficiencies in London; the data was then used to create tailor-made juices to provide the nutrients that locals were lacking. Henrik Torstensson, CEO of Lifesum, says the cash will be used to expand globally, especially in the US: “I especially consider the NGP Silicon Valley office to be a resource that we can start leveraging immediately since the US is becoming more and more significant to Lifesum.” NGP is invested in GetYourGuide, language-learning app Babbel, mobile game and app developer MAG Interactive, and WorkFusion, amongst others. Walter Masalin, who leads NGP’s investment in Lifesum, says: “Digital health is still at an early stage, but with the current exponential development in mobile technology and IoT, we will soon see rapidly increased empowerment of the individual.” Vishal Gulati, Partner at Draper Esprit, comments that they were attracted by the “digital technology hotbed of Scandinavia.” Lifesum is the market-leading health app in Scandinavia, Germany, France, Italy and Russia, and has been featured in Apple Keynote events and as one of 40 apps to launch with the Apple Watch, as well as being a launch partner for Google Fit and S2. 2016-07-18 00:00 Mike Butcher

39 UK government warned to act fast on Brexit risks A UK parliamentary committee looking at issues pertaining to the digital economy has warned of multiple risks in the wake of last month’s referendum vote to leave the European Union. The committee is also urging the government to set out key objectives for regulating what it dubs “disruptive change” — urging a focus on promoting productivity, innovation, and customer choice and protection, and suggesting that users of online platforms should be more involved in solutions to improve compliance with existing regulations. Brexit risks In a report published today, the Business, Innovation and Skills Committee urges the government to act quickly in the wake of the referendum vote — citing multiple risks to the domestic tech industry, including ongoing access to skills and the risk of a post-Brexit brain drain; the UK’s fintech sector dominance being eroded if it loses access to the single market in financial regulation; and investor confidence ebbing and businesses relocating to other European countries if the UK is outside the European digital single market. “The digital sector relies on skilled workforce from the European Union, and those individuals’ rights to remain in the country must be addressed, and at the earliest opportunity,” the committee notes in its report. “The Government needs to provide clarity surrounding skills, post referendum, otherwise skills and talent will be lost to other countries,” it warns. “We could have led on the Digital Single Market, but instead we will be having to follow,” the committee writes, recommending that the government address the issue of whether businesses will be able to access the European Single Digital Market, “if they want to do so”. “The Government must address this situation as soon as possible, to stop investor confidence further draining away, with firms relocating into other countries in Europe, to take advantage of the Digital Single Market,” it adds. The committee also urges the government to explain how its forthcoming Digital Strategy will be affected by the referendum result, and suggests the document set out a list of “specific, current EU negotiations relating to the digital economy”. The Digital Strategy has been delayed for more than six months, with the government also putting out a call for public and industry contributions in December — a move the committee queries, asking the government to explain why it did this. Following that call the minister for digital issues, Ed Vaizey, said the government would delay the publication of the strategy until after the EU referendum vote. In the event, Vaizey himself is now a casualty of Brexit, with the new Prime Minister, Theresa May, reshuffling her top team and replacing him with Matt Hancock. The committee is also asking for clarity on how the strategy has changed since it was originally drafted. (It previously took evidence from Vaizey on this but appears to have been given only a rather sparse explanation of the strategy’s contents at that point.) “While the Government is supporting the digital economy, including support of Innovate UK, Tech City and Tech North, there is no overall strategy for this support,” the committee writes.
“We hope that the digital strategy will provide an overview of present and future Government policy on the digital economy, which will be published as soon as possible, and in its reply the Government must provide us with an update of any changes made to the strategy since it was originally written.” “The Digital Strategy must address head on the status of digitally-skilled workers from the European Union who currently work in the UK. The digital sector relies on skilled workforce from the European Union, and those individuals’ rights to remain in the country must be addressed, and at the earliest opportunity,” it adds. However, given the ongoing fallout from Brexit, it seems safe to expect further delays to the publication of the government’s Digital Strategy. After all, the new minister for digital issues has only been in post since Friday. Regulating disruption On the challenge of regulating disruptive businesses, the committee writes that public policy needs to be future-proofed “as far as possible, to ensure that the need for constant regulatory reform is minimised”. Specifically, it recommends regulation based on “agreed principles”, with a focus on the consumer interests of quality, choice, cost and safety. The committee notes the UK government has generally, up to now, taken a hands-off approach to regulating tech platforms — an approach it says it supports, noting this is also in line with an earlier House of Lords report on online platforms (which judged that no new regulation is needed to govern the operation of online platforms). It further suggests that regulation to ensure “reasonable protection” for workers supplying labour to online platforms should be “either given or offered” (emphasis theirs) — suggesting tacit support for some form of opt-out options for employment rights where there is operational pressure stemming from online platform business models. The committee’s preferred phrase here is “reasonable employment conditions”. “We agree that regulation should ensure that reasonable protection is either given or offered to individuals working in or using business models based on digital or disruptive technologies,” it writes, adding: “It is right, for example, that customers have clear evidence and reassurance that Uber drivers and their cars have been checked fully, and that accommodation booked through Airbnb has adequate insurance.” On regulatory compliance, the committee suggests online platform feedback mechanisms could be explored as a route for ensuring businesses and their users comply with existing regulations — noting for example that some Airbnb hosts are flouting planning rules in London that restrict the renting of homes for more than 90 days per year, and also pointing to the problem of professional landlords operating multiple properties on Airbnb. It also believes this route could help regulate conditions for workers supplying labour to online platforms. “A more collaborative approach to regulation, involving users, should be explored by the Government. Digital platforms (the software or hardware of a site) could themselves become key players in the regulatory framework, required to ensure that users are complying with current regulations, and that workers using the platforms have reasonable employment conditions and are not vulnerable to exploitation,” says the committee. “We believe that platforms should have greater responsibility in ensuring that regulatory requirements are adhered to.
Given that they have the technology at their disposal, this should not be an onerous responsibility,” it adds. 2016-07-18 00:00 Natasha Lomas

40 MarketInvoice, the UK invoice finance platform, raises another £7.2M MarketInvoice, which plays in the peer-to-peer lending space by enabling U.K. businesses to raise money from institutional investors and high net worth individuals by ‘selling’ outstanding invoices, has raised a further £7.2 million. The round is led by MCI Capital, a listed Polish private equity group, via its MCI.TechVentures Fund. Existing backer Northzone also participated, while the new funding brings total investment in MarketInvoice to just over £18 million. MarketInvoice says the additional capital will be used to consolidate its claimed position as the leader in invoice financing in the U.K., and for product development. It’s also talking up two new hires, adding to its 100-strong team based in London’s Shoreditch. Lisa Gervis (formerly of Sequoia-backed Elevate Credit and American Express) has joined as Chief Marketing Officer, and Rupert Thorp (formerly of Experian and Sky IQ) is the startup’s new Director of Sales. MarketInvoice’s invoice finance offering works as follows: invoice buyers — such as hedge funds, asset managers, family offices, and high net worth individuals — bid in a real-time auction to determine how much of an invoice’s value they will provide as capital, minus their cut. MarketInvoice itself then makes money by taking a small cut too. The overall premise of MarketInvoice is to enable businesses (SMEs to larger companies) to free up money owed before an outstanding invoice is paid, thus providing much needed liquidity. In turn, it gives investors a new asset class. Invoice finance and other forms of P2P lending play into a narrative that has seen banks reluctant to lend to small and medium-sized businesses and interest rates at a historic low. To that end, in August 2013 the British government stepped in via its British Business Bank initiative to invest £5 million through MarketInvoice’s platform (in addition to other P2P lenders), as part of a wider bid to pump liquidity into an otherwise stagnant credit market for SMEs. 2016-07-18 00:00 Steve O'Hear

41 Opera renegotiates its $1.2B sale down to $600M for its browsers, privacy apps, Chinese JV Some more developments over at Opera, the browser company based out of Norway. The company announced that an offer to acquire the company for $1.2 billion has now been terminated, and in the meantime, the deal has been renegotiated: the same group will now pay $600 million to acquire only certain parts of Opera’s business. Opera will sell the Qihoo 360-led consortium its mobile and desktop browser operations, its performance and privacy apps, its tech licensing business (not including Opera TV), and Opera’s 29 percent stake in the Chinese JV nHorizon. Opera’s remaining business that is not part of the sale will include Opera Mediaworks, Opera Apps & Games (including Bemobi) and Opera TV, along with about 560 employees. As of Q1, Opera had 1,669 employees in its full operation. The Opera name and trademark will go with the deal, and the remaining company has some 18 months to find a new name, a company spokesperson told me. The new deal has already been approved by Opera’s board. The news today caps off a difficult time at Opera. The company — which competes against the likes of Google and others in browsers, advertising, and related services — has been looking for an exit for years (at one point, Facebook was among those rumored to be interested but that never came to anything), but in the end, the deal that was struck last February did not manage to get regulatory approvals (although shareholders supported it). “We all tried very hard to close the public offer and are naturally disappointed that we were unsuccessful. However, we believe that the new deal is very good for Opera employees and Opera shareholders,” said Opera’s CEO, Lars Boilesen. (Boilesen had been outspoken about the original deal being struck by shareholders without much buy-in from Opera’s staff. Some believed the original deal undervalued Opera.) “The Consumer part has good fit with objectives and strategy of Consortium, and will become part of ecosystem with substantial investment capacity. For Opera shareholders we are selling approximately ¼ of the company for $600m, which is an attractive price for this part of our business.” Boilesen will serve as CEO for both Opera and the Consumer Business until December 31, 2016. “After this date, Lars will no longer hold the role as CEO for the Consumer Business, and will be solely dedicated to Opera,” the company said in its statement. Although Opera’s most public face is its mobile browser business (augmented more recently by its performance and privacy apps), on a financial level, this deal appears more lucrative for the Norwegian company and its investors. The parts of the business that Opera is keeping represented more than two-thirds of the company’s revenues in 2015, with sales of $467 million and adjusted Ebitda of $74 million (full-company revenues for that year were $616 million, with adjusted Ebitda of $108 million). “Opera estimates that in 2016, the same three remaining business units will deliver revenues of $570-605 million (+22% to +30%) and an adj. EBITDA of $75-90 million (+2% to +22%),” the company said today. That part of the business is due to be reorganized in the wake of this deal, Opera said. One big question I have is how the advertising business will be structured. While services like Opera’s browsers may not have generated much revenue, they were also the basis of a lot of advertising inventory for the Mediaworks division.
Update: “The reorganisation is linked to the business being sold, and has nothing to do with Mediaworks. A very minor part of Mediaworks revenues was linked to the Opera browser business,” a spokesperson tells me. The original, $1.2 billion deal had a deadline of July 15 to close, but it didn’t make the cut after failing to get regulatory approvals. This new deal now has a “drop-dead date” of October 31, 2016, with an automatic extension to December 31 if the two sides fail to get everything completed. The fact that the deal has a more flexible deadline date is a sign of how Opera is more willing to negotiate and look for a solution than the first time around. There are also break fees if this one doesn’t go through: specifically $100 million from Golden Brick Capital (the name of the consortium, which is backed by Kunlun Tech Limited, Future Holding L.P., Keeneyes Future Holding Inc, Qifei International Development Co. Limited, Golden Brick Capital Private Equity Fund I L.P., Beijing Kunlun Tech Co. Ltd., Qihoo 360 Software (Beijing) Co. Ltd., and Golden Brick Silk Road Fund Management (Shenzhen) LLP) if they fail to close the deal, but only $40 million if the holdup is related to regulatory issues. The new transaction is expected to close during the second half of 3Q 2016, Opera said. 2016-07-18 00:00 Ingrid Lunden

42 Can we protect against computers being fingerprinted? People's online browser "fingerprints" are left behind at each location they visit with their internet browser. Almost like a regular fingerprint, a person's browser fingerprint – or "browserprint" – is often unique to the individual. Such a fingerprint can be monitored, tracked and identified by companies and hackers. Researchers at the University of Adelaide are working to find new methods of protecting against the fingerprinting of personal computers – and are now giving members of the community the chance to see firsthand their own computer browserprint. "Fingerprinting on computers is invisible to most people but there are companies out there who are already using these techniques to learn more information about individuals, about their interests and their habits," says Lachlan Kang, a Computer Science PhD student who is conducting this study as part of a wider project on privacy, within the University's Schools of Computer Science and Mathematical Sciences. "This can be quite powerful information to have, especially if it's used to tailor advertising to you. In countries that are less benign than ours, it could also be used to spy on people," he says. "Computer users generally are growing in awareness of privacy issues, but currently there's little that can be done to counter fingerprinting. This is because fingerprints build up in between the websites you're visiting – your browsing history and personal information can be pooled in the gaps between those websites. Simply clearing your browsing history won't make any difference to this, because the information is already out there." Mr Kang is seeking the public's help to better understand which fingerprinting techniques are the most powerful, so that he can help to build defences against them. "Eventually we hope that people will be able to protect themselves from being fingerprinted, or tracked without their consent. But in order to do this, we need to analyse a large number of online fingerprints – as many as 10,000 of them would be helpful. Currently we have 2500, which is a great start," he says. "No personal information will be retained for our project. We're simply looking for the data, which will be rendered anonymous for ethical reasons." Explore further: Several top websites use device fingerprinting to secretly track users More information: For more information or to see your own browserprint, visit: browserprint.info 2016-07-18 00:00 phys.org

43 What Is Maven? The goal of this article is to provide a good overview of Apache Maven. To answer the question, "What is Maven?," we need to cover several topics, but for starters, let's say that Maven is a great tool that can sustain, support, and assist all stages involved in software development. Apache Maven is ready to serve you for creating a project from scratch, building, testing, reporting, documenting, guiding, ensuring code quality, transparent migration, project modularization, deployment, centralized remote repositories, and so forth, as a complex and comprehensive tool dedicated to software project management. Maven is an important "player" in Agile team collaboration. Downloading Apache Maven is a very simple task that can be accomplished via the download page. Independently of your operating system, just pick up the desired archive (ZIP or TAR.GZ): Figure 1: Download Apache Maven For testing the examples from this article, we have downloaded Apache Maven 3.3.9. The next step is also very simple. Extract the archive content to a convenient location on your computer and try to obtain a path without white-space characters (for example, in Windows, D:\Maven (used in this article) or D:\tools\Maven-3.3.9). To work properly, Apache Maven needs the Java Development Kit (JDK). For example, Apache Maven 3.3.9 requires JDK 1.7 or above to execute. In case you don't know whether you have the JDK installed, open a command prompt and type java -version. In Figure 2, you can see this command under Windows, but you can use it under Linux or Mac also: Figure 2: Check JDK presence If the JDK is not installed, please follow these instructions. It is mandatory to successfully accomplish this step before going further. So, take your time and pay attention to this step. To start using Maven, you need to set a few environment variables as follows: PATH, M2_HOME, and MAVEN_HOME. M2_HOME and MAVEN_HOME must point to the Maven installation folder (for example, D:\Maven) and the PATH must point to the installation's /bin subfolder (for example, D:\Maven\bin). This is pretty simple to achieve and depends on your OS, but here are some hints. On Windows, you can accomplish this via the Environment Variables wizard available in Control Panel | System and Security | System | Environment Variables. Figure 3: Setting PATH, M2_HOME, and MAVEN_HOME in Windows On Linux, you need to export the PATH, MAVEN_HOME, and M2_HOME environment variables in the .bashrc file. This file can be found in the user's home folder, which for me is /home/leonard/.bashrc: On a Mac, you need to export the PATH, MAVEN_HOME, and M2_HOME environment variables in the .bash_login: After you download and install Maven, it is a good practice to ensure that Maven is ready to go. For this, independent of the OS, you can execute mvn -version in a command-line terminal window. In Figure 4, you can see this under Windows: Figure 4: Verify Maven installation So, if you see output like in Figure 4, Maven is working as expected. From this point forward, the presented examples will consider Windows as the OS. Maven is basically a suite of plug-ins that work together or separately (plug-in centric) to accomplish different kinds of tasks. For example, Maven allows us to create a Java project from Maven archetypes (templates) or from a pom.xml file.
Among Maven archetypes we have: An archetype to generate a sample Maven project: maven-archetype-quickstart An archetype to generate a simplified sample J2EE application: maven-archetype-j2ee-simple An archetype to generate a sample Maven Web application project: maven-archetype-webapp ... To create a new Maven project from an archetype, we can use the command mvn archetype:generate -DarchetypeArtifactId=archetype_artifactId (replace archetype_artifactId with the artifactId of the desired archetype). Or, if you also need to specify the groupId, add -DarchetypeGroupId=archetype_groupId. We also can choose the desired archetype interactively during project creation by running mvn archetype:generate with no arguments. Okay, so let's try the mvn archetype:generate for creating (generating) a quickstart Java SE project. Navigate from the command line to a convenient location (where you want to store the project) and type the preceding command. At the beginning, you see something like in Figure 5: Figure 5: Create first Maven project The first thing you must specify is the archetype that you intend to use for this project. Before Maven prompts you to do that, you will see a very long list with the available archetypes (1500+). Each archetype is listed with a number (identifier), a name, and a short description meant to provide a hint about its purpose. By default, Maven will suggest that you choose the maven-archetype-quickstart archetype, which is identified by the number 784. We have cropped the relevant part in Figure 6: Figure 6: Choose the desired archetype Because we want to go for maven-archetype-quickstart, simply press Enter. Furthermore, Maven will list the available versions for the selected archetype and suggest that you use the latest version. Again, you can just press Enter to go with the latest version: Figure 7: Choose the archetype version After several downloads, you will be prompted to provide the project "landmarks" (known as Maven coordinates) such as groupId, artifactId, version, and package. For groupId, we have specified org.quickstart; for artifactId, we have specified quickstart; for version, we have specified 1.0; and for package, we have specified the same as for groupId: Figure 8: Finalization of project creation After we provide the Maven coordinates, the project is successfully created and is ready to go. In Figure 9, you can see what Maven has generated for us: Figure 9: Maven-generated Java SE project Another common type of project generated via Maven archetypes is the Java Web/enterprise project. This time, let's create a Java EE 7 starter project for JBoss WildFly by indicating the archetype groupId and artifactId right from the command line (the number of this archetype in the list is 1543): Figure 10: Generate a Java Web project Take your time to check the generated project. As you will see, it has a significant number of folders and files. Compiling and testing the project are two of the main objectives that must be successfully accomplished to deliver a working software package. So, let's see how we can do that via Maven. To compile a Maven project, we need to execute from a command-line terminal window the following command: mvn compile (by default, Maven supports several goals and compile is one of them; among the most popular we have: clean, package, deploy, test, and install). So, let's compile our Java SE and web project; notice that we execute this command from the project folder: Figure 11: Compile the Java SE project The result of compilation (Java classes) can be seen in the /quickstart/target/classes folder.
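For reference, the entry point that the quickstart archetype generates (the App class visible in Figure 9) is, by convention, a minimal hello-world program; with the coordinates chosen above it looks roughly like this:

    package org.quickstart;

    /**
     * Hello world! The class generated by maven-archetype-quickstart.
     */
    public class App {
        public static void main(String[] args) {
            System.out.println("Hello World!");
        }
    }

After mvn compile, this is the class that ends up under target/classes as org/quickstart/App.class.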
Now, let's compile the Java EE 7 application: Figure 12: Compile the Java Web project The result of compilation (Java classes) can be seen in the /javaee7.quickstart/target/classes folder. Another part of the default build lifecycle is represented by the capability of testing the project. To sustain a TDD (Test Driven Development) pattern, Maven provides the test feature via the mvn test command. Via this command, we will execute the tests located in the src/test folder. For our Java SE application, we have a single test named AppTest (reproduced below): Figure 13: Testing the Java SE application We should do the same for our Java EE 7 application. In this case, the test is also located in the src/test folder and it is named MemberRegistrationTest. But this test is not as easy as the one for the Java SE app, so you must have some patience and read further. Later on, you will see this test also. Basically, the presence of a pom.xml file in a project is a clear indicator that this is a Maven project. Maven knows this file as the XML representation of the project. This file may contain a lot of information, such as metadata, dependencies, configurations, project organization, paths, profiles, and so forth. If you check the two projects, you can notice that both of them contain a pom.xml file in the root. Although the pom.xml file for the Java SE application is pretty short and simple, the pom.xml file for the Java EE 7 application is quite big and complex. So, let's see the structure of a POM file: Let's have a few words about each section and let's try to identify some of them in the pom.xml of the Java EE 7 application: In this part, we have the project coordinates, dependencies, dependency management, and inheritance details. Moreover, it may contain modules and some project-level properties. Take your time and try to dissect this part extracted from the Java EE 7 application pom.xml file; don't hesitate to carefully read the comments because it is a good way of learning by example: This section is reserved for build details, such as your project's directory structure and plug-in management. For example, our pom.xml study case declares here the WildFly plug-in for deploying the application WAR to a local WildFly container: This section is useful for providing in-depth details about licenses, organization, developers, and contributors. In our case study, this section is like this: In this section, we have all information regarding the environment, including issue management (defect tracking system), continuous integration management, mailing lists, SCM (Software Configuration Management), prerequisites, repositories, profiles, and so on. In our case study, we have several profiles: A detailed POM explanation can be found here. There are three built-in build lifecycles: default, clean, and site. When we refer to a Maven build lifecycle, we refer to a suite of phases that Maven takes into account to build a project. Even though phases such as compile, test, or deploy are pretty well-known, you have to know that Maven actually has eight "major" phases, as shown in Figure 14 (there is also a set of "minor" phases not listed here): Figure 14: Maven build lifecycle The clean process is responsible for cleaning the project of files generated at compilation, additional files, and so forth. This is accomplished in three "minor" phases: The site process is able to generate and deploy the project's site documentation. It has four "minor" phases: Let's see several common commands that involve the build lifecycle phases.
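Before looking at those commands, here for reference is the AppTest mentioned earlier. It is the trivial JUnit test that the quickstart archetype generates; this is a reconstruction, assuming the JUnit 3 style used by archetypes of this vintage:

    package org.quickstart;

    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    /**
     * Unit test for the simple App; mvn test runs it via the Surefire plug-in.
     */
    public class AppTest extends TestCase {
        public AppTest(String testName) {
            super(testName);
        }

        public static Test suite() {
            return new TestSuite(AppTest.class);
        }

        public void testApp() {
            assertTrue(true);   // a placeholder assertion that always passes
        }
    }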
In the section Compile and test a project, you already saw mvn compile and mvn test at work, so let's see some more: Figure 15: Cleaning the Java EE 7 project Figure 16: Packaging the Java EE 7 project into a WAR Here, we must have a little discussion because this command is trickier. If you try this command for the Java EE 7 application, you will receive an error like in Figure 17: Figure 17: Trying to deploy the Java EE 7 application To understand this idea, you must know a few additional things. First, we must distinguish between repository deployment and application server deployment. The default deploy phase of Maven considers repository deployment, which means that you try to deploy your artifact (application) to a remote repository (release or snapshot) location such as Nexus, Artifactory, or even Maven Central. Basically, we try to use the Maven Deploy Plugin, which has two goals (known as Mojos). The first one is deploy:deploy (the deploy phase of the build lifecycle is implemented via this Mojo), and the second one is deploy:deploy-file (not discussed here). Now, we will focus on the first Mojo, deploy:deploy. To work, this Mojo needs a <distributionManagement> section in pom.xml. This section should supply at least a <repository> element defining the remote repository location for your artifact. When your repository also needs authentication, you need a <server> entry in your settings.xml file (this file can be found in the MAVEN_HOME/conf folder). So, our error message means that we never specified a remote repository. Now, in terms of Java EE, by deploy, we understand application server deployment. This means that some application server (for example, WildFly, GlassFish, Tomcat, and so on) will be aware of our application and it will expose it for usage in a browser. So, asking Maven to deploy an application will not result in its application server deployment. To accomplish application server deployment, you need a special plugin, such as the WildFly Maven Plugin. This plugin comes with several goals, such as wildfly:deploy, wildfly:run, and the like. Let's suppose that we want to deploy our Java EE 7 application on a WildFly application server. We can accomplish this very easily because our pom.xml comes with this plugin configured. This is normal, because we have generated the project from a WildFly dedicated Maven archetype. Here is the plugin: This means that we can use the goals of the WildFly Maven Plugin out-of-the-box. For example, if you want Maven to download WildFly, start it, and deploy our Java EE 7 application, you can simply execute the mvn wildfly:run command. Just ensure that the default WildFly ports are free (8080 and 9990). When Maven finishes the job, you will be able to see the application at http://localhost:8080/javaee7.quickstart: Figure 18: Deploying our Java EE 7 application on WildFly application server via Maven If you already have WildFly installed, you can start it easily and execute the mvn wildfly:deploy command. This will deploy the application on your WildFly. Check out the rest of the goals here. Among the advantages provided by Maven, we have project portability. Maven tries to keep projects independent (portable) of filesystem references via the local repository and pom.xml configurations. When portability is affected (which means that independence from the filesystem cannot be achieved), Maven provides a solution materialized in profiles. Profiles are defined in pom.xml and they are demarcated by the <profiles> tag.
Profiles can be triggered: through the command line using the -P option, via Maven settings, or via environment-specific triggers. If we take a closer look at the pom.xml of our Java EE 7 application, we notice several profiles, as follows (please read the comments because they help you to understand what each profile does): As you can see, all the above profiles are dedicated to testing the application. For example, let's try to test our Java EE 7 application via mvn clean test -Parq-wildfly-remote. First, download, install, and start JBoss WildFly 10.0.0 with default settings (ports 8080 and 9990) on your computer. You can easily start it from the command line by executing the standalone batch file: Figure 19: Running standalone WildFly from the command line Furthermore, open a new command prompt and execute mvn clean test -Parq-wildfly-remote. You may notice that Maven will deploy the application on WildFly, execute the tests, and undeploy the application: Figure 20: Execute tests You can read more about Maven profiles here. As you saw in the previous sections, Maven builds a project by executing several phases of the build lifecycle. You know that, to test an application, you can sequentially execute the following commands: Although these are individual commands executed one by one, Maven allows us to automate the process by executing only the last of the phases (we choose which one is to be the last). When you specify a certain phase, Maven will execute every build phase that comes before the called build phase. For example, if you execute the test phase, Maven will execute all build phases before it. This means that instead of executing these commands one by one, we can simply execute only the last one, as mvn test. Maven will execute validate and compile first as a result of this automation. Or, if we execute mvn deploy, Maven will execute all build phases. Maven project modularization, or multi-modular projects, consists of a Parent Project that contains Child Projects known as Modules. Practically, this is a technique of organizing large projects that can be divided into sub-modules. The Parent Project contains a POM file that defines the contained sub-modules. Each sub-module has its own POM, also. Check Figure 21: Figure 21: Maven modularization To provide an example, we start by creating a new pom.xml file in the main application folder. We name this folder MyBigApp and the pom.xml looks like the following; it's important to notice the packaging set to pom: Further, navigate from the command line to the MyBigApp folder and generate the exact same projects as we did earlier in this article. First, the Java SE quickstart: Figure 22: Generate the Java SE quickstart application Next, create the Java EE 7 quickstart application: Figure 23: Generate the Java EE 7 quickstart application After the job is done, take a look into the main pom.xml file and notice the <modules> section: Now, you can compile the entire project from the MyBigApp folder via mvn compile: Figure 24: Compile a project based on multiple modules Probably the best-known feature of Maven is dependency management (managing the current project's dependencies on other libraries, projects, and tools). Since version 2, Maven provides transitive dependencies, which means that Maven will automatically discover artifacts that your dependencies require. In the POM, the dependencies section is demarcated by the <dependencies> tag and each dependency is demarcated by the <dependency> tag.
Essentially, the code skeleton consists of a <dependencies> element that contains one <dependency> element per needed artifact, each identified by its groupId, artifactId, and version. There are six dependency scopes, as follows (the scope is specified in the <scope> tag): compile, provided, runtime, test, system, and import. The dependency plug-in is a very useful tool. As you can see here, this plug-in comes with a significant number of goals. Among these, the most used are listed below: dependency:analyze: Performs dependency analysis and reports the dependencies that are: used and declared; used and undeclared; unused and declared. For our Java EE 7 application, you can see the output in Figure 25: Figure 25: Executing dependency:analyze dependency:analyze-duplicate: Performs an analysis of the <dependencies> and <dependencyManagement> tags and determines the duplicate declared dependencies: Figure 26: Executing dependency:analyze-duplicate dependency:resolve: Instructs Maven to resolve all dependencies and displays the versions. dependency:resolve-plugins: Resolves all plug-ins. dependency:tree: Displays dependency trees. To generate Javadoc (API documentation in HTML), we can use the Maven Javadoc plug-in. Among its 10+ goals, we have the one that generates the documentation for the project. This is javadoc:javadoc and you can see it at work in Figure 27: Figure 27: Executing javadoc:javadoc For the Java EE 7 project, the documentation will be located in the javaee7.quickstart/target/site/apidocs folder. Simply open the index.html file in your browser. The Maven Surefire plug-in is fully capable of running the unit tests and generating plain-text and XML reports of the tests. The only goal of this plug-in is surefire:test. Normally, you will have this plug-in already available in your pom.xml or you should add it manually, as shown below: To execute the Surefire plug-in, simply execute the command mvn test. To see the output in the console, use mvn test -Dsurefire.useFile=false. In this article, we have covered some of the main aspects of Maven. You have to know that Maven can do more than this, and it is a very important tool in any developer's arsenal. In addition to what was discussed here, you may also be interested in generating code coverage reports, creating centralized remote repositories, Java/Google/Scala/Groovy/Flex... and Maven, IDE integration, and so on. 2016-07-18 00:00 Leonard Anghel

44 Streamline Your Understanding of the Java I/O Stream The Java I/O stream library is an important part of everyday programming. The stream API is overwhelmingly rich, replete with interfaces, objects, and methods to support almost every programmer's needs. In catering to every need, the stream library has become a large collection of methods, interfaces, and classes, with a recent extension into a new package called NIO.2 (New I/O version 2). It is easy to get lost among the stream implementations, especially for a beginner. This article tries to provide some clues to streamline your understanding of the I/O stream APIs in Java. Stream literally means continuous flow, and I/O stream in Java refers to the flow of bytes between an input source and an output destination. The source or destination can be anything that contains, generates, or consumes data. For example, it may be a peripheral device, a network socket, a memory structure like an array, disk files, or other programs. After all, bytes are bytes; reading data sent from a server network stream is no different than reading a local file. The same is the case for writing data. The intriguing part of Java I/O is its unique approach, very different from how I/O is handled in C or C++. Although the data type may vary along with the I/O endpoints, the fundamental approach of the methods in output and input streams is the same throughout the Java APIs. There will always be a read method for the input stream and a write method for the output stream. After the stream object is created, we can almost ignore the intricacies involved in realizing the exact details of I/O processing. For example, we can chain filter streams to either an output stream or an input stream, and modify the data in the process of a read or write operation subsequently. The modification can be applying encryption or compression, or simply providing methods to convert data into other formats. Readers and writers, for example, can be chained to an input and output stream to realize character streams rather than byte streams. Readers and writers can handle a variety of character encodings, such as multi-byte Unicode characters (UTF-8). Thus, a lot goes on behind the scenes, even if it is seemingly a simple I/O flow from one end to another. Implementing this from scratch is by no means simple and needs to go through the rigor of extensive coding. The Java Stream APIs handle these complexities, giving developers an open space to concentrate on their productive ends rather than brainstorm on the intricacies of I/O processing. One just needs to understand the right use of the API interfaces, objects, and methods and let the API handle the intricacies on one's behalf. The classes defined in the java.io package implement Input/Output Stream, File, and Serialization. File is not exactly a stream, but stream operations are the means to achieve file handling. File actually deals with file system manipulation, such as read/write operations, manipulating file properties, disk access, permissions, subdirectory navigation, and so forth. Serialization, on the other hand, is the process of persisting Java objects onto a local or remote machine. Complete delineation is out of the scope of this article; instead, here we focus only on the I/O streaming part. The base classes for I/O streaming are the abstract classes InputStream and OutputStream; these classes are later extended to add functionality. They can be categorized intuitively as follows.
Figure 1: The Java IO Stream API Library Byte Stream classes are mainly used to handle byte-oriented I/O. They are not restricted to any particular data type, though, and can be used with objects, including binary data. The data is translated into 8-bit bytes for I/O operations. This makes byte stream classes suitable for I/O operations where a specific data type does not matter and the data can be dealt with in binary form as well. Byte Stream classes are mainly used in network I/O such as sockets, binary file operations, and so on. There are many Byte Stream classes in the library; all are extensions of an abstract class called InputStream for input streaming and OutputStream for output streaming. An example of the concrete implementation of byte stream classes is: Character Stream deals with Unicode characters rather than bytes. Sometimes the character sets used locally are different, non-Unicode sets. Character I/O automatically translates a local character set to Unicode upon an I/O operation without extensive intervention by the programmer. Using Character Stream is safe for future upgrades to support Internationalization, even though the application may use a local character set such as ASCII. The character stream classes make the transformation possible with very little recoding. Character stream classes are derived from the abstract classes called Reader and Writer. For example, the character stream readers that handle the translation of characters to bytes and vice versa are: Sometimes, the data needs to be buffered in between I/O operations. For example, an I/O operation may trigger a slow operation like a disk access or some network activity. These expensive operations can bring down the overall performance of the application. As a result, to reduce this overhead, the Java platform implements buffered (buffer = memory area) I/O streams. On invocation of an input operation, the data first is read from the buffer. If no data is found, a native API is called to fetch the content from an I/O device. Calling a native API is expensive, but if the data is found in the buffer, it is quick and efficient. Buffered streams are particularly suitable for I/O access dealing with huge chunks of data. Data streams are particularly suitable for reading and writing primitive data to and from streams. The primitive data type values can be a String or int, long, float, double, byte, short, boolean, and char. The direct implementation classes for Data I/O streams are DataInputStream and DataOutputStream, which implement the DataInput and DataOutput interfaces apart from extending FilterInputStream and FilterOutputStream, respectively. As the name suggests, Object Stream deals with Java objects. That means, instead of dealing with primitive values like Data Stream does, Object Stream performs I/O operations on objects. Primitive values are atomic, whereas Java objects are composite by nature. The primary interfaces for Object Stream are ObjectInput and ObjectOutput, which are basically extensions of the DataInput and DataOutput interfaces, respectively. The implementation classes for Object Stream are as follows. Object Stream is closely associated with Serialization; the ObjectStreamConstants interface provides several static constants as stream modifiers for the purpose. Refer to the Java Documentation for specific examples of each stream type. Following is a rudimentary hierarchy of Java IO classes. Figure 2: A rudimentary hierarchy of Java IO classes Input stream classes are derived from the abstract class java.io.InputStream.
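To make the byte, buffered, and data stream categories concrete, here is a minimal, self-contained sketch (the file name and values are illustrative, not taken from the original article) that writes primitives through a chain of streams and reads them back:

    import java.io.BufferedInputStream;
    import java.io.BufferedOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class StreamDemo {
        public static void main(String[] args) throws IOException {
            // Write: DataOutputStream -> BufferedOutputStream -> FileOutputStream
            try (DataOutputStream out = new DataOutputStream(
                    new BufferedOutputStream(new FileOutputStream("data.bin")))) {
                out.writeInt(42);           // a primitive int
                out.writeUTF("hello");      // a string in modified UTF-8
            }
            // Read back through the mirror-image chain
            try (DataInputStream in = new DataInputStream(
                    new BufferedInputStream(new FileInputStream("data.bin")))) {
                System.out.println(in.readInt());   // prints 42
                System.out.println(in.readUTF());   // prints hello
            }
        }
    }

The point to note is the wrapper pattern: each stream in the chain adds one capability (buffering, primitive encoding) on top of the raw byte stream underneath.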
The basic operations of InputStream are as follows: All output stream classes are extensions of the abstract class java.io.OutputStream. It contains the following variety of operations: It may seem overwhelming at the beginning, but observe that no matter which extension classes you use, you'll end up using these methods for I/O streaming. For example, ByteArrayOutputStream is a direct extension of the OutputStream class; you will use these methods to write into an extensible array. Similarly, FileOutputStream writes onto a file, but internally it uses native code because "File" is a product of the file system, and it completely depends upon the underlying platform how it is actually maintained. For example, Windows has a different file system than Linux. Observe that both OutputStream and InputStream provide a raw implementation of methods. They do not bother about the data formats we want to use. The extension classes are more specific in this matter. It may happen that the supplied extension classes are also insufficient for our needs. In such a situation, we can create our own customized stream classes. Remember, the InputStream and OutputStream classes are abstract, so they can be extended to create a customized class and give a new meaning to the read and write operations. This is the power of polymorphism. Filter Streams, such as PushbackInputStream and the other extensions of FilterInputStream and FilterOutputStream, provide a sense of customized implementation of the stream lineage. They can be chained so that data passes along the chain from one filter stream to the next, being modified along the way. For example, an encrypted, compressed network stream can be chained to a BufferedInputStream, then through a CipherInputStream and a GZIPInputStream, and then to an InputStreamReader to ultimately realize the actual data. Refer to the Java API documentation for specific details on the classes and methods discussed above. The underlying principles of the stream classes are undoubtedly complex. But the interface surfaced through the Java API is simple enough to let you ignore the underlying details. Focus on these four classes: InputStream, OutputStream, Reader, and Writer. This will help you to get a grip on the APIs initially and then use a top-down approach to learn their extensions. I suppose this is the key to streamline your understanding of the Java I/O stream. Happy learning! 2016-07-18 00:00 Manoj Debnath

45 Using the Executor Framework to Deal with Java Threads Threads provide a multitasking ability to a process (process = program in execution). A program can have multiple threads; each of them provides a unit of control as one of its strands. Single-threaded programs execute in a monotonous, predictable manner. But a multi-threaded program brings out the essence of concurrency, or simultaneous execution of program instructions, where subsets of code execute, or are supposed to execute, in parallel. This mechanism leverages performance, especially because modern processing workhorses are multi-core. So, running a single-threaded process that may utilize only one CPU core is simply a waste of resources. Java's core API includes a framework called the Executor Framework, which provides some relief to the programmer when working in a multi-threaded arena. This article mainly focuses on the framework and its uses, with a little background idea to begin with. Parallel execution requires some hardware assistance, and a threaded program that brings out the essence of parallel processing is no exception. Multi-threaded programs can best utilize the multiple CPU cores found in modern machines, resulting in a manifold performance boost. But the problem is that maximum utilization of multiple cores requires a program's code to be written with parallel logic from the ground up. Practically, this is easier said than done. In dealing with simultaneous operations where everything is seemingly multiple, problems and challenges are also multi-faceted. Some logic is meant to be parallel whereas some is very linear. The biggest problem is to balance between them yet keep up maximal utilization of processing resources. Inherently parallel logic is pretty straightforward to implement, but converting semi-linear logic into optimal parallel code can be a daunting task. For example, the solution of 2 + 2 = 4 is quite linear, but the logic to solve an expression such as (2 x 4) + (5 / 2) can be leveraged with a parallel implementation, because the two sub-expressions can be evaluated concurrently before the final addition. Parallel computing and concurrency, though closely related, are yet distinct. This article uses both words to mean the same thing to keep it simple. Refer to https://en.wikipedia.org/wiki/Parallel_computing to get a more elaborate idea on this. There are many aspects to be considered before modeling a program for multi-threaded implementation. Some basic questions to ask while modeling one are: When creating a task (task = individual unit of work), what we normally do is either implement an interface called Runnable or extend the Thread class, create the task, and then execute it (a reconstruction of this flow appears in the sketch below). To get feedback from an individual task, we have to write additional code. But the point is that there are too many intricacies involved in managing thread execution; the creation and destruction of a thread, for example, has a direct bearing on the overall time required to start another task. If it is not performed gracefully, unnecessary delay in the start of a task is certain. A thread consumes resources, so multiple threads may consume multiple resources. This has a propensity to degrade overall CPU performance; worse, it can crash the system if the number of threads exceeds the permitted limit of the underlying platform. It also may happen that some thread consumes most of the resources, leaving other threads starved, or that threads run into a typical race condition. So, the complexity involved in managing thread execution is easily intelligible.
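The code samples referenced in the paragraph above did not survive formatting; the following is a minimal reconstruction of the described flow (class and task names are illustrative). It creates and runs a task the traditional way and then, anticipating the next section, submits the same task to an executor:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class TaskDemo {
        // A task: an individual unit of work implementing Runnable
        static class MyTask implements Runnable {
            public void run() {
                System.out.println("Running in " + Thread.currentThread().getName());
            }
        }

        public static void main(String[] args) {
            // The traditional way: explicitly create and start a thread per task
            Thread thread = new Thread(new MyTask());
            thread.start();

            // With the Executor Framework (discussed next): submit the task
            // to a pool and let the framework manage the thread's lifecycle
            ExecutorService executor = Executors.newFixedThreadPool(2);
            executor.execute(new MyTask());
            executor.shutdown();   // initiate an orderly shutdown
        }
    }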
The Executor Framework attempts to address this problem and bring some controlling attributes. The predominant aspect of this framework is a clear demarcation between task submission and task execution. The executor says, create your task and submit it to me; I'll take care of the rest (execution details). The mechanics of this demarcation are embodied in the interface called Executor under the java.util.concurrent package. Rather than creating threads explicitly, the code can be written as shown in the executor portion of the sketch above. Calling the execute method does not ensure that thread execution is initiated; instead, it merely refers to a submission of a task. The executor takes up the responsibility on our behalf, including the details about the policies to adhere to in the course of execution. The class library supplied by the executor framework determines the policy, which, however, is configurable. There are many static methods available with the Executors class (note that Executor is an interface and Executors is a class; both are included in the package java.util.concurrent). A few of the commonly used ones are newFixedThreadPool, newCachedThreadPool, newSingleThreadExecutor, and newScheduledThreadPool. All of these methods return an ExecutorService object. The ExecutorService interface extends Executor and provides the necessary methods to manage the execution of threads, such as the shutdown() method to initiate an orderly shutdown of threads. There is another interface, called ScheduledExecutorService, which extends ExecutorService to support scheduling of threads. Refer to the Java Documentation for more details on these methods and other service details. Note that the use of an executor is highly customizable, and one can be written from scratch. The simple program sketched above illustrates the use of an executor. The Executor Framework is one of many aids provided by the Java core APIs, especially in dealing with concurrent execution or creating a multi-threaded application. Some other assisting libraries useful for concurrent programming are explicit locks, synchronizers, atomic variables, and the fork/join framework, apart from the executor framework. The separation of task submission from task execution is the greatest advantage of this framework. Developers can leverage this to reduce many of the complexities involved in executing multiple threads. 2016-07-18 00:00 Manoj Debnath

46 What Is Full Stack Development? What is a full stack engineer when you boil it down to its essence? If you're someone already in the community, visualize the vast number of people who bandy this term around. Truthfully, "what is a full stack engineer" is a very difficult question to answer because it's a term that has been used for years by engineers of varying skill levels to mean different things. As new trends are introduced, the term becomes overloaded as "full stack" adopts new responsibilities. I do believe there's a chance to save this term and agree on what it means, and that the meaning of the term matures as the engineer does, so I will break this down into two overarching sections. First, I will explore what novices mean when they use the term. In the second half, I will explore how the term has evolved to incorporate deployment. These explanations build off each other, as you will soon see. In my experience, when I hear novices use the term "full stack engineer," what I immediately think is "Oh, they must know an N-tier application setup." For those unfamiliar, multitier (aka n-tier) is simply an architecture that divides up a Web application into layers or tiers. The most common division is the presentation tier, the middle or logic tier, and the data tier. Sometimes, there are more divisions between these layers, but in general, other setups are variations of this simple theme. Let's briefly dig into what each tier denotes. The data layer, or persistence tier, is where you write an application's runtime data onto the system's hard drive. The most common technologies used are relational databases, document storage (noSQL), and finally even a simple file to store data. Common relational database management systems include Oracle, MSSQL, MySQL, and PostgreSQL. Common noSQL technologies are MongoDB, Redis, or Cassandra. Common file formats range from simple text files with no formatting to CSV files. The ultimate point of this layer is to extract data from the volatile runtime of your Web system's various processes and store it in such a way that the next time you start or restart your processes, it is still accessible. Some divisions end the data layer at the system used to manage the data; others also include the mechanism for access. The middle tier is the layer that sits between the presentation and the data layer. Its responsibility is to safely retrieve and place data requested or supplied by the consuming client and pass it along to the data layer. Naturally, this often involves more complicated tasks like authorization, authentication, API endpoint design, creating logic to enforce business rules, and so forth. There are many flavors of technology choices, so finding the language that's right for you should be simple. Personally, I enjoy Scala and NodeJS, but you may find that Java, Python, Ruby, PHP, or countless others may take your fancy. Finally, we have the presentation layer. In a Web application, this means the HTML, JavaScript, and CSS that users interact with. Other application types are desktop applications, mobile applications, and now even voice interfaces. However, sticking strictly with Web applications, this is where you'll find the Web page that the client works with. Ultimately, the client-side code allows users to view data and modify it as they wish. There are numerous Web frameworks to quickly build applications. For example, a very popular Node framework is MEAN.
Breaking it down, MEAN stands for MongoDB, Express, AngularJS, and Node. MongoDB is your persistence mechanism, Node and Express serve as the middle tier, and AngularJS is a single-page application framework that serves as the presentation layer. For the Python stack, there are a number of frameworks. Django is a highly opinionated framework that gives you the complete toolbox right from the start. However, if you prefer to build your own framework, you may decide that using Flask will suit your needs more. The Ruby stack is Ruby on Rails, which is a great framework that is used by startups or to quickly prototype applications. Finally, for PHP you may choose a Ruby on Rails-like framework like Laravel, or go for a more traditional LAMP stack. LAMP stands for Linux, Apache, MySQL, and PHP. LAMP is interesting because it touches on the operating system, Linux, which is crucial for my next section. Now that I've described the building blocks of full stack development, it's time to overload the term in the way that is most common as you become more experienced: deployment and infrastructure. There's been a relatively new trend in Web development that was introduced to fill a vacuum between site operations (managing the production Web site) and development: devops. The goal of devops is to focus on deploying applications, assisting in monitoring health, and in general making development environments easy to work with. I'm glossing over many details, but to fully appreciate full stack development, one must be able to pick out responsibilities from devops. Deploying your application could mean you literally SSH into a server and then install all dependencies (your persistence technology choice, your server-side technology choice, and so on). Or, it could mean you choose an "infrastructure as code" solution such as Chef, Puppet, Ansible, or Salt. Going a step further, you may even want to investigate containerizing your application by using Docker, or creating images by using Packer. Although you may not be personally involved in the implementation details of this, knowing this piece is crucial for a full stack engineer because they will be expected to deploy their complicated applications once they've been created. So, as you can see, being a full stack engineer means acquiring a vast knowledge of many pieces of your Web application. I've shown that experience modifies your understanding of the term. Deciding to become a full stack engineer requires looking at the puzzle from further away than specialists do, but once you figure out how everything fits together, it's quite a thrill! 2016-07-18 00:00 Andrew Allbright

47 Using Angular Typeahead A program's requirement may be to allow the user to key in some characters and, based on the keyed-in value, display the matching results. This is where we use typeaheads to display results based on the text entered in a text box. In this article, we will populate the typeahead dynamically from a Web service, and the data displayed will be in a tabular format with headers. This is all implemented using AngularJS. This article assumes that a Web service already exists and that it returns the data in a JSON format. For this example, we will use an API that returns the country name and the capital based on the characters keyed in by the user. To implement typeahead, we use the Angular directive "typeahead." The syntax is as follows: Script 1 There are two main components in the preceding syntax: Uib-typeahead again has two parts: Method getCountry($viewValue) retrieves the data from the Web API. The definition of the function is as shown next: Script 2 $viewValue is a default parameter provided in Angular that contains the text that the user keys in. The 'getCountry' function takes the keyed-in value as a parameter and calls the REST API. The response returned from the Web API is the function's return value. Because the REST API call is asynchronous, there are two return statements: one when the Web API is invoked — $http.get — and one within the success callback of the Web API method. This ensures that the data is passed on to the "typeahead-popup-template-url" for display. Method countryTypeAheadLabel formats the data for the country selected. Because, in this example, country is an object, it cannot be directly bound. If multiple properties are to be displayed for the selected data, this function is used for formatting the data. Script 3 The countryTypeAheadLabel function verifies whether any country is selected. If selected, it concatenates and returns the country name and the capital. Next is the "typeahead-popup-template-url". This is an ng-template that displays the result in a tabular format. Script 4 A template is defined in a script tag with type 'text/ng-template'. Lines 8-17 define the header columns for the tabular display data. Lines 18-27 display the data. The content is stored in a collection named 'matches' and the syntax to access the object properties is "match.model.". In the preceding example, we are displaying the country name and the capital. The output looks as shown in Figure 1 when the user is keying in the text. Figure 1: The output from keying in the text Once the user selects one of the countries, it is displayed as shown in Figure 2: Figure 2: After a country has been selected In this example, we are displaying only two columns: the country and the capital. It can be extended to display multiple columns by modifying the template and styling it with Bootstrap classes. In this example, the data was retrieved asynchronously and bound to the typeahead. The Web API is written such that it can take the keyed-in value as an input parameter and return the data; in other words, the filtering of the data is done within the Web API. The data is then bound to the template to display the data in a tabular format. The Web service is invoked for every letter keyed in. This can potentially be a costly operation if the Web API takes time to return the data. Optimization techniques such as server-side caching can be considered to improve the performance. https://angular-ui.github.io/bootstrap/ 2016-07-18 00:00 Uma Narayanan

48 Working with Java Optional Classes The interesting part of the utility classes (in the java.util package) is that they are not absolutely necessary, yet they end up providing invaluable help in some form or manner. In fact, the utility classes are built to boost productivity. Programmers can use and reuse them as and when required. The Optional class is one such class found in the java.util package. The class can be quite useful on occasions that suit the idea behind its existence. Let's understand its utility and what impact it has on Java programming. Before going into the details of optional classes, let's first understand a common Java exception, called NullPointerException. Most of us have encountered this exception many times while programming. A NullPointerException occurs when we try to use an object reference pointing to nothing, or a null value. For example, in the following code, we have declared a primitive type variable, such as an integer, and later initialized it with a value. When the variable iVal is declared, Java automatically initializes it with 0. Later, when we assign a value of 20, the memory location referred to by iVal is overwritten with the assigned value. Now, this is fine for a simple primitive type. See what happens when the variable is a reference type, such as The variable peter here is a reference type and points to nothing, or a null value, in the first line. In the second line, with the new keyword, a valid object is created in the heap, and peter is the reference variable that points to it. Now, observe what happens if we try to access a member function of the Person class before creating the object with the new keyword, as follows. Voilà! A NullPointerException occurs. One way to eradicate this problem is to check whether the object reference is really null before accessing it. This may be fine when we are directly creating an object. But, there are situations where we do not create the object directly, such as Here, we do not create a Person object directly. Instead, we rely on the reference type sent as an argument to the function call: The function definition does not expect to receive a null value like this However, to be on the safe side, we can always check for a null value before accessing it. But, this is a cumbersome solution. Here comes the utility of the Optional class, which tries to provide a way out of this type of problem. Let's get familiar with the traits this API exhibits and what it has to offer. Optional classes were introduced into the core Java API in version 1.8. There are basically four classes under the banner of optional classes: Optional, OptionalInt, OptionalDouble, and OptionalLong, found in the java.util package. Figure 1: Contents of the java.util package All of them are declared as final; that means they cannot be extended by inheritance. Among them, Optional is a generic class. The member functions of the OptionalInt, OptionalDouble, and OptionalLong classes are the same in name and utility. The only variance is in the data types they deal with: OptionalDouble is concerned with double, OptionalInt with int, and OptionalLong with long. All optional classes provide a container for objects. The contained object may or may not be present. The problem of NullPointerException is completely evaded by being able to get a boolean true or false, according to whether a value is present in the container, with the help of the isPresent method.
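A hedged reconstruction of the scenario just described (the original listings are elided; the Person class and the iVal and peter names follow the article's narrative, but the code itself is my sketch):

    // Hypothetical Person class following the article's narrative
    class Person {
        String getName() { return "Peter"; }
    }

    public class NpeDemo {
        static int iVal;                   // fields are auto-initialized to 0 by Java

        public static void main(String[] args) {
            iVal = 20;                     // the location referred to by iVal now holds 20

            Person peter = null;           // reference type pointing to nothing (null)
            // peter.getName();            // dereferencing null here would throw NullPointerException
            peter = new Person();          // new creates a valid object in the heap
            System.out.println(peter.getName());   // safe after construction
        }
    }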
The get method returns the value if present; otherwise, NoSuchElementException is thrown. NoSuchElementException is an extension of RuntimeException, thrown to indicate that the element being requested does not exist. Among all the optional classes, Optional is the generic implementation. So, if we understand its implementation and how to use this class, understanding the other optional classes, such as OptionalDouble, OptionalInt, and OptionalLong, should not be a problem. So, let's explore this class alone. Refer to the Java API Documentation for more details. Refer here for code by Brian Goetz that compares the use of the Optional class with code that does not use it. And, refer to the article Tired of Null Pointer Exceptions… by Raoul-Gabriel Urma for more details. The whole point of the Optional class is to provide a means to indicate the absence of a value rather than using null to mean no value is present. There is no harm in using null to mean no value, but optional classes may be used to indicate an optional parameter of a class or to pass an optional value to a function. So, the ultimate utility of this class boils down to three basic principles: avoiding a null pointer; using Optional to represent an optional parameter that may or may not contain a value; and, lastly, as a design goal, attaining fluidity of code flow (a short sketch of these idioms follows this article). 2016-07-18 00:00 Manoj Debnath
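As a minimal sketch of the principles above (values and names are illustrative, not from the original, elided listings), the basic Optional idioms look like this:

    import java.util.Optional;

    public class OptionalDemo {
        public static void main(String[] args) {
            Optional<String> present = Optional.of("value");
            Optional<String> empty = Optional.empty();

            System.out.println(present.isPresent());   // true
            System.out.println(empty.isPresent());     // false

            System.out.println(present.get());         // prints "value"
            // empty.get();                            // would throw NoSuchElementException

            // ofNullable wraps a possibly-null reference; orElse supplies a fallback
            String maybeNull = null;
            System.out.println(Optional.ofNullable(maybeNull).orElse("default"));
        }
    }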

49 Introducing ASP.NET Core Dependency Injection If you developed professional Web applications using ASP.NET MVC, you are probably familiar with Dependency Injection. Dependency Injection (DI) is a technique used to develop loosely coupled software systems. ASP.NET MVC didn't include any inbuilt DI framework, and developers had to resort to an external DI framework. Luckily, ASP.NET Core 1.0 introduces a DI container that can simplify your work. This article introduces you to the DI features of ASP.NET Core 1.0 so that you can quickly use them in your applications. To understand how Dependency Injection works in ASP.NET Core 1.0, you will build a simple application. So, begin by creating a new ASP.NET Core 1.0 Web application by using the Empty project template. Figure 1: Opening the new template Then, open the Project.json file and add dependencies as shown below: (You can get Project.json from this code download.) Make sure to restore packages by right-clicking the References folder and selecting Restore Packages from the shortcut menu. Then, create a DIClasses folder under the project root folder. Add an interface named IServiceType to the DIClasses folder. A type that is to be injected is called a service type. The IServiceType interface will be implemented by the service type you create later. The IServiceType interface is shown below: The IServiceType interface contains a single method—GetGuid(). As the name suggests, an implementation of this method is supposed to return a GUID to the caller. In a realistic case, you can have any application-specific methods here. Then, add a MyServiceType class to the DIClasses folder and implement IServiceType in it. The MyServiceType class is shown below: The MyServiceType class implements the IServiceType interface. The class declares a private variable—guid—that holds a GUID. The constructor generates a new GUID using the Guid structure and assigns it to the guid private variable. The GetGuid() method simply returns the GUID to the caller. So, every object instance of MyServiceType will have its own unique GUID. This GUID will be used to understand the working of the DI framework, as you will see later. Now, open the Startup.cs file and modify it as shown below: Notice the line shown in bold letters. This is how you register a service type with the ASP.NET Core DI container. The AddScoped() method is a generic method, and you mention the interface on which the service type is based (IServiceType) and a concrete type (MyServiceType) whose object instance is to be injected. A type injected with AddScoped() has a lifetime of the current request. That means each request gets a new object of MyServiceType to work with. Let's test this by injecting MyServiceType into a controller. Proceed by adding HomeController and an Index view to the respective folders. Then, modify the HomeController as shown below: The constructor of the HomeController accepts a parameter of type IServiceType. This parameter will be injected by the DI framework for you. Remember that, for DI to work as expected, a type must be registered with the DI container (as discussed earlier). The IServiceType injected by the DI framework is stored in a private variable—obj—for later use. The Index() action calls the GetGuid() method on the MyServiceType object and stores the GUID in the ViewBag's Guid property.
The Index view simply outputs this GUID as shown below: Now, run the application and you should see something like this: Figure 2: Viewing the GUID Refresh the browser window a few times to simulate multiple requests. You will observe that a new GUID is displayed every time. This confirms the working of AddScoped() as discussed previously. There are two more methods that can be used to control the lifetime of the injected object—AddTransient() and AddSingleton(). A service registered using AddTransient() behaves such that every request for an object gets a new object instance. So, if a single HTTP request requests a service type twice, two separate object instances will be injected. A service registered using AddSingleton() behaves such that all the requests to a service are served by a single object instance. Let's test these two methods, one by one. Modify Startup.cs as shown below: In this case, you used the AddTransient() method to register the service type. Now, modify the HomeController like this: This time, the HomeController has two parameters of IServiceType. This is done just to simulate two requests to the same service type. The GUIDs returned by both the object instances are stored in the ViewBag. If you output the GUIDs on the Index view, you will see this: Figure 3: Viewing the GUIDs on the Index view As you can see, the GUIDs are different within a single HTTP request, indicating that different object instances are getting injected into the controller. If you refresh the browser window, you will get different GUIDs each time. Now, modify Startup.cs and use AddScoped() again to register the type. Run the application again. Did you notice the difference? Now, both the constructor parameters point to the same object instance, as confirmed by the GUIDs. Now, change Startup.cs to use the AddSingleton() method: Also, make corresponding changes to the HomeController (it will now have just one parameter) and the Index view. If you run the application and refresh the browser as before, you will observe that for all the requests the same GUID is displayed, confirming the singleton mode. 2016-07-18 00:00 Bipin Joshi

50 A Deeper Look: Java Thread Example The concept of a thread becomes intriguing as we dive deeper into its construction from different perspectives, beyond the gross idea of multitasking. The Java API is rich and provides many features for multitasking with threads. It is a vast and complex topic. This article is an attempt to engross the reader in some concepts that will aid in better understanding Java threads, eventually leading to better programming. A program in execution is called a process. It is an activity that contains a unique identifier called the Process ID, a set of instructions, a program counter—also called the instruction pointer—handles to resources, an address space, and many other things. A program counter keeps track of the current instruction in execution and automatically advances to the next instruction at the end of the current instruction's execution. Multitasking is the ability to execute more than one task/process at a single instance of time. It definitely helps to have multiple CPUs to execute multiple tasks all at once. But, in a single-CPU environment, multitasking is achieved with the help of context switching. Context switching is the technique whereby CPU time is shared across all running processes and processor allocation is switched in a time-bound fashion. To schedule a process for CPU allocation, a running process is interrupted to a halt and its state is saved. The process that has been waiting (its state having been saved earlier) for its CPU turn is restored to gain its processing time from the CPU. This gives the illusion that the CPU is executing multiple tasks, while in fact instructions from multiple processes are executed in a round-robin fashion. However, true multiprocessing is never fully realized, even with multiple CPUs, not because of machine limitations but because of our limited ability to harness truly simultaneous execution; parallel execution of 2 or 200 instructions does not by itself make a machine a true multiprocessor, it merely extends or limits its capability to a certain precision. There is a problem with independent execution of multiple processes. Each of them carries the load of a non-sharable copy of resources. These could easily be shared across multiple running processes, yet processes are not allowed to do so because a process usually does not share its address space with another process. If they must communicate, they can do so only via inter-process communication facilities such as sockets, pipes, and so forth. This poses several problems in process communication and resource sharing, apart from making the process what is commonly called heavyweight. Modern operating systems solved this problem by creating multiple units of execution within a process that can share and communicate across execution units. Each of these single units of execution is called a thread. Every process has at least one thread and can create multiple threads, bounded only by the operating system's limit on allowed shared resources, which usually is quite large. Unlike a process, a thread has only a couple of concerns: a program counter and a stack. A thread within a process shares all its resources, including the address space. A thread, however, can maintain a private memory area called Thread Local Storage, which is not shared even with threads originating from the same process. The illusion of multi-threading is established with the help of context switching.
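As a minimal illustration of these ideas (a sketch of my own; none of the names come from the article), the following program starts two threads that share one object in main memory while each keeps its own local variable on its private stack:

    public class ThreadDemo {
        static final StringBuilder shared = new StringBuilder();  // shared via main memory

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                int local = 0;                       // lives on each thread's private stack
                for (int i = 0; i < 3; i++) {
                    local++;
                    synchronized (shared) {          // coordinate access to the shared object
                        shared.append(Thread.currentThread().getName())
                              .append(':').append(local).append(' ');
                    }
                }
            };
            Thread t1 = new Thread(task, "t1");
            Thread t2 = new Thread(task, "t2");
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(shared);              // interleaving varies from run to run
        }
    }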
Unlike context switching between processes, a context switch between threads is less expensive because thread communication and resource sharing are easier. Programs can be split into multiple threads and executed concurrently. A modern machine with a multi-core CPU can further leverage performance with threads that may be scheduled on different processors to improve the overall performance of program execution. A thread is associated with two types of memory: main memory and working memory. Working memory is very personal to a thread and is non-sharable; main memory, on the other hand, is shared with other threads. It is through this main memory that the threads actually communicate. However, every thread also has its own stack to store local variables, like the pocket where you keep quick money to meet your immediate expenses. Because each thread has its own working memory, which includes processor cache and register values, it is up to the Java Memory Model (JMM) to maintain the accuracy of shared values that may be accessed by two or more competing threads. In multi-threading, one update operation on a shared variable in memory can leave it in an inconsistent state unless coordinated in such a way that another thread is guaranteed an accurate value even amid interleaved read/write operations on the shared variable. JMM ensures reliability with various housekeeping tasks, some of which are as follows: Atomicity guarantees that a read and write operation on any field is executed indivisibly. Now, what does that mean? According to the Java Language Specification (JLS), int, char, byte, float, short, and boolean operations are atomic, but double and long operations are not. Here's an example: Because the assignment of a 64-bit value is split internally, it involves two separate operations: one that writes the first 32 bits and a second that writes the last 32 bits. Now, what if we are running a 64-bit Java? The Java Language Specification (JLS) reference provides the following explanation: "Some implementations may find it convenient to divide a single write action on a 64-bit long or double value into two write actions on adjacent 32-bit values. For efficiency's sake, this behaviour is implementation-specific; an implementation of the Java Virtual Machine is free to perform writes to long and double values atomically or in two parts. Implementations of the Java Virtual Machine are encouraged to avoid splitting 64-bit values where possible. Programmers are encouraged to declare shared 64-bit values as volatile or synchronize their programs correctly to avoid possible complications." This specifically is a problem when multiple threads read or update a shared variable. One thread may update the first 32 bits and, before it updates the last 32 bits, another thread may pick up the intermediate value, resulting in an unreliable and inconsistent read operation. This is the problem with instructions that are not atomic. However, there is a way out for long and double variables: declare them as volatile. Volatile variables are always written into and read from main memory. They are never cached. That is the reason the declaration is as follows: Or, synchronize the getter/setter: Or, use AtomicLong from the java.util.concurrent.atomic package, as shown here: Synchronization of thread communication is another issue that can be quite messy unless handled carefully. Java, however, provides multiple ways to establish communication between threads.
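The original snippets are elided above; a hedged sketch of the three alternatives for safely sharing a 64-bit value (field and method names here are mine) might look like this:

    import java.util.concurrent.atomic.AtomicLong;

    public class SharedCounter {
        // Option 1: volatile forces atomic, main-memory reads/writes of the 64-bit value
        private volatile long volatileValue;

        // Option 2: a synchronized getter/setter pair guards a plain long
        private long guardedValue;
        public synchronized long getGuardedValue() { return guardedValue; }
        public synchronized void setGuardedValue(long v) { guardedValue = v; }

        // Option 3: AtomicLong from the java.util.concurrent.atomic package
        private final AtomicLong atomicValue = new AtomicLong();

        public void update() {
            volatileValue = 42L;            // atomic write, never cached per-thread
            setGuardedValue(42L);           // mutual exclusion via this object's monitor
            atomicValue.incrementAndGet();  // lock-free atomic read-modify-write
        }
    }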
Synchronization is one of the most basic mechanisms among them. It uses monitors to ensure that shared variable access is mutually exclusive. Any competing thread must go through lock/unlock procedures to get access. On entering a synchronized block, the values of all variables in the working memory are reloaded from main memory, and they are written back as soon as the thread leaves the block. This ensures that, once a thread is done with a variable, its value is left in main memory so that some other thread can access it soon after the first thread is done. There are two types of thread synchronization built into Java: A critical section in code is designated with reference to an object's monitor. A thread must acquire the object's monitor before executing the critical section of code. To achieve this, the synchronized keyword can be used in two ways: Either declare a method as a critical section. For example, Or, create a critical section block. For example, The JVM handles the responsibility of acquiring and releasing an object monitor's lock. The use of the synchronized keyword simply designates a block or method as critical (a reconstructed sketch of both forms follows this article). Before entering the designated block, a thread first acquires the monitor lock of the object and releases it as soon as its job is done. There is no limit on how many times a thread can acquire an object monitor's lock, but it must release it for another thread to be able to acquire the same object's monitor lock. This article tried to give a perspective on what a Java thread means in one of its many aspects, with a very rudimentary explanation omitting many details. The thread construct in Java programming is very deeply associated with the Java Memory Model, especially in how its implementation is handled by the JVM behind the scenes. Perhaps the most valuable literature for understanding the idea is the Java Language Specification and the Java Virtual Machine Specification. They are available in both HTML and PDF formats. Interested readers may go through them to get a more elaborate idea. 2016-07-18 00:00 Manoj Debnath
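A minimal sketch of the two synchronized forms named above (the elided originals are unavailable; the Account example is mine):

    public class Account {
        private long balance;
        private final Object lock = new Object();

        // Form 1: declare a whole method as a critical section;
        // the monitor acquired is this object's own monitor.
        public synchronized void deposit(long amount) {
            balance += amount;
        }

        // Form 2: a critical-section block on an explicit monitor object.
        public void withdraw(long amount) {
            synchronized (lock) {
                balance -= amount;
            }
        }

        public synchronized long getBalance() { return balance; }
    }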

51 The Key to AI Automation: Human-Machine Interfaces The 4th industrial revolution is undoubtedly that of artificial intelligence systems, and the future is definitely here, even though it doesn't look like an episode of "The Jetsons" or "The Terminator" just yet. The current generation of artificial intelligence technology is most effective in the capacity of augmenting human intelligence. This augmentation requires new thinking about the interaction between machine intelligence and the humans who work with it. Virtually every operation a bank does takes a set of data as an input; some sort of judgment is performed, and then the execution of the action is done digitally. If a customer makes an address change request, a risk judgment is made; then, the address change is done. A loan origination is a series of provided data sets, judgments, and requests for more data, then the composition of digital documents to be executed at closing. Machine learning algorithms such as AzureML or Amazon Machine Learning can take in data sets, observe the outcome scores or judgments that were made, and produce a model that can predict the outcome. These learned judgments can be based on defined outcomes, such as whether a loan performed, or on observing the judgments made by people in order to reproduce the same judgments on future data sets. Other artificial intelligence products—like those from Microsoft, Google DeepMind, or IBM—can work on more free-form problems. These types of systems are well adapted to interfacing with people and translating a chaotic world into a series of more solvable problems. The state of the current technology isn't perfect, as Microsoft recently demonstrated when its Tay AI went on a racist, genocidal rant on Twitter. Since this incident, movies like Terminator can be interpreted in a new way. What if the first AI does become self-aware and learns about humanity from reading YouTube comments? Microsoft learned its lesson with Tay and made adjustments to more closely monitor and adjust how the machine is learning, to avoid instances like this happening in the future. Machine learning models are more advanced than previous models because they can evolve over time by re-testing the model and making adjustments as data changes. The challenge this can present to a bank is if the model starts to evolve in an unintended direction. Car salesmen could learn how you underwrite automotive loans and stretch their clients' applications. Fraudsters could observe how you're detecting fraud and adjust what they are doing. With the current state of AI and Machine Learning, human supervision is absolutely necessary. Is your evolving AI bot about to go on a racist rampage? Has your adaptive machine learning algorithm adjusted to become better at underwriting the risk of normal borrowers but become more vulnerable to fraudsters? Twenty years from now, it is highly likely there will be cars on the road that don't have a steering wheel and will be able to dutifully get you to your destination more safely than any human driver could. That day isn't today… but what is available right now is Tesla's autopilot feature. Using the autopilot feature is a terrifying experience at first, especially if you start using it on non-interstate highway roads. As a seasoned driver, the idea of giving up control of the steering, acceleration, and braking is nerve-wracking.
For a driver to successfully operate this highly sophisticated computer-controlled system, the driver needs to understand what the machine knows and what it doesn't. Fortunately, the designers at Tesla understand this and provide a helpful heads-up display that shows what lines it sees on the road and what other cars around you it is aware of, and highlights the elements it is looking at to decide where to go. Sometimes, it highlights the lines on the road to show that it is following the lines; other times, it highlights the car it is following. This feedback between the human driving the car and the autopilot system is absolutely essential for the hybrid human/artificial intelligence state we are now in. The software is showing you what it is thinking and giving you clues about what it is going to do next before it does it. Tesla's autopilot builds driver confidence and makes the interaction natural by letting the human operator know what it is doing and why. Would you trust a robot to hold all of your money and just trust that it knew what it was doing? Virtually every wealth management institution has either built or licensed a robo-advisor platform. The core of these platforms typically employs the same basic strategy of rotating ETFs to manage exposure to different classes of investments for diversification purposes, along with a tax loss harvesting strategy. Some robo-advisors are opaque: they come across looking like a single account and leave you to trust that they know what they're doing while showing you how they are performing relative to their benchmarks. This would be like a self-driving car with a single green light that says, "Trust me, we're not about to dive into oncoming traffic." Even if the technology is perfect, are you going to blindly trust it without some kind of reassurance that it knows what it is doing? The better robo-advisors provide detailed interfaces that describe the trades they are performing and why they are performing those trades. Is this ETF being sold solely to re-balance domestic versus foreign equities? Is this ETF being sold with the intent to buy another ETF to employ tax loss harvesting? The explanation and visibility into what the robo-advisor is doing is key to building client confidence in the software and helping clients understand the value they are receiving from using it. Over time, clients will learn to trust the software and understand the value it brings them. With this trust, they won't need to check it as often, but anytime they need an explanation, it is there. Banks of the digital era can no longer afford to have humans be the first line of defense in identifying risks and fraud. In earlier eras, banks could have a trained professional review each transfer and make a decision as to whether or not they believed the transaction was risky. As transaction volume went up, having a human review every transaction became impossible, which led to the rise of intelligent transaction analysis systems that could identify, in real time, transactions not matching the typical behavior of the account holder. Many banks utilize statistical models to perform real-time underwriting for loans of various types. These systems utilize sophisticated models to look at previously approved loans and determine which kinds of loans will perform and which won't.
These systems typically yield a single number that describes the risk of the loan and, based on risk tolerance, approve the loan or send it to underwriting as a "soft decline" until a human underwriter can review the loan application and decide whether or not to approve it. There is an entirely new generation of artificial intelligence tools that can be used to tackle problems of this nature, including Azure Machine Learning, Amazon Machine Learning, and IBM's Watson. Similar to previous-generation statistical models, these systems typically yield some kind of number that, in the credit risk scenario, would equate to the probability of the loan becoming a non-performing loan. This is where the challenge comes in. If declined loans are sent to a human underwriter for additional review, those humans need a clear explanation as to what factors concerned the machine learning model. With an explanation, they can quickly zero in on what might require further clarification. Without an explanation, they are left reviewing every detail by hand. Machine learning models can be deployed in a continuous learning state where the model is re-trained on new data. As the model is re-trained, it can exhibit new behavior. Although this adaptability to changing conditions is a major benefit of the technology, it needs to be monitored by people who can identify emergent bad behavior. Artificial Intelligence and Machine Learning technology are able to automate virtually every operation a bank performs today. This power doesn't come free: it will require technology resources from your organization to integrate it into existing systems and replace the work that people are performing today. As this technology becomes central to your organization, it is absolutely critical that you are able to understand what the automation systems are doing and that those automation systems clearly articulate, to the humans who interface with them, how they are making their decisions. Excellent human interfaces are key to unlocking the power of the 4th industrial revolution in your company! David Talbot is the director of Architecture & Strategy at a leading digital bank. He has almost two decades of innovation across many startups and has written extensively on technology. 2016-07-18 00:00 David Talbot

52 The Value of Doing APIs Right: A Look at the SiriKit API Demoware When Siri was first introduced, people thought it was much smarter than it actually is. I heard kids giggling for hours, asking it silly questions. In effect, Siri was good for executing Web searches by voice and giving sassy answers to questions about itself. Neat trick, but not very sophisticated. After a few months, most people quit using Siri because, honestly, it just wasn't that practically useful. The Amazon Echo was widely mocked when it was introduced. Who is going to pay $200 for a speaker? It became a surprise smash hit, not because people needed another speaker but because it had an extensible API that allowed 3rd-party developers to code new capabilities for it. It quickly found multiple unserved niches, particularly in home automation. "Alexa, turn off the lights." People who own Echos almost universally say they use it every single day and find it has become an integral part of their experience at home. The core difference between these two experiences is the existence of an API. The Echo has thousands of 3rd-party developers thinking up new ideas for the platform and teaching it new skills, and Siri has Apple. A 3rd-party developer who wants to make their app work with Siri has no option other than to index their app and hope it comes up as a search result on a Siri voice search. There was a brief glimmer of hope recently when Apple introduced SiriKit. Finally, Apple was going to make it possible for 3rd-party developers to integrate their apps with Siri! Not so fast, enterprising developers… SiriKit only supports about a dozen canned interactions. They support ride booking (for example, book an Uber), person-to-person payments (send $20 to a friend on Venmo), starting and stopping a workout, and some basic CarPlay commands. Although this is some progress, this canned set of actions merely opens up a handful of possibilities for Siri. Apple is still a first-class citizen when it comes to integrating its own apps with Siri, and the 3rd-party marketplace is relegated to 3rd-class citizenship in steerage. Many of the limitations on integration with virtual assistants boil down to privacy concerns. Google Now reads all of my Gmail messages to provide me with helpful information. I don't want every app I install on my phone to start reading my email, too. As a result of these privacy concerns, the better virtual assistant APIs are currently limited to letting you register your app for action commands. Google Voice Actions, Cortana, and Amazon all allow you to define phrases that your application can execute on. This is a good start, and it allows for a reasonable level of integration with these platforms. Being able to register for context is half of the battle. The platforms with action APIs will allow you to register for a command like, "Send flowers to Mom," and activate your flower ordering app. The problem is that the app doesn't know who your Mom is even though Google does. The user's intent in this case is clearly to share your mother's name and address with the flower ordering app. To make virtual assistants truly useful for end users, these platforms need a way to integrate with 3rd-party applications that includes context without putting people's data at risk. I would propose that this could be done by allowing apps a richer method of registering not only the action commands they can respond to but also the context they need to deliver on the user's action.
For example, you could register your car insurance company's app as subscribed to topics about insurance, cars, and household budgeting. Within each of these topics, you would define moments in natural language terms; a phrase like "If the user is in a car accident" would define a broad topic area relevant to your application. If these topic areas are triggered, the virtual assistant platform could pass a pre-defined set of context information that is relevant to this experience, such as the type of car being considered for purchase. Within these topics, your application could define the more specific actions it can handle using that general context. If the air bags deploy, the insurance assistant can proactively pipe in and ask whether you'd like a claims agent to meet you or, in the case of Google Now, put a card at the top of your list with a button to summon an insurance agent. Real magic can happen if virtual assistants start allowing 3rd parties to collaborate to deliver more value to the customer. For example, in a household budgeting scenario, multiple apps could collaborate to provide more information than any one company could by itself. Your bank, credit card company, wealth advisor, insurance, cable, telephone, and so forth, all have a piece of your household's budgetary picture. The problem then arises of making all of these companies behave more in the interest of the user than of themselves. Each company is incentivized to push itself to the forefront. The insurance company wants to sell car insurance, the wealth management company wants you to put more money under its management, and the cable company wants you to expand your channel line-up. If you asked your assistant to help you understand your budget, each of these providers screaming at you to sign up for more services would hardly be helpful. As a result of the need to drive this collaboration, virtual platforms will need to evolve to allow 3rd-party applications to describe the services they can perform in a situation like this. The virtual assistant can provide the appropriate context, and the 3rd-party application can describe what it can do for that context. The virtual assistant then will need to make the decision as to which of the various 3rd-party applications has the most relevant input for the current need. To create a true virtual assistant platform that can unlock the power of the entire marketplace, 3rd-party applications need: This could potentially require more abstract reasoning than the simpler assistants like Siri can currently muster. The more advanced recognition systems like Watson would have no trouble assembling these pieces. It's past time to open up virtual assistant APIs. New entrants like Viv are going to eat the lunch of these closed platforms. Truly open APIs allow a marketplace of innovation that is broader than a dozen canned possibilities, creating amazing, surprising, and memorable experiences. 2016-07-18 00:00 David Talbot

53 What Is Jenkins? If you have never heard of Jenkins, or it is just something whose exact purpose you didn't understand, this article is for you. In the next few minutes, we will have an overview of Jenkins meant to introduce you to this comprehensive tool dedicated to automating any kind of project. Basically, Jenkins is an open source project written in Java and dedicated to sustaining continuous integration (CI) practices. The tasks that Jenkins can solve are related to project automation; more exactly, Jenkins is fully able to automate the building, testing, and integration of our projects. For example, in this article you will see how to chain GitHub->Jenkins->Payara Server to obtain a simple CI environment for a Hello World Spring-based application (don't worry, you don't need to know Spring). So, let's delve a little into the goals of Jenkins. We begin with the installation of Jenkins 2, continue with major settings/configurations, install specific plug-ins, and finish with a quick-start example of automating a Java Web application. In this article, we will assume the following: To download Jenkins, simply access the official Jenkins Web site (https://jenkins.io/) and press the button labeled Download Jenkins, as seen in Figure 1: Figure 1: Download Jenkins We go for the weekly release, which is listed on the right side. Simply expand the menu button from Figure 1 and choose the distribution compatible with your system (OS) and needs. For example, we will choose to install Jenkins under Windows 7 (64-bit), as you can see in Figure 2: Figure 2: Select a distribution compatible with the system Notice that, even though the name is 2.5.war, for Windows we will download a specific installer. After the download, you should obtain a ZIP archive named jenkins-2.5.zip. Simply un-zip this archive to a convenient location on your computer. You should see an MSI file named jenkins.msi. Double-click this file to proceed with the very simple installation steps. Basically, the installation should go pretty smoothly and should be quite intuitive; we installed Jenkins in the D:\jenkins 2.5 folder. At the end, Jenkins will be automatically configured as a Windows service and will be listed in the Services application, as in Figure 3: Figure 3: Jenkins as a Windows service Besides setting Jenkins up as a service, you will notice that the default browser is automatically started, as shown in Figure 4: Figure 4: Unlock Jenkins Well, this is the self-explanatory login page of Jenkins, so simply act accordingly to unlock Jenkins. In our case, the initialAdminPassword was 9d9f510d8ef043e98f7c574b3ea8adc0. Don't bother typing this password; simply copy and paste it. After you click the Continue button, you can see the page from Figure 5: Figure 5: Install Jenkins plug-ins Because we are using Jenkins for the first time, we prefer to go with the default set of plug-ins. Later on, we can install more plug-ins, so you don't have to worry that you didn't install a specific plug-in at this step. Notice that installing the suggested plug-ins may take a while, depending on your Internet connection (network latency), so be patient and wait for Jenkins to finish this job for you. While this job is in progress, you should see verbose monitoring that reveals the progress status, plug-in names, and the dependencies downloaded for those plug-ins. See Figure 6: Figure 6: Monitoring plug-ins installation progress You can use this time to spot some commonly used plug-ins, such as Git, Gradle, Pipeline, Ant, and so forth.
After this job is done, it is time to set up an admin user for Jenkins. You need to have at least one admin, so fill in the requested information accordingly (Figure 7): Figure 7: Create the first Jenkins admin If you press Continue as admin, Jenkins will automatically log you in with these credentials and you will see the Jenkins dashboard. If you press the Save and Finish button, you will not be logged in automatically and you will see the page from Figure 8: Figure 8: Start using Jenkins If you choose Save and Finish (or whenever you are not logged in), you will be prompted to log in via a simple form, as in Figure 9: Figure 9: Log in to Jenkins as admin After login, you should see the Jenkins dashboard, as in Figure 10: Figure 10: Jenkins dashboard So far, you have successfully downloaded, installed, and started Jenkins. Let's go further and see several useful and common configurations. To work as expected, Jenkins needs a home directory and, implicitly, some disk space. In Windows (on a 64-bit machine), by default, the Jenkins home directory (JENKINS_HOME) is the place where you have installed Jenkins. In our case, this is D:\jenkins 2.5. If you take a quick look into this folder, you will notice several sub-folders and files, such as the /jobs folder, used for storing job configurations; the /plugins folder, used for storing installed plug-ins; and the jenkins.xml file, containing some Jenkins configuration. So, in this folder, Jenkins stores plug-ins, jobs, workspaces, users, and so on. Now, let's suppose that we want to move the Jenkins home directory from D:\jenkins 2.5 to C:\JenkinsData. To accomplish this task, we need to follow several steps: By default, Jenkins will start on port 8080. In case you are using this port for another application (for example, application servers such as Payara, WildFly, and the like), you will want to manually set another port for Jenkins. This can be accomplished by following these steps: By default, Jenkins will use 256MB, as you can see in jenkins.xml. To allocate more memory, simply adjust the corresponding argument. For example, let's give it 8192MB: You also may want to adjust the perm zone or other JVM memory characteristics by adding more arguments: Please find more Jenkins parameters here. Winstone is part of Jenkins; therefore, you can take advantage of settings such as --handlerCountStartup (set the number of worker threads to spawn at startup; default, 5) or --handlerCountMax (set the maximum number of worker threads to allow; default, 300). Remember that when we installed Jenkins we chose the default set of plug-ins. Moreover, remember that we said Jenkins allows us to install more plug-ins later from the dashboard. Well, it is time to see how to deal with Jenkins plug-ins. To see what plug-ins are installed in your Jenkins instance, simply select the Manage Jenkins | Manage Plugins | Installed tab. See Figure 15: Figure 15: See the installed plug-ins Installing a new plug-in is pretty simple. Select the Manage Jenkins | Manage Plugins | Available tab. Locate the desired plug-in(s) (notice that Jenkins provides a huge list of plug-ins, so you'd better use the search filter feature), tick the desired plug-in(s), and click one of the available options listed at the bottom of the page. Jenkins will do the rest for you. See Figure 16: Figure 16: Install a new plug-in For example, later in this article we will need to instruct Jenkins to deploy the application WAR on a Payara Server.
To accomplish this, we can install a plug-in named Deploy Plugin. So, in the Available tab, we have used the filter feature and typed deploy. This brings the plug-in on screen, as in Figure 17. (If you don't use the filter, you will have to search manually through hundreds of available plug-ins, which will be time-consuming.) Therefore, simply tick it and install it without a restart: Figure 17: Install Deploy plug-in After installation, this plug-in will be listed under the Installed tab. Before defining a job for Jenkins, it is good practice to take a look at the global tool configuration (Manage Jenkins | Global Tool Configuration). Depending on what types of jobs you want to run, Jenkins needs to know where to find additional tools, such as the JDK, Git, Gradle, Ant, Maven, and so forth. Each of these tools can be installed automatically by Jenkins once you tick the Install automatically checkbox. For example, in Figure 18, you can see that Jenkins will install Maven automatically: Figure 18: Install Maven under Jenkins But, if you already have Maven installed locally, you can un-tick the Install automatically checkbox and instruct Jenkins where to find Maven locally via the MAVEN_HOME environment variable. Either way, you have to assign a name to this Maven installation. For example, type Maven as the name and keep this in mind because you will need it later. Each tool can be installed automatically, or you can simply instruct Jenkins where to find it locally via environment variables (for the JDK, JAVA_HOME; for Git, GIT_HOME; for Gradle, GRADLE_HOME; for Ant, ANT_HOME; and for Maven, MAVEN_HOME). Moreover, each tool needs a name used to identify it and refer to it later when you start defining jobs. This is useful when you have multiple installations of the same tool. In case a required variable is not available, Jenkins will report this via an error message. For example, let's say that we decided to instruct Jenkins to use the local Git distribution. But, we don't have GIT_HOME set, so here is what Jenkins will report: Figure 19: Install Git under Jenkins This means that we need to set GIT_HOME accordingly or choose the Install automatically option. Once you set GIT_HOME, the error will disappear. So, before assigning jobs to Jenkins, take your time and ensure that you have successfully completed the global tool configuration. This is a very important aspect! Because this is the first Jenkins job, we will keep it very simple. Practically, what we will do is implement a simple CI project for a Hello World Spring application. This application is available here. Don't worry if you don't know Spring; it is not mandatory! Furthermore, you have to link the repository (you can fork this repository) to your favorite IDE (for example, NetBeans, Eclipse, and so on) in such a way that you can easily push changes to GitHub. How you can accomplish this is beyond this article's goal, but if you choose NetBeans, you can find the instructions here. So, we are assuming that you have Jenkins installed/configured and the application opened in your favorite IDE and linked to GitHub. The next thing to do is to install Payara Server with its default settings and start it. By default, it should start on port 8080 with admin capabilities on port 4848. Our next goal is to obtain the following automation: every three minutes, Jenkins will take the code from GitHub and compile it, and the resulting WAR will be deployed on Payara Server.
Open Jenkins in a browser and click New Item or Create a new job, as in Figure 20: Figure 20: Create a new job in Jenkins As you will see, there are several types of jobs (projects) available. We will choose the most popular one, which is the freestyle project, and we will name it HelloSpring: Figure 21: Select a job type and name it After you press the OK button, Jenkins will open the configuration panel for this type of job. First, we will provide a simple description of the project, as in Figure 22 (this is optional): Figure 22: Describe your new job Because this is a project hosted on GitHub, we need to inform Jenkins about its location. For this, on the General tab, tick the GitHub project checkbox and provide the project URL (without the tree/master or tree/branch part): Figure 23: Set the project URL The next step consists of configuring the Git repository that contains our application in the Source Code Management tab. This means that we have to tick the Git checkbox and specify the repository URL, the credentials used for access, and the branches to build, as in Figure 24: Figure 24: Configure the Git repository Further, let's focus on the Build Triggers tab. As you can see, Jenkins provides several options for choosing the moment when the application should be built. Most probably, you will want to choose the Build when a change is pushed to GitHub option, but for this we need to have a Jenkins instance visible on the Internet. This is needed by GitHub, which will use a webhook to inform Jenkins whenever a new commit is available. You also may go for the Poll SCM option, which will periodically check for changes before triggering any build. Only when changes from the previous version are detected will the build be triggered. But, for now, we go for the Build periodically option, which will build the project periodically without checking for changes. We set this cron service to run every three minutes: Figure 25: Build project periodically The schedule can be configured based on the instructions provided by Jenkins if you press the little question mark icon listed at the right of the Schedule section. By the way, don't hesitate to use those question marks whenever they are available because they provide really useful information. To build the project, Jenkins needs to know how to do it. Our application is a simple Maven Web application, and pom.xml is in the root of the application. So, on the Build tab, select the Invoke top-level Maven targets option from the Add build step drop-box. Furthermore, instruct Jenkins about the Maven distribution (remember that we configured a Maven instance under the name Maven earlier, in the Global Tool Configuration section) and about the goals you want executed (for example, clean and package): Figure 26: Configure Maven distribution and goals So far, so good! Finally, if the application is successfully built, we want to delegate Jenkins to deploy it on Payara Server (remember that we installed the Deploy Plugin earlier, especially for this task). This is a post-build action that can be configured on the Post-Build Actions tab. From the Add post-build action drop-box, select the Deploy war/ear to a container item. Figure 27: Add a post-build action This will open a dedicated wizard where we have to configure at least the Payara Server location and the credentials for accessing it: Figure 28: Configure Payara Server for deployment Click the Save button and everything is done.
Jenkins will report the new job on the dashboard: Figure 29: The job was set Now, you can try to fire a manual build or wait for the cron to run the build every three minutes. For a manual build, simply click the project name from Figure 29, and then click Build Now, as in Figure 30: Figure 30: Running a build now Each build is listed under the Build History section and can be in one of three stages: in progress, success, or failure. You can easily identify the status of your builds: Figure 31: Build status Most probably, if the build failed, you will want to see what just happened. For this, simply click the specific build and afterwards click Console Output, as in Figure 32: Figure 32: Check build output If the build fails because Jenkins cannot write to the C:\Windows\Temp folder, all you have to do is provide write access to that folder via the Properties wizard: Figure 34: Providing access for writing to the folder If the build is successfully accomplished, the application is deployed on Payara Server and is available on the Applications tab of the admin console. From there, you can easily launch it, as in Figure 35: Figure 35: Run the application It looks like our small project works like a charm! Further, make some modifications to the application, push them to GitHub, wait for the Jenkins cron to run, and notice how the modifications are reflected in your browser after a refresh. To take the project further, you can try adding more functionality, like JIRA integration, a GitHub webhook, and the like. 2016-07-18 00:00 Leonard Anghel

54 Top 10 Reasons to Get Started with React.JS By Andrew Allbright React is a popular framework used by large enterprise ventures and small lone developers alike to create views with complicated relationships in a modular fashion. It provides just enough structure to allow for flexibility, yet enough railing to avoid common pitfalls when creating applications for the Web. In the style of a top 10 list, I will describe reasons why you should choose this framework for your next project. One of the reasons React became so popular was its video game-inspired rendering system. The basics of its system revolve around minimizing DOM interactions by batching updates, using an in-memory virtual DOM to calculate differences, and immutable state. One thing to note is that this approach ran counter to the trends of other JavaScript frameworks at the time. Angular 1, Ember, Knockout, and even jQuery were concerned with data binding to elements on the page. However, it turns out that dirty checking two-way data bindings produces exponentially more calculations as you add more elements into the mix than one-way binding does. Angular 2 has since abandoned dirty checking and two-way bindings for a more React-like approach. The short list of lifecycle methods makes this framework one of the easiest to understand. In fact, it wouldn't be unheard of to become proficient with this entire library in under a day. This can be attributed to the "always rerender" nature of each view and how it accommodates state or property changes. To emphasize this point, look at what all you need to define a simple React component... Your render function lends itself to the terser, more immutable, functional programming that has become trendy in the JavaScript community with ES2015 and ES2016. It may seem obvious today, but when React.JS was initially introduced into the JavaScript world, the idea of tightly coupling your view definition with the logic that controls it was controversial. React was released into a paradigm where client-side copies of traditional MVC frameworks, like those found on the server side, were very popular. Traditional MVC separates the HTML from controllers, whose responsibilities are to combine multiple views and marshal data into them. That literally means these "concerns" were separated into their own files. The architects of React took another approach; they say the separation of HTML from JavaScript is superficial. Indeed, your HTML and JS application code were very tightly coupled, and keeping them in separate files was more a separation of technologies than a separation of concerns. Imagine trying to change class names or id tags of HTML elements in a large jQuery application. You would have to verify that none of your DOM bindings were destroyed, suggesting a close relationship between the two. That's where JSX comes into the mix. By putting your component logic within the same file as the view it is operating on, it makes the module easier to reason about, and the best part is you can leverage vanilla JavaScript to express your view. React is a library that defines your view but gives you lifecycle "hooks" to make server-side requests. This is an advantage because it means once you understand how XHR requests are made, you can more easily swap out the library you use to make them than you could with, say, BackboneJS. These hooks are state, props, componentWillMount, and componentDidMount (if you want to wait until late in the game).
How you organize multiple different XHR interactions is largely up to you. Common patterns include the one I've just described, Flux, or Redux. Although React is curated by the developers at Facebook, it is very much a community-driven library. Viewing the GitHub issue tracker and PRs, you get a sense that the developers who deputize themselves to maintain this framework find joy in sharing code and getting into sometimes heated debate. This is an advantage for your project because you can be sure you will get code that has been vetted by passionate developers. In fact, community trends inspire the architects as much as Facebook inspires the community. Redux has all but taken over Flux as the collection of libraries for creating larger-scale applications, and it was created by someone for a conference demo. Facebook has since embraced it as one of the best options for developers to get started with. This is not a unique attribute among JavaScript frameworks, but React is one of the more popular libraries written in pure JavaScript. Plus, it's always fun to see who has been recognized when Facebook puts up its release notes. Large companies like Facebook, Netflix, and Walmart have embraced React as their library of choice for handling view-related tasks. This vote of confidence is no accident. React has a neat feature where it can detect whether or not it needs to initially render the DOM onto the page. That means if you precompiled the view in your server-side code before delivering it to the client's browser, React would be able to simply bootstrap its listeners and go from there. React provides the means to generate HTML from its syntax easily. This was intentional, to gain favor with SEO bots, which traditionally don't run JavaScript in their crawlers (or at least rank those sites worse than pregenerated ones). Compared to other frameworks, React's 43.2 KB is a good size for what you get. For comparison: Angular 2's minified size is 125 KB, Ember is 113 KB, although Knockout 3.4.0 is 21.9 KB and jQuery 3.0 is 29.8 KB. React's ecosystem is vast indeed. The way the framework has been moving is towards separating view logic from "purer" business rules. By default, you adopt this strategy. This allows you to target other platforms, such as mobile, Virtual Reality devices, TV experiences, or even to generate email. The reason you should choose React for your next project is its lifecycle methods, state, and props, which provide just enough railing to create scalable applications but not enough to stifle liberal use of different libraries. Need XHR data? Use componentWillMount. Need to make a particular component look pretty using a well-known jQuery library? Well, use componentDidMount with shouldComponentUpdate or componentDidUpdate to stop DOM manipulations or restyle the element after changes easily. The point is there is just enough railing, corresponding to natural component life cycles within the page, to make a great deal of sense to developers of any experience level, but not enough for there to be a "React" way of doing things. It is very versatile in that way. Now that you've read this list, I hope I've inspired you to find a React boilerplate repo and get started on a new project. React is fun to work with and, as I've laid out, there are so many reasons why you should choose this framework over others. 2016-07-18 00:00 Andrew Allbright

55 Tips for MongoDB WiredTiger Performance Tuning By Dharshan Rangegowda, founder of ScaleGrid.io

MongoDB 3.0 introduced the concept of pluggable storage engines. Currently, there are a number of storage engines available for Mongo: MMAPv1, WiredTiger, MongoRocks, TokuSE, and so forth. Each engine has its own strengths, and you can select the right engine based on the performance needs and characteristics of your application. Starting with MongoDB 3.2.x, WiredTiger is the default storage engine. WiredTiger is the most popular storage engine for MongoDB and marks a significant improvement over the previous default, MMAPv1, in areas such as concurrency and compression. In the rest of this article, I'll present some of the parameters you can tune to optimize the performance of WiredTiger on your server; a shell sketch of the commands discussed appears at the end of the article.

The size of the cache is the single most important knob for WiredTiger. By default, MongoDB 3.x reserves 50% (60% in 3.2) of the available memory for its data cache. Although the default works for most applications, it is worthwhile to try tuning this number to achieve the best possible performance for your application. The size of the cache should be big enough to hold the working set of your application.

Figure 1: The WiredTiger cache size

MongoDB also needs additional memory outside of this cache for aggregations, sorting, connection management, and the like, so it is important to make sure you leave MongoDB enough memory to do its work. If not, there is a chance MongoDB will be killed by the OS out-of-memory (OOM) killer.

The first step is to understand the usage of your cache with the default settings; the cache section of the serverStatus output reports the relevant statistics. The first number to look at is the percentage of the cache that is dirty. If the percentage is high, increasing your cache size might improve your performance. If your application is read heavy, you can also track the "bytes read into cache" parameter. If this parameter remains constantly high, increasing your cache size might improve your read performance. The cache size can be changed dynamically, without restarting the server, and if you would like the custom size to be persistent across reboots, you can also add the corresponding instruction to the conf file.

Figure 2: Read and write tickets

WiredTiger uses tickets to control the number of read/write operations simultaneously processed by the storage engine. The default value is 128 and works well for most cases. If the number of tickets falls to 0, all subsequent operations are queued, waiting for tickets. Long-running operations might cause the number of available tickets to decrease, reducing the concurrency of your system. For example, if your read tickets are decreasing, there is a good chance that a number of long-running unindexed operations are in flight. If you would like to find out which operations are slow, there are third-party tools available. You can tune your tickets up or down depending on the needs of your system and measure the performance impact. Ticket usage is also reported by serverStatus, and the number of read and write tickets can be changed dynamically without restarting your server. Once you've made your changes, monitor the performance of your system to ensure that the change has the desired effect.
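The following is a sketch of the commands referenced above, run from the mongo shell against a MongoDB 3.2 server; the specific values (a 2GB cache, 256 tickets) are illustrative only, not recommendations.

    // Check cache usage statistics. Look at "tracked dirty bytes in the
    // cache" and "bytes read into cache" relative to the configured size.
    db.serverStatus().wiredTiger.cache

    // Change the cache size at runtime, without restarting the server.
    db.adminCommand({
      setParameter: 1,
      wiredTigerEngineRuntimeConfig: "cache_size=2G"
    })

    // To persist the cache size across reboots, set it in mongod.conf:
    //   storage:
    //     wiredTiger:
    //       engineConfig:
    //         cacheSizeGB: 2

    // Check how many read/write tickets are in use and available.
    db.serverStatus().wiredTiger.concurrentTransactions

    // Change the number of read and write tickets at runtime.
    db.adminCommand({ setParameter: 1, wiredTigerConcurrentReadTransactions: 256 })
    db.adminCommand({ setParameter: 1, wiredTigerConcurrentWriteTransactions: 256 })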
Dharshan Rangegowda is the founder of ScaleGrid.io, where he leads products such as ScaleGrid, a MongoDB hosting and management solution to manage the lifecycle of MongoDB on public and private clouds, and Slow Query Analyzer, a solution for finding slow operations within MongoDB. He can be reached at @dharshanrg. *** This article was contributed *** 2016-07-18 00:00 www.developer

56 Serverless Architectures on AWS: Monitoring Costs By Peter Sbarski with Sam Kroonenburg for Manning Publications. This article was excerpted from the book Serverless Architectures on AWS.

CloudWatch is an AWS component for monitoring resources and services running on AWS, setting alarms based on a wide range of metrics, and viewing statistics on the performance of your resources. When you begin to build your serverless system, you are likely to use logging more than any other feature of CloudWatch. It will help you track and debug issues in Lambda functions, and it's likely that you will rely on it for some time. Its other features, however, will become important as your system matures and goes to production. You will use CloudWatch to track metrics and set alarms for unexpected events.

Receiving an unpleasant surprise in the form of a large bill at the end of the month is disappointing and stressful. CloudWatch can create billing alarms that send notifications if total charges for the month exceed a predefined threshold. This is useful not only to avoid unexpectedly large bills, but also to catch potential misconfigurations of your system. For example, it is easy to misconfigure a Lambda function and inadvertently allocate 1.5GB of RAM to it. The function might not do anything useful except wait 15 seconds for a response from a database, yet in a very heavy-duty environment performing 2 million invocations a month it would cost a little over $743.00; the same function with 128MB of RAM would cost around $56.00 per month. If you perform cost calculations up front and have a sensible billing alarm, you will quickly realize that something is wrong when billing alerts begin to come through. Billing alarms can be created from the CloudWatch console; a code sketch appears below, after the Cost Explorer discussion.

Figure 1: The preferences page allows you to manage how invoices and billing reports are received.

Figure 2: It's good practice to create multiple billing alarms to keep you informed of ongoing costs.

Services such as CloudCheckr can help to track costs, send alerts, and even suggest savings by analyzing the services and resources in use. CloudCheckr covers several different AWS services, including S3, CloudSearch, SES, SNS, and DynamoDB (figure 3). It is richer in features and easier to use than some of the standard AWS tools, and it is worth considering for its recommendations and daily notifications.

Figure 3: CloudCheckr is useful for identifying improvements to your system, but the good features are not free.

AWS also has a service called Trusted Advisor that suggests improvements to performance, fault tolerance, security, and cost optimization. Unfortunately, the free version of Trusted Advisor is limited, so if you want to explore all of the features and recommendations it has to offer, you must upgrade to a paid monthly plan or access it through an AWS enterprise account.

Cost Explorer (figure 4) is a useful, albeit high-level, reporting and analytics tool built in to AWS. You must activate it first by clicking your name (or the IAM user name) in the top right-hand corner of the AWS console, selecting Cost Explorer from the navigation pane, and then enabling it. Cost Explorer analyzes your costs for the current month and the past four months, and then creates a forecast for the next three months. Initially, you may not see any information, because it takes 24 hours for AWS to process data for the current month. Processing data for previous months may take even longer.
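Returning to the billing alarms discussed above, here is a minimal sketch of creating one programmatically with the 2016-era AWS SDK for JavaScript rather than through the console. It assumes billing alerts have already been enabled on the preferences page (figure 1); the $100 threshold, account ID, and SNS topic name are illustrative placeholders, and billing metrics are only published in us-east-1.

    var AWS = require('aws-sdk');

    // Billing metrics live in the AWS/Billing namespace in us-east-1.
    var cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

    cloudwatch.putMetricAlarm({
      AlarmName: 'monthly-charges-over-100-usd',
      Namespace: 'AWS/Billing',
      MetricName: 'EstimatedCharges',
      Dimensions: [{ Name: 'Currency', Value: 'USD' }],
      Statistic: 'Maximum',
      Period: 21600,            // estimated charges update every few hours
      EvaluationPeriods: 1,
      Threshold: 100,           // alarm once the monthly estimate passes $100
      ComparisonOperator: 'GreaterThanOrEqualToThreshold',
      AlarmActions: ['arn:aws:sns:us-east-1:123456789012:billing-alerts']
    }, function (err) {
      if (err) console.error(err);
      else console.log('Billing alarm created');
    });

Creating several of these at increasing thresholds, as figure 2 suggests, keeps you informed as costs climb rather than surprising you once.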
More information about Cost Explorer is available at http://amzn.to/1KvN0g2 .

Figure 4: The Cost Explorer tool allows you to review historical costs and estimate what future costs may be.

The Simple Monthly Calculator is a web application developed by Amazon to help model costs for many of its services. The tool allows you to select a service on the left side of the console and then enter information related to the consumption of that particular resource to get an indicative cost. Figure 5 shows a snippet of the Simple Monthly Calculator with an estimated monthly cost of $650.00; that estimate is mainly made up of costs for S3, CloudFront, and the AWS support plan. It is a complex tool, and it's not without usability issues, but it can help with estimates. You can click common customer samples on the right side of the console, or enter your own values, to see estimates. The Media Application customer sample is something that could serve as a model for 24-Hour Video.

Figure 5: The Simple Monthly Calculator is a great tool to work out the estimated costs in advance.

You can use these estimates to create billing alarms at a later stage. The cost of running a serverless architecture can often be a lot less than running traditional infrastructure. Naturally, the cost of each service you use will be different, but we can have a look at what it takes to run a serverless system with Lambda and the API Gateway.

Amazon's pricing for Lambda is based on the number of requests, the duration of execution, and the amount of memory allocated to the function. The first one million requests are free, with each subsequent million charged at $0.20. Duration is based on how long the function takes to execute, rounded up to the nearest 100ms; Amazon charges in 100ms increments while also taking into account the amount of memory reserved for the function. A function created with 1GB of memory costs $0.000001667 per 100ms of execution time, whereas a function created with 128MB of memory costs $0.000000208 per 100ms. Note that Amazon prices may differ depending on the region and are subject to change at any time. Amazon provides a perpetual free tier with 1 million free requests and 400,000 GB-seconds of compute time per month. This means a user can perform a million requests and spend the equivalent of 400,000 seconds running a function created with 1GB of memory before having to pay.

As an example, consider a scenario in which you have to run a 256MB function five million times a month, executing for two seconds each time. The request charge is 4 million billable requests (5 million minus the free million) at $0.20 per million, or $0.80. The compute charge is 5 million invocations x 2 seconds x 0.25GB = 2,500,000 GB-seconds, of which 400,000 are free, leaving 2,100,000 GB-seconds at $0.00001667 each, or $35.007. The total cost of running Lambda in this example is therefore $35.807 (a sketch of this calculation in code follows below).

The API Gateway pricing is based on the number of API calls received and the amount of data transferred out of AWS. In US East, Amazon charges $3.50 for each million API calls received and $0.09/GB for the first 10TB transferred out. Given the example above, and assuming monthly outbound data transfer of 100GB, the API Gateway charge is 5 million calls at $3.50 per million, or $17.50, plus 100GB at $0.09/GB, or $9.00, for a total of $26.50. The combined cost of Lambda and the API Gateway is $62.307 per month.

It's worthwhile to attempt to model how many requests and operations you may have to handle on an ongoing basis. If you expect 2M invocations of a Lambda function that uses only 128MB of memory and runs for a second, you will pay approximately $0.20 a month.
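The arithmetic above generalizes into a small cost model. This is a minimal sketch, not from the book; the helper names are mine, and the hard-coded prices are the 2016-era US East figures quoted in this article, which differ by region and change over time.

    // Lambda free tier and prices as quoted above.
    var LAMBDA_FREE_REQUESTS = 1e6;       // first million requests free
    var LAMBDA_FREE_GB_SECONDS = 400000;  // perpetual compute free tier
    var PRICE_PER_MILLION_REQUESTS = 0.20;
    var PRICE_PER_GB_SECOND = 0.00001667;
    var APIGW_PRICE_PER_MILLION_CALLS = 3.50;
    var APIGW_PRICE_PER_GB_OUT = 0.09;    // first 10TB transferred out

    function lambdaCost(invocations, seconds, memoryMB) {
      var billableRequests = Math.max(invocations - LAMBDA_FREE_REQUESTS, 0);
      var requestCost = (billableRequests / 1e6) * PRICE_PER_MILLION_REQUESTS;
      var gbSeconds = invocations * seconds * (memoryMB / 1024);
      var billableGbSeconds = Math.max(gbSeconds - LAMBDA_FREE_GB_SECONDS, 0);
      return requestCost + billableGbSeconds * PRICE_PER_GB_SECOND;
    }

    function apiGatewayCost(calls, gbTransferredOut) {
      return (calls / 1e6) * APIGW_PRICE_PER_MILLION_CALLS +
             gbTransferredOut * APIGW_PRICE_PER_GB_OUT;
    }

    // The examples from this and the following paragraph:
    console.log(lambdaCost(5e6, 2, 256));   // ~35.807
    console.log(apiGatewayCost(5e6, 100));  // 26.50
    console.log(lambdaCost(2e6, 1, 128));   // ~0.20 (compute stays in the free tier)
    console.log(lambdaCost(2e6, 5, 512));   // a little more than 75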
If you expect 2M invocations of a function with 512MB of RAM that runs for five seconds, you will pay a little more than $75.00. With Lambda, you have an opportunity to assess costs, plan ahead, and pay for only what you actually use. Finally, don't forget to factor in other services such as S3 or SNS, no matter how insignificant their cost may seem to be. 2016-07-18 00:00 www.developer

Total 56 articles. Created at 2016-07-18 18:00