Announcement

49 articles, 2016-07-02 18:00

1. Microsoft Paid Too Much for LinkedIn Due to Bidding War with Salesforce, Google (1.02/2) Redmond is paying $26.2 billion for LinkedIn. 2016-07-02 10:34 2KB news.softpedia.com
2. Advanced Concepts of Java Object Serialization (0.03/2) Probe serialization and its related concepts and learn to delineate some of its nooks and crannies, along with their implementation in the Java API. 2016-07-02 00:00 7KB www.developer.com
3. Understanding Gradle, the Android Build System (0.01/2) Review the basics of Gradle, the Android Build System. 2016-07-02 00:00 2KB www.developer.com
4. Using the Executor Framework to Deal with Java Threads (0.01/2) Examine the Java core framework and its uses, with a little background idea to begin with. 2016-07-02 00:00 6KB www.developer.com
5. Twilio IPO May Be Key Indicator for Other Unicorns in 2016 NEWS ANALYSIS: A good response from investors June 23 could help determine whether companies such as Dropbox, Uber and others decide to test the waters this year. 2016-07-02 13:36 4KB www.eweek.com
6. Microsoft Streamlines Visual Studio Installation Microsoft is refactoring its Visual Studio installation to be smaller, faster, more reliable and easier to manage. 2016-07-02 13:36 5KB www.eweek.com
7. Eclipse Foundation Ships Neon Release Train The Eclipse Foundation shipped its eleventh annual release train, featuring 84 projects and 69 million lines of code from nearly 800 developers. 2016-07-02 13:36 4KB www.eweek.com
8. Chan Zuckerberg Initiative Selects Andela for First Major Investment Andela, a company that pairs developers in Africa with opportunity in the U.S., has been selected as the first major investment of the Chan Zuckerberg Initiative. 2016-07-02 13:36 4KB www.eweek.com

9. Google Seeks to Spur Kids' Interest in Coding With Project Bloks A Google Research project seeks to build on years of theory and research in the area of tangible programming to interest children in programming at an early age. 2016-07-02 13:36 4KB www.eweek.com
10. N Is For Nougat AKA Android 7.0 Programming book reviews, programming tutorials, programming news, C#, Ruby, Python, C, C++, PHP, computer book reviews, computer history, programming history, joomla, theory, spreadsheets and more. 2016-07-02 08:33 3KB www.i-programmer.info
11. Windows 10 Anniversary Update Slated For Aug. 2 Microsoft plans to mark the first year of its latest OS with several new features in a Windows 10 Anniversary Update. 2016-07-02 14:05 4KB www.informationweek.com
12. Hortonworks Commits To Microsoft's Azure Cloud Hadoop distributor Hortonworks announced a deeper partnership with cloud giant Microsoft, a new consortium to create an open source genomics project for precision medicine, and new enterprise features in its Hortonworks Data Platform update at this week's Hadoop Summit. 2016-07-02 11:05 4KB www.informationweek.com
13. 10 essential free business apps, tools and services for SMBs and startups These freebies can help take the strain out of running your business. 2016-07-02 08:30 8KB www.techradar.com
14. 10 Hot Smartphones To Consider Now Although smartphone sales have been on the decline recently, there is no shortage of options. Here are 10 hot models worth a look. 2016-07-02 07:06 3KB www.informationweek.com
15. Microsoft Finally Rolls Out Double Tap to Wake for Lumia 950 and 950 XL New firmware version for Lumia 950/950 XL available via WDRT. 2016-07-02 06:37 2KB news.softpedia.com
16. PayPal Announces New Windows Phone App, Says It Won't Happen Minutes After That "Sorry about that," PayPal says in new notification. 2016-07-02 06:18 2KB news.softpedia.com

17. Microsoft Talks Ubuntu on Windows 10, Offers Video Tutorial How to enable and use Bash on Windows 10. 2016-07-02 05:58 2KB news.softpedia.com
18. Slackware 14.2 Officially Released with Linux Kernel 4.4, without systemd Some of the latest GNU/Linux technologies are included. 2016-07-02 05:47 2KB news.softpedia.com
19. Four Things Your Business Does That Seem Outdated to Programmers Attracting, hiring, and keeping good employees will be easier if you follow these practices. 2016-07-02 00:00 5KB www.developer.com
20. A Deeper Look: Java Thread Example Become more familiar with some concepts that would aid in better understanding Java threads, eventually leading to better programming. 2016-07-02 00:00 9KB www.developer.com
21. Introducing ASP.NET Core Dependency Injection Become proficient with the DI features of ASP.NET Core 1.0 so that you can quickly use them in your applications. 2016-07-02 00:00 5KB www.developer.com
22. What Is Jenkins? Leap into Jenkins, an open source project written in Java and dedicated to sustaining continuous integration practices. 2016-07-02 00:00 15KB www.developer.com
23. Cross-field Validation in JSF Study a brief overview of three approaches for achieving cross-field validation using JSF core and external libraries. 2016-07-02 00:00 7KB www.developer.com
24. Top 10 Reasons to Get Started with React.JS Study some reasons why you should choose the React.JS framework for your next project. 2016-07-02 00:00 7KB www.developer.com

25. Stream Operations Supported by the Java Streams API Take on the concept of streams from a comparative perspective; we'll illustrate some of its usage in regular Java programming. 2016-07-02 00:00 6KB www.developer.com
26. Exploring the Java String Tokenizer Gain a comprehensive understanding of the background concepts of tokenization and its implementation in Java. 2016-07-02 00:00 5KB www.developer.com
27. Streamline Your Understanding of the Java I/O Stream Learn to streamline your understanding of I/O stream APIs in Java. 2016-07-02 00:00 9KB www.developer.com
28. The Top Ten Ways to Be a Great ScrumMaster Do you lead an Agile team? Here are tips to be more productive. 2016-07-02 00:00 3KB www.developer.com
29. Serverless Architectures on AWS: Monitoring Costs Monitoring your costs is always a big concern. Become better equipped to do so. 2016-07-02 00:00 7KB www.developer.com
30. 15 Amazing Mobile Apps for Aspiring Designers Harness the power of technology to create new apps that will captivate your users. 2016-07-02 00:00 10KB www.developer.com
31. Elastic Leadership: Review the Code Team leaders should influence the team in the right direction by changing environmental forces. But getting the team leaders to do this pushing might lead to environmental forces in the first place. 2016-07-02 00:00 6KB www.developer.com
32. Tips for MongoDB WiredTiger Performance Tuning Learn about some of the parameters you can tune to optimize the performance of WiredTiger on your server. 2016-07-02 00:00 4KB www.developer.com
33. Testing Controllers in Laravel with the Service Container Learn how the Laravel controller and Service Container work together and how to leverage the container for testing purposes. 2016-07-02 00:00 6KB www.developer.com

34. MapR Spyglass Initiative Eases Big Data Management Hadoop distributor MapR has introduced new management tools into its Converged Data Platform that make it easier for enterprises to get control over these infrastructure components. 2016-07-02 14:06 3KB www.informationweek.com
35. SMEs move into cyber criminals' crosshairs The consequences of data breaches for small businesses are greater than ever, with data protection law becoming tougher just as SMEs are turning into a top target for cyber attackers. 2016-07-02 14:36 2KB www.computerweekly.com
36. Security Think Tank: Biometrics have key role in multi-factor security How can organisations move to biometric authentication of users without running the risk of exposing sensitive biometric information? 2016-07-02 14:36 1KB www.computerweekly.com
37. Enterprises: Tear Down Your Engineering Silos Will Murrell, a senior network engineer with UNICOM Systems, knows a thing or two about silos. UNICOM develops a variety of software and other tools to work with IBM's mainframe and Linux. Murrell recently talked with InformationWeek senior editor Sara Peters... 2016-07-02 14:36 1KB www.informationweek.com
38. IBM Adds New Bluemix OpenWhisk Tools for IoT Development IBM added new tools for its Bluemix OpenWhisk serverless computing platform that utilizes Docker. OpenWhisk also features user interface updates. 2016-07-02 13:36 3KB www.eweek.com
39. Eclipse Updates Four IoT Projects, Launches a New One The Eclipse Foundation announced new releases of four open-source IoT projects to accelerate IoT solution development. 2016-07-02 13:36 3KB www.eweek.com
40. Codenvy's Language Server Protocol Reduces Programmer Envy Codenvy, Red Hat and Microsoft collaborate on new language protocol for developers to integrate programming languages across code editors and IDEs. 2016-07-02 13:36 5KB www.eweek.com
41. Box Shuttle Offers Route To Cloud By offering a migration service, Box says it hopes to help enterprises with legacy baggage make a journey toward cloud computing. 2016-07-02 12:06 4KB www.informationweek.com
42. IBM Opens Blockchain-Oriented Bluemix Garage In NYC This week, IBM added a seventh garage for developers. Big Blue is opening a Bluemix Garage in New York City that will focus on financial services, including the use of blockchain technology. 2016-07-02 11:05 4KB www.informationweek.com
43. 7 Days: A week of Windows 10 wonders, block-buster movies, and a nutty name for Android From Tesla tragedy and Snapchat hopes, to Dell ditching Android, Apple's 'dumpster diving', HTC's fishy phones, and Microsoft's endless 'offers', it's our regular roundup of the week's top tech news. 2016-07-02 11:00 17KB feedproxy.google.com
44. Uber App Update To Track Driver Behavior The update is designed to capture data about how Uber's drivers operate their vehicles -- measuring braking, acceleration, and speed. 2016-07-02 10:06 4KB www.informationweek.com
45. Most organizations don't have an IT security expert With cyber attacks on the rise, organizations are facing pressure to beef up their security to avoid falling victim to such an attack. However, a recent IT security report from Spiceworks shows that 80 percent of organizations were affected by at least one security incident during 2015... 2016-07-02 07:53 2KB feeds.betanews.com
46. Symantec patches over twenty products after Google discovers zero-day flaws Google's Project Zero researcher, Tavis Ormandy, has uncovered a raft of critical vulnerabilities affecting the core engine found in Symantec and Norton branded security products. 2016-07-02 07:50 2KB feedproxy.google.com
47. Windows 10 Dominates PC Gaming, New Steam Data Shows June 2016 figures put Windows 10 in first place. 2016-07-02 07:09 2KB news.softpedia.com

48. Bioinformatics software developed to predict effect of cancer-associated mutations: Software analyzes 40,000 proteins per minute -- ScienceDaily A new piece of software has been developed that analyses mutations in proteins. These mutations are potential inducers of diseases, such as cancer. The development is a free, easy, versatile and, above all, fast bioinformatics application that is capable of analyzing and combining the... 2016-07-02 13:36 4KB feeds.sciencedaily.com
49. Internet attacks: Finding better intrusion detection -- ScienceDaily The brute force and sheer scale of current Internet attacks put a heavy strain on classic methods of intrusion detection. Moreover, these methods aren't prepared for the rapidly growing number of connected devices: scalability is a major issue. Now a researcher proposes another way of... 2016-07-02 13:36 2KB feeds.sciencedaily.com

Articles

49 articles, 2016-07-02 18:00

1 Microsoft Paid Too Much for LinkedIn Due to Bidding War with Salesforce, Google (1.02/2) While Microsoft itself hasn't commented on the price it has to pay to buy LinkedIn, it appears that the company was actually involved in a bidding war with other companies, and this is how it ended up paying so much for the business social networking service. A filing with the US Securities and Exchange Commission reveals that Microsoft wasn't the only party interested in taking over LinkedIn, as Salesforce, Google, and Facebook also submitted takeover offers as part of negotiations that took place behind closed doors. It appears that Microsoft and LinkedIn officials met for the first time in February 2016 to discuss takeover plans, but in March there were also meetings with representatives from other interested parties, believed to be Google, Facebook, and Salesforce. Microsoft and Salesforce have reportedly been the two parties continuously outbidding each other, while Google and Facebook decided to leave negotiations after losing the bidding war with the two giants. Salesforce continued to fight Microsoft in the race for LinkedIn, but Redmond eventually won after it agreed to pay $196 per share in cash only. Salesforce made an offer of $200 per share in stock and cash, but LinkedIn was looking for a cash-only deal. The first takeover offer that Microsoft made for LinkedIn was $160 per share, so the company ended up paying $196 per share because of the bidding war against Salesforce, Google, and Facebook. LinkedIn thus proved to be $5 billion more expensive than Microsoft expected, but Redmond decided to go for it anyway. Rumor has it that Microsoft is now seeking a loan in order to complete the takeover and avoid some taxes, as bringing cash from overseas into the United States would cost the Redmond-based software giant millions more in taxes. 2016-07-02 10:34 Bogdan Popa

2 Advanced Concepts of Java Object Serialization (0.03/2) Serialization literally refers to arranging something in a sequence. It is a process in Java where the state of an object is transformed into a stream of bits. The transformation maintains a sequence in accordance with the metadata supplied, such as a POJO. Perhaps it is due to this transformation from an abstraction to a raw sequence of bits that the process is, by etymology, referred to as serialization. This article takes up serialization and its related concepts and tries to delineate some of its nooks and crannies, along with their implementation in the Java API. Serialization makes any POJO persistable by converting it into a byte stream. The byte stream then can be stored in a file, in memory, or in a database. Figure 1: Converting to a byte stream Therefore, the key idea behind serialization is the concept of a byte stream. A byte stream in Java is an atomic collection of 0s and 1s in a predefined sequence. Atomic means that they are not further derivable. Raw bits are quite flexible and can be transmuted into anything: a character, a number, a Java object, and so forth. Bits individually do not mean anything unless they are produced and consumed by the definition of some meaningful abstraction. In serialization, this meaning is derived from a predefined data structure called a class, which is instantiated into an active entity called a Java object. The raw bit stream then is stored in a repository such as a file in the file system, an array of bytes in memory, or a database. At a later time, this bit stream can be restored back into its original Java object in a reverse procedure. This reverse process is called deserialization. Figure 2: Serialization The object serialization and deserialization processes are designed to work recursively. That means that when an object at the top of an inheritance hierarchy is serialized, the inheriting objects get serialized too. Reference objects are located recursively and serialized. During restoration, the reverse process is applied and the object is deserialized in a bottom-up fashion. An object to be serialized must implement the java.io.Serializable interface. This interface contains no members and is used to designate a class as serializable. As mentioned earlier, all inherited subclasses are also serialized by default. All the member variables of the designated class are persisted except the members declared as transient and static; they are not persisted. In the following example, class A implements Serializable. Class B inherits class A; as a result, B is also serializable. Class B contains a reference to class C. Class C also must implement the Serializable interface; otherwise, java.io.NotSerializableException will be thrown at runtime. In case you want to read or write a single, unshared object in a stream, use the readUnshared and writeUnshared methods instead of readObject and writeObject, respectively. Observe that any changes in the static and transient variables are not stored in the process. There are a number of problems with the serialization process. As we have seen, if a super class is declared serializable, all the subclasses also get serialized. This means that if A inherits B inherits C inherits D... all the objects would be serialized! One way to make fields of these classes non-serializable is to use the transient modifier. What if we have, say, 50 fields that we do not want to persist? We have to declare those 50 fields as transient!
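Here is a rough sketch of the scenario just described; the class names A, B, and C come from the article, while the specific fields and the small demo harness are illustrative assumptions, not the article's stripped listing:

```java
import java.io.*;

// A is serializable, so subclass B is serializable too.
class A implements Serializable {
    int baseValue = 1;
}

// C is referenced by B, so it must also implement Serializable;
// otherwise writeObject() throws java.io.NotSerializableException.
class C implements Serializable {
    String label = "referenced";
}

class B extends A {
    transient int scratch = 42;   // not persisted
    static int shared = 7;        // not persisted (belongs to the class)
    C ref = new C();              // serialized recursively
}

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        // Serialize: object graph -> byte stream -> file
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream("b.ser"))) {
            out.writeObject(new B());
        }
        // Deserialize: file -> byte stream -> object graph
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream("b.ser"))) {
            B restored = (B) in.readObject();
            System.out.println(restored.scratch);   // 0: transient reset to default
            System.out.println(restored.ref.label); // "referenced": restored recursively
        }
    }
}
```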
Similar problems can arise in the deserialization process. What if we want to deserialize only five fields rather than restore all 10 fields serialized and stored previously? There is a specific way to stop serialization in the case of inherited classes: write your own readObject and writeObject methods, as follows. It is recommended that a serializable class declare a unique variable, called serialVersionUID, to identify the data persisted. If this optional variable is not supplied, the JVM creates one by an internal logic, which is time consuming. Compile to create the class file: The output would be like what's shown in Figure 3. Figure 3: Results of the compiled class file In a nutshell, the serialization interface needs some changes to allow better control over the serialization and deserialization process. The Externalizable interface provides some improvement. But, bear in mind, the automatic implementation of the serialization process with the Serializable interface is fine in most cases. Externalizable is a complementary interface that allays many of these problems where better control over serialization/deserialization is sought. The process of serialization and deserialization is pretty straightforward, and most of the intricacies of storing and restoring an object are handled automatically. Sometimes, it may happen that the programmer needs some control over the persistence process; say, the object to be stored needs to be compressed or encrypted before storing, and similarly, decompression and decryption need to happen during the restoration process. This is where you need to implement the Externalizable interface. The Externalizable interface extends the Serializable interface and provides two member functions for the implementing classes to override. The readExternal method reads the byte stream from ObjectInput, and writeExternal writes the byte stream to ObjectOutput. ObjectInput and ObjectOutput are interfaces that extend the DataInput and DataOutput interfaces, respectively. The polymorphic read and write methods are called to serialize an object. Externalization makes the serialization and deserialization processes much more flexible and gives you better control. But there are a few points to remember when using the Externalizable interface: According to the preceding properties, any non-static inner class is not externalizable. The reason is that the JVM modifies the constructor of inner classes by adding a reference to the parent class at compilation time. As a result, the idea of having a no-argument constructor is simply inapplicable in the case of non-static inner classes. Because we can control which fields to persist and which not to with the help of the readExternal and writeExternal methods, making a field non-persistable with a transient modifier is also irrelevant.
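As a rough illustration of the Externalizable contract just described (the Account class and its fields are hypothetical, not from the article's stripped listing), note the mandatory public no-argument constructor and the explicit choice of what to persist:

```java
import java.io.*;

public class Account implements Externalizable {
    private String owner;
    private String sessionToken; // deliberately never written: with
                                 // Externalizable, the transient modifier
                                 // is irrelevant, as noted above

    // Externalizable requires a public no-arg constructor:
    // readExternal() is invoked on a freshly constructed instance.
    public Account() { }

    public Account(String owner, String sessionToken) {
        this.owner = owner;
        this.sessionToken = sessionToken;
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(owner);   // we decide exactly what gets stored...
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        owner = in.readUTF();  // ...and restore it in the same order
    }
}
```

Serializable and Externalizable are tagging interfaces that designate a class for persistence. The instances of these classes may be transformed and stored in byte stream storage. The storage may be a file on disk or a database, or the stream may even be transmitted across a network. The serialization process and Java I/O streams are inseparable. They work together to bring out the essence of object persistence. 2016-07-02 00:00 Manoj Debnath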

3 Understanding Gradle, the Android Build System (0.01/2) There are various world-class build solutions available to developers. Ant and Maven come to mind for Java developers, but, as any Android developer would know, the de facto build system for Android development with Android Studio is Gradle. Gradle is an easily customizable build system that supports building by a convention model. Gradle is written in Java, but the build language is a Groovy DSL (domain-specific language). Gradle not only supports multi-project builds, it also supports dependency management such as Ivy and Maven. Gradle can also build non-Java projects. You have a few ways to get Gradle: Gradle has a build file, build.gradle. The build file contains tasks, plugins, and dependencies. A task is code that Gradle executes. Each task has a lifecycle and properties. Tasks have 'action,' which is code that is going to execute. Task actions are broken into two parts: 'first action' and 'last action.' Tasks also have dependencies: one task can depend on another. This allows specifying an order in which tasks are executed. Gradle has the concept of build phases. There is an initialization phase that is used to configure multi-project builds. The configuration phase involves executing code in the task that is not the action. The execution phase involves actually executing the task. To declare a simple dependency, you can use Task.dependsOn. Sometimes, you might need a more explicit declaration, such as mustRunAfter, where a task can run only after another task has run. Likewise, there is also support for shouldRunAfter, where the execution order is not forced. finalizedBy is also supported. Here is a sample Gradle build file for Android that shows that jcenter is used as the repository, with a dependency on the Gradle 1.5 library. // build.gradle (project) The module Gradle build file supports configuring build settings, like compileSdkVersion, buildToolsVersion, default configuration, build types, and dependencies. // build.gradle (module)
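Both listings were stripped from this copy of the article; a minimal sketch of what they typically contain follows. The plugin version matches the 1.5 dependency mentioned above, while the application ID and the remaining version numbers are illustrative assumptions:

```groovy
// build.gradle (project): declares where to find, and which version of,
// the Android Gradle plugin used to build the modules.
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:1.5.0'
    }
}

// build.gradle (module): configures how this particular module is built.
apply plugin: 'com.android.application'

android {
    compileSdkVersion 23
    buildToolsVersion "23.0.2"

    defaultConfig {
        applicationId "com.example.myapp"  // hypothetical application ID
        minSdkVersion 16
        targetSdkVersion 23
    }

    buildTypes {
        release {
            minifyEnabled false
        }
    }
}
```

In this article, you learned about Gradle, a popular build system that also is used for Android development. I hope you have found this information useful. Vipul Patel is a technology geek based in Seattle. He can be reached at [email protected]. You can visit his LinkedIn profile at https://www.linkedin.com/pub/vipul-patel/6/675/508 . 2016-07-02 00:00 Vipul Patel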

4 Using the Executor Framework to Deal with Java Threads (0.01/2) Threads provide a multitasking ability to a process (process = program in execution). A program can have multiple threads; each of them provides a unit of control as one of its strands. Single-threaded programs execute in a monotonous, predictable manner. A multi-threaded program, however, brings out the essence of concurrency, or the simultaneous execution of program instructions, where subsets of code execute, or are supposed to execute, in parallel. This mechanism boosts performance, especially because modern processing workhorses are multi-core. So, running a single-threaded process that may utilize only one CPU core is simply a waste of resources. The Java core APIs include the Executor Framework, which provides some relief to the programmer working in a multi-threaded arena. This article mainly focuses on the framework and its uses, with a little background idea to begin with. Parallel execution requires some hardware assistance, and a threaded program that brings out the essence of parallel processing is no exception. Multi-threaded programs can best utilize the multiple CPU cores found in modern machines, resulting in a manifold performance boost. But the problem is that maximum utilization of multiple cores requires a program's code to be written with parallel logic from the ground up. Practically, this is easier said than done. In dealing with simultaneous operations where everything is seemingly multiple, problems and challenges are also multi-faceted. Some logic is meant to be parallel, whereas some is very linear. The biggest problem is to balance the two yet keep up maximal utilization of processing resources. Inherently parallel logic is pretty straightforward to implement, but converting semi-linear logic into optimal parallel code can be a daunting task. For example, the solution of 2 + 2 = 4 is quite linear, but the logic to solve an expression such as (2 x 4) + (5 / 2) can be leveraged with a parallel implementation. Parallel computing and concurrency, though closely related, are distinct; this article uses both words to mean the same thing to keep it simple. Refer to https://en.wikipedia.org/wiki/Parallel_computing to get a more elaborate idea of this. There are many aspects to be considered before modeling a program for multi-threaded implementation. Some questions to ask while modeling one are: When creating a task (task = individual unit of work), what we normally do is either implement an interface called Runnable or extend the Thread class: And, create the task as follows: And then execute each task as follows: To get feedback from an individual task, we have to write additional code. But the point is that there are too many intricacies involved in managing thread execution; the creation and destruction of a thread, for example, has a direct bearing on the overall time required to start another task. If it is not performed gracefully, unnecessary delay in the start of a task is certain. A thread consumes resources, so multiple threads may consume multiple resources. This has a propensity to slow overall CPU performance; worse, it can crash the system if the number of threads exceeds the permitted limit of the underlying platform. It also may happen that some thread consumes most of the resources, leaving other threads starved, or that a typical race condition arises. So, the complexity involved in managing thread execution is easily intelligible.
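The three fragments referenced above did not survive in this copy of the article; a minimal sketch of that manual approach, with a hypothetical task class, might look like this:

```java
// The task: an individual unit of work, defined by implementing Runnable.
class DownloadTask implements Runnable {
    private final String name;

    DownloadTask(String name) { this.name = name; }

    @Override
    public void run() {
        System.out.println(name + " running on " + Thread.currentThread().getName());
    }
}

public class ManualThreads {
    public static void main(String[] args) {
        // Create the tasks...
        Runnable task1 = new DownloadTask("task-1");
        Runnable task2 = new DownloadTask("task-2");
        // ...and execute each one by explicitly creating and starting a Thread.
        // Creation, scheduling, and destruction are all left to us.
        new Thread(task1).start();
        new Thread(task2).start();
    }
}
```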
The Executor Framework attempts to address this problem and bring some controlling attributes. The predominant aspect of this framework is a clear demarcation between task submission and task execution. The executor says: create your task and submit it to me; I'll take care of the rest (the execution details). The mechanics of this demarcation are embodied in the interface called Executor under the java.util.concurrent package. Rather than creating threads explicitly, the code above can be written as: and then Calling the execute method does not ensure that thread execution is initiated; instead, it merely refers to the submission of a task. The executor takes up the responsibility on your behalf, including the details of the policies to adhere to in the course of execution. The class library supplied by the executor framework determines the policy, which, however, is configurable. There are many static methods available in the Executors class (note that Executor is an interface and Executors is a class; both are included in the package java.util.concurrent). A few of the commonly used ones are as follows: All of these methods return an ExecutorService object. The ExecutorService interface extends Executor and provides the methods necessary to manage the execution of threads, such as the shutdown() method to initiate an orderly shutdown of threads. There is another interface, called ScheduledExecutorService, which extends ExecutorService to support the scheduling of threads. Refer to the Java documentation for more details on these methods and other service details. Note that the use of an executor is highly customizable, and one can be written from scratch. Let's create a very simple program to understand the use of an executor.
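The program itself was stripped from this copy; a minimal sketch in the same spirit (the pool size and the toy tasks are illustrative assumptions) could look like this:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorDemo {
    public static void main(String[] args) throws Exception {
        // Task submission is separated from task execution: the pool
        // decides when and on which thread each submitted task runs.
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // submit() returns a Future, giving feedback without extra plumbing.
        Future<Integer> result = executor.submit(() -> (2 * 4) + (5 / 2));

        executor.execute(() ->
            System.out.println("Running on " + Thread.currentThread().getName()));

        System.out.println("Result: " + result.get()); // blocks until done

        // Initiate an orderly shutdown: no new tasks, queued ones complete.
        executor.shutdown();
    }
}
```

The Executor Framework is one of many aids provided by the Java core APIs, especially for dealing with concurrent execution or creating a multi-threaded application. Other assisting libraries useful for concurrent programming are explicit locks, synchronizers, atomic variables, and the fork/join framework. The separation of task submission from task execution is the greatest advantage of this framework. Developers can leverage it to reduce many of the complexities involved in executing multiple threads. 2016-07-02 00:00 Manoj Debnath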

5 Twilio IPO May Be Key Indicator for Other Unicorns in 2016 NEWS ANALYSIS: A good response from investors June 23 could help determine whether companies such as Dropbox, Uber and others decide to test the waters this year. Information technology IPOs have been AWOL halfway through 2016. This has had analysts, investors and market watchers scratching their heads and wondering what the heck is going on. There certainly are plenty of quality companies—Uber, Airbnb and Dropbox, for merely three examples—rising up that could consider an initial public offering, no question about that. And there still is a ton of money being invested in new and relatively new companies every week. We at eWEEK who report on such venture capital movements know all about this. So why are IPOs not happening? There are two reasons: First, the markets currently are generally perceived to be too volatile (or hostile, as some people would put it)—especially as hair-trigger automatic trading on projections has become the norm. Second, larger companies are swallowing up smaller ones at such a breakneck pace that they don't have time to consider going public. Last year, $20 billion worth of tech companies went private, according to Bulger Partners, a mergers and acquisitions advisory firm. On the other side, tech IPOs raised a mere $21 billion. Bulger Partners reported a whopping $232 billion worth of M&A transaction value for 2015 alone. IPOs a Risky Proposition IPOs have to be successful on Day 1; that's a fact. If they are not, the walls often can come crashing in very quickly, and fledgling startups need to have nerves of vanadium to weather such a potential crisis. But let's put all of that aside for now, because there has been a development. Twilio Inc., a small but highly regarded startup whose cloud service enables developers to build and operate real-time communications within software applications, is making a breakthrough of sorts: It is going public June 23 at $15 per share. Twilio allows software developers to programmatically make and receive phone calls and send and receive text messages using its Web-service APIs. Twilio's services, which go a long way toward keeping bugs out of software—and are especially valuable in rapid-iteration environments—are accessed over HTTP and billed based on usage. As of last year, more than 560,000 software developers were using Twilio in their daily production work. The San Francisco-based company raised more than it expected—about $150 million, or about $11 per share—in its initial private offering June 22. That's a good sign for the dozens of other so-called unicorns that have been valued at more than $1 billion through private fundraising. Twilio Will Start Trading on the NYSE at $15 Twilio said June 22 that it will start trading June 23 on the NYSE at $15 a share, above the $12-to-$14 range the company had previously indicated. The June 22 investors at $11 no doubt are pleased with that declaration. The deal, which will be the first Silicon Valley tech IPO of the year, is a closely watched test case to determine whether the market will be receptive to future tech IPOs this year. A good response June 23 could help determine whether companies such as Dropbox, Uber and others decide to test the IPO waters themselves later this year. The offering of the San Francisco-based company comes as U.S.-listed IPOs are on track for their worst year in terms of numbers since the financial crisis year of 2008. 2016-07-02 13:36 Chris Preimesberger

6 Microsoft Streamlines Visual Studio Installation Microsoft is refactoring its Visual Studio installation to be smaller, faster, more reliable and easier to manage. As Microsoft moves to become all things to all developers, the company has undergone some growing pains in terms of making that happen via its core toolset, Visual Studio. The move to take its .NET platform cross-platform and to support all different kinds of development from the Visual Studio toolset has bloated the size of the tools. And now Microsoft is moving to provide developers with a streamlined acquisition experience for Visual Studio, based on the type of development they are involved in. At its Build 2016 conference, Microsoft delivered the first preview of the next version of Visual Studio and gave an early look at a lightweight acquisition experience with Visual Studio. "The challenges we are seeing with our customers is that as we pivoted to support any developers building for any applications on any platform, the application model matrix is really exploding," Julia Liuson, Microsoft's corporate vice president of Visual Studio, told eWEEK. If you just think about the mobile space alone, there's the Android software development kit (SDK), the Cordova tools, the different emulators and more that a developer can use, she said. The overall collection of tools, SDKs and emulators is a very large set. Combining that large tool set with customers who have a habit of simply checking the "Select All" box when installing a product can lead to some disgruntled customers. Indeed, according to Liuson, with customers who download the entire product on their machine, Microsoft frequently gets feedback about the size of the download and questions about why Visual Studio is now 40 gigabytes. That's one of the problems the company is tackling: how to provide customers with a far more optimized experience for the particular workload they are working on. For instance, if developers just want to do Python programming, they don't really need all of the Visual Studio mobile tools or the cloud tools. If they're doing other kinds of development, they don't necessarily need all of the cloud and server development offerings. "We're working on more workload-oriented acquisition experiences for our customers," Liuson said. "So when the product comes down to their machine, it's easily updateable and they can get the pieces they need easily. And what they decide not to use they can get rid of easily." This is a key experience Microsoft is working on for the next release of Visual Studio, code-named Visual Studio 15. "We're hoping that with most of the users, the amount of stuff that they install to get started should be a lot smaller than what they do today," Liuson said. In a post on the Visual Studio Blog, Tim Sneath, principal lead program manager for the Visual Studio Platform at Microsoft, said that based on feedback Microsoft got from developers at Build and from other research, the company has come up with a list of 17 workloads for developers to choose from in the next version of Visual Studio. Those workloads are:

1. Universal Windows Platform development
2. Web development (including ASP.NET, TypeScript and Azure tooling)
3. Windows desktop app development with C++
4. Cross-platform mobile development with .NET (including Xamarin)
5. .NET desktop application development
6. Linux and Internet of things development with C++
7. Cross-platform mobile development with Cordova
8. Mobile app development with C++ (including Android and iOS)
9. Office/SharePoint add-in development
10. Python Web development (including Django and Flask support)
11. Data science and analytical applications (including R, F# and Python)
12. Node.js development
13. Cross-platform game development (including Unity)
14. Native Windows game development (including DirectX)
15. Data storage and processing (including SQL, Hadoop and Azure ML)
16. Azure cloud services development and management
17. Visual Studio extension development

"You can select one or more of these when setting up Visual Studio, and we'll pull down and install just the relevant Visual Studio components that you need to be productive in that stack," Sneath said. Liuson noted that Microsoft is very sensitive to the fact that because it is making such a major change to a core part of its product experience, there will be a lot of feedback. The company wants to hear customers' perspectives and address any concerns people might have. "Even though this is not a new product feature, it's such an important way for people to access all the features that we do offer," she said. "So this is actually a pretty important infrastructure change that the engineering team is working through. And it's a fairly big and disruptive change from an engineering angle." Sneath's post goes on to inform developers on how they can install Visual Studio faster and leaner. He also provides details on how the new installer will work. 2016-07-02 13:36 Darryl K

7 Eclipse Foundation Ships Neon Release Train The Eclipse Foundation shipped its eleventh annual release train, featuring 84 projects and 69 million lines of code from nearly 800 developers. The Eclipse Foundation on June 22 announced the availability of its Neon release, the eleventh annual coordinated release train of open-source projects from the Eclipse community. The Neon release includes 84 Eclipse projects consisting of more than 69 million lines of code, with contributions by 779 developers, 331 of whom are Eclipse committers. Last year's release train, the Mars release, had 79 projects. "It takes a great amount of coordination and effort by many developers within our community to ship a release that is on-time," said Mike Milinkovich, executive director of the Eclipse Foundation, in a statement. Ian Skerrett, vice president of marketing at the Eclipse Foundation, said one of the key focus areas for the Neon release was improving Eclipse's JavaScript development tooling. The foundation upgraded the JavaScript integrated development environment (IDE) in the Eclipse platform, known as JavaScript Development Tools, or JSDT. "There's been a lot of work on improving the usability and performance of our JavaScript tooling, including support for the latest version of JavaScript," Skerrett said. "That team did a lot of work on the whole JavaScript tool chain and we have integration with JavaScript build systems like Grunt and Gulp that JavaScript developers use. We have integration with the Chromium V8 debugger so you can have a tight compile and debug cycle. We also improved our support for Node.js development to make it easier to build and debug Node.js applications." In addition, Eclipse JSDT 2.0 includes new tools for JavaScript developers, including a JSON editor, along with support for Grunt/Gulp and a new Chromium V8 debugger. The Neon release also features an updated PHP Development Tools package (PDT). The new Eclipse PDT 4.0 release for PHP developers provides support for PHP 7 and improved performance. Another key area of focus was improving the lot of Java developers on the Eclipse platform, Skerrett said. In the core Eclipse platform and the Java Development Tools (JDT) project, the foundation added HiDPI support for advanced monitors with high-end graphics; that support is on Mac, Windows and Linux. There are also updates to JDT, such as auto-save, which automatically saves work as developers type into the IDE. And there are improvements to JDT's Content Assist: it now highlights matched characters in search fields and provides substring completion. Other improvements and additions include updates to Automated Error Reporting. The Eclipse Automated Error Reporting client can now be integrated into any third-party Eclipse plug-in or stand-alone Rich Client Platform (RCP) application. The Neon release also features improved support for Docker tooling and introduces the Eclipse User Storage Service (USS). The Eclipse USS is a new storage service that enables projects to store and retrieve user data and preferences from Eclipse servers, creating a better user experience (UX) for developers.
"Neon noticeably returns focus to essential coding improvements, like editor auto-save, HiDPI support, better dark theme and more intelligent Java Content Assist," said Todd Williams, vice president of Technology at Genuitec , a founding member of the Eclipse Foundation that offers tools supporting the Eclipse platform such as MyEclipse and Webclipse. "These changes, along with Neon's increased responsiveness, will help ensure that Eclipse remains competitive in its core market segments. " 2016-07-02 13:36 Darryl K

8 Chan Zuckerberg Initiative Selects Andela for First Major Investment Andela, a company that pairs developers in Africa with opportunity in the U.S., has been selected as the first major investment of the Chan Zuckerberg Initiative. "Brilliance is evenly distributed, but opportunity is not." That's the founding principle behind Andela, a 2-year-old startup that's bringing together brilliant developers in Africa with opportunities in America—and that today announced it's the first major funding recipient of the Chan Zuckerberg Initiative. CZI, founded by Facebook founder Mark Zuckerberg and his wife, Priscilla Chan, led Andela's Series B funding with an investment of $24 million. "The round represents a huge vote of confidence from some of the most respected names in technology," Andela CEO and co-founder Jeremy Johnson wrote in a June 16 letter to investors and advisers, shared on the Andela blog. "Not only is it a vote for Andela, but it's also a recognition of the caliber of software developers and human beings that make up the Andela Fellowship." Johnson also welcomed investor GV, formerly Google Ventures, to the Andela family. Zuckerberg acknowledged the investment in a post on his Facebook page. "I was lucky to be born in a wealthy country where I had access to computers and the internet. If I had been born somewhere else, I'm not sure I would have been able to start Facebook—or at least it would have taken a lot longer and been more difficult," he wrote. Zuckerberg added that the talent-opportunity gap is among the most dramatic in Africa, where six out of every 10 Africans are younger than 35, and in some places more than half of them are without work. "Priscilla and I believe in supporting innovative models of learning wherever they are around the world—and what Andela is doing is pretty amazing," Zuckerberg added. Andela has offices in Nairobi, Kenya, and Lagos, Nigeria, where it employs close to 200 engineers. Its four-year Fellows program is highly selective—to date, it has accepted less than 1 percent of the candidates from the more than 40,000 applications it has received. Once selected, Fellows receive 1,000 hours of training over six months and then are paired with a U.S. company in need of development help. Andela educates the Fellow about the customer company's culture and needs and then flies the Fellow to that company's headquarters for two weeks, to build trust with the team members and strategize a roadmap. After that, the U.S. team and the Andela developer communicate online daily. "They're working in your time zone, communicating in your Slack channels and participating in your daily stand-ups," explains the Andela site, emphasizing its goal of providing as friction-free a service as possible. The U.S. company gets a great developer; a brilliant person, with fewer local opportunities, gets a great job; and the hiring process is less stressful and time-consuming for the hiring company, as Andela does all the screening and interviewing on its end, it states. To date, Andela clients include Microsoft, IBM and Udacity. Diversity and Success Diversity is a proven contributor to success, as is the inclusion of women in working groups and particularly in leadership positions. Andela has a goal that 35 percent of its software development team should be women, Christina Sass, one of Andela's four co-founders, told CNN Money, adding that it has been "very disciplined" in that effort.
In the spring, it hosted an all-female boot camp in Kenya and made an effort to communicate to women's families that Andela is a safe place to work. Ultimately, 1,000 women applied, 41 were selected for the boot camp and nine were accepted into Andela, according to CNN. As part of Andela's vision to train 100,000 world-class developers over the next 10 years, on June 14 it announced three- and six-month internship programs in Lagos for creative thinkers, excellent problem solvers and people willing to "become the CEO" of their own work. Benefits, it noted, include breakfast and lunch, a passionate working environment and an opportunity to work with some of the brightest minds on the planet. "Oh," it added, "and a chance to change the world!" 2016-07-02 13:36 Michelle Maisto

9 Google Seeks to Spur Kids' Interest in Coding With Project Bloks A Google research project seeks to build on years of theory and research in the area of tangible programming to interest children in programming at an early age. Google is collaborating with design firm IDEO and a researcher from Stanford University on an effort dubbed Project Bloks that is designed to get kids started on programming at a very young age. The project is inspired by previous and long-standing academic work and research in the area of so-called tangible programming, in which children learn basic programming concepts by manipulating physical objects like wooden blocks. One early example of such work is Tern, a tangible programming language developed several years ago by a graduate student at Tufts University that gave children a way to build basic programs by connecting a set of interlocking blocks together. Each of the blocks represented a specific programming instruction like 'start,' 'stop,' 'turn' or 'move left,' which when put together created a set of basic instructions for a robot to follow. Google's Project Bloks seeks to build on such research by creating what it described on its Research Blog as an open hardware platform that will give designers, developers, educators and others a way to build "physical coding experiences" for children. As a first step in this direction, the company has built a working prototype of a system for tangible programming consisting of three components: a "Brain Board," "Base Boards" and programmable "Pucks." Google's pucks function like the blocks in Tern. Each puck can be programmed with a different function and be placed on a Base Board, which then reads the instruction or instructions on the puck via a capacitive sensor, the company said. Multiple Base Boards can be connected together in different configurations to create various programs. When the Brain Board is attached to the connected Base Boards, it reads the instructions contained in each board and sends them via Bluetooth or WiFi to connected devices such as robots or toys, which then execute the instructions. "As a whole, the Project Bloks system can take on different form factors and be made out of different materials," Steve Vranakis and Jayme Goldstein, two members of Google's Creative Lab, said in the Research Blog. For instance, a puck can be devised out of nothing but a sheet of paper and some conductive ink, according to the two Google researchers. "This means developers have the flexibility to create diverse experiences that can help kids develop computational thinking—from composing music by using simple functions to playing around with sensors or anything else they care to invent," they said. Working with IDEO, Google has developed a Coding Kit, which is a sort of proof-of-concept system for developers to use as a reference. Project Bloks is one of two initiatives that Google announced this week pertaining to children and education. The other is a partnership with digital education company TES Global. Under the effort, Google for Education has set up a new portal on the tes.com Website that will let teachers learn how to use Google Expeditions' virtual reality tours in the classroom. The arrangement with TES will give teachers a way to more easily find and share lessons that are compatible with Google Apps for Education and access free training on Google tools, TES said in a statement. 2016-07-02 13:36 Jaikumar Vijayan

10 N Is For Nougat AKA Android 7.0 Google has revealed that the next version of Android will be named Nougat - which may cause some problems due to variations in its pronunciation - and will be version 7, with a new statue added to its growing collection at the Googleplex. If you want to know more about Google's tradition of naming successive versions of Android after sweet confectionery, and erecting a new statue each time, see the Behind the Scenes video from Nat and Lo included in our report on Android M being Marshmallow. The name was decided on after asking attendees at Google I/O and a poll conducted on the Google Opinion Rewards app, which allowed Android users to have their say. Despite many rumors that "Nutella" might be a suitable choice, it wasn't even on the short list - and looking at the question again, it does look as though Nougat was the obvious choice if you add the constraint that it is required to be sweet! The video Google posted yesterday on YouTube also puts an end to speculation about its version number by including: On June 30th, 2016 we unwrapped our latest treat, Android 7.0 Nougat. So the only questions remaining are when it will be released and how long before it is widely adopted. We have already had four developer previews, the latest of which includes the final SDK with all the APIs in their finalized form, which enables apps to be published for Nougat. This just leaves the fifth and final developer preview, billed as "near-final" and intended for final testing, which is expected this month. Google has mentioned a Q3 launch - which means any time before the end of September. As for when it will arrive with end users, that's a more difficult question. This year's new Nexus phones will be the first to run Nougat, but for those who don't want new handsets it might take longer. For example, after Marshmallow launched on September 29, 2015, it took Samsung until March 3, 2016 to push out its first update to the Galaxy Note 5. Even now, Marshmallow only accounts for 10% of Android usage, and Lollipop, currently at 35%, has only recently replaced KitKat as the most popular version. Let's hope that things proceed in a speedier fashion for Android 7, as it has new features that make it a worthwhile upgrade; see Android N Developer Preview Is Out to be reminded about them. So how do you pronounce Nougat? Well, it's originally French, so perhaps "nooogah"; but in English it sounds rather more guttural, according to this YouTube clip: To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook, Google+ or LinkedIn. 2016-07-02 08:33 Written by

11 Windows 10 Anniversary Update Slated For Aug. 2 Mark your calendars, Windows users. Microsoft has confirmed the Windows 10 Anniversary Update is slated for a public rollout on Aug. 2. Windows 10 was officially released on July 29, 2015. Almost one year later, Microsoft reports over 350 million devices running Windows 10 -- an increase of 50 million since the last device count in May 2016. Customer engagement is also high, with users spending more than 135 billion hours on the OS. To celebrate the one-year mark, Redmond is releasing one of the biggest updates to arrive on Windows 10 since its public rollout. The Anniversary Update includes new features for businesses and consumers. [More on Windows 10: Microsoft paid out $10,000 for a forced OS upgrade.] Security is a priority in the coming update. Two major security features arriving on Aug. 2 are upgrades to Windows Defender and Windows Hello support for apps and websites, part of Microsoft's effort to eliminate the password. The biometric authentication system Windows Hello can be used to log in to apps and websites within Microsoft Edge. As part of the Anniversary Update, Windows users can also use Windows Hello to unlock their PCs using companion devices. For individual users, improvements to Windows Defender will include an option to automatically schedule quick, regular PC scans and receive alerts and summaries if threats are detected. Enterprise customers will receive Windows Defender Advanced Threat Protection, which is designed to detect, investigate, and respond to advanced threats. Businesses will be protected from accidental data leaks with Windows Information Protection, which lets corporations separate personal and business information to better protect sensitive data. Cortana, which first arrived on the desktop in Windows 10, will now be available above the lock screen, so you can set reminders or play music without unlocking your PC. Cortana will also save and recall important information like frequent flier numbers, and give notifications across all devices where it is present. Windows Hello isn't the only improvement arriving in Microsoft Edge. The browser will come with power-saving upgrades like using less memory and fewer CPU cycles, and lessening the effects of background activity. Microsoft has already touted the lasting power of Edge, and this indicates it's doing more to preserve users' battery life. Edge will also be updated with extensions including the Pinterest "Pin It" Button, Amazon Assistant, LastPass, AdBlock, and AdBlock Plus in the Windows Store. It'll also have a new accessibility architecture to support modern web standards like HTML5, CSS3, and ARIA. Microsoft is also working to improve digital pen capabilities with Windows Ink, a central hub for using the pen in Windows 10. You'll be able to use Windows Ink to take notes, draw, or sketch on screenshots. Smart Sticky Notes help you remember tasks and suggest directions. Some of the core apps in Windows 10 have been updated to support inking. You can handwrite notes in Office or Edge, or draw custom routes on the Maps app. Windows 10 has been available as a free upgrade since it launched last summer, but the clock is ticking for anyone still running older versions of Windows. The Anniversary Update will be released a few days after Microsoft stops offering Windows 10 for free to current users of Windows 7, Windows 8, and Windows 8.1. If you want the upcoming features at no cost, be sure to upgrade to Windows 10 before July 29. 2016-07-02 14:05 Kelly Sheridan

12 Hortonworks Commits To Microsoft's Azure Cloud Hadoop distributor Hortonworks used its Hadoop Summit in San Jose this week to get a little closer to one of its top cloud technology partners -- Microsoft. The big data company announced that HDInsight is its Premier partner for Connected Data Platforms -- Hortonworks Data Platform for data at rest and Hortonworks DataFlow for data in motion. "Azure HDInsight as our Premier Connected Data Platforms cloud solution gives customers flexibility to future proof their architecture as more workloads move to the cloud," Hortonworks CEO Rob Bearden wrote in a prepared statement released June 28. The closer partnership with Microsoft was one of several announcements from Hortonworks during the Hadoop Summit this week. The company also updated its Hortonworks Data Platform package with features for enterprise customers, introduced a new precision medicine consortium to explore a next-generation open source platform for genomics research, and struck a partnership with AtScale to advance business intelligence on Hadoop. [Another Hadoop distributor, MapR, also recently released an update. Read MapR Spyglass Initiative Eases Big Data Management.] Hortonworks Data Platform (HDP) 2.5 is the newest version. The company says it offers enterprise-ready features, including an integration of comprehensive security and trusted data governance that leverages Apache Atlas and Apache Ranger. The company has also included a host of other open source big data technologies to make the package an enterprise-grade experience. The platform now also offers the web-based data science notebook Apache Zeppelin for interactive data analytics and the creation of interactive documents with SQL, Scala, Python, and other tools. The inclusion of the most recent version of Apache Ambari gives enterprises support for planning, installing, and securely configuring HDP, and for performing ongoing maintenance and management of the systems. Also, a new role-based access control model now lets administrators provide different users with different functional access to the cluster. To improve developer productivity, the company has added Apache Phoenix Query Server to enable more choices of development languages for accessing data stored within HBase. Apache Storm now allows for large-scale deployments for real-time stream processing. The new version also includes new connectors for search and NoSQL databases, according to Hortonworks. Hortonworks also announced a new partnership with AtScale, offering that startup's technology for enabling SQL-type queries against data resident in Hadoop. "From day one, our goal has been to make BI and Hadoop work in harmony by erasing the friction associated with moving data and forcing end users to learn new BI tools," wrote AtScale CEO Dave Mariani in a prepared statement. AtScale's technology will be available via Hortonworks in the third quarter, the companies said. Hortonworks also announced its own plan to participate in the precision medicine space with the formation of a new consortium "to define and develop an open source genomics platform to accelerate genomics based precision medicine in research and clinical care." In addition to Hortonworks, initial members of this consortium include Arizona State University, Baylor College of Medicine, Booz Allen Hamilton, Mayo Clinic, OneOme, and Yale New Haven Health.
Hortonworks said that this consortium will take on the task of defining the requirements and addressing the limitations of current technology for storing massive volumes of genomic information, analyzing it, and querying it at scale in real time. Hortonworks noted the consortium will apply "Design Thinking" to this problem. "Unleashing the power of data through open community and collaboration is the right approach to solve a complex problem like precision medicine," DJ Patil, chief data scientist, White House Office of Science and Technology Policy, wrote in a prepared statement. "Initiatives like this one will break data silos and share data in an open platform across industries to speed genomics-based research and ultimately save lives." 2016-07-02 11:05 Jessica Davis

13 10 essential free business apps, tools and services for SMBs and startups Running a small business can be stressful, as you try to deal with all the competing demands on your time. That's quite normal – it's difficult when you have to do almost everything yourself – but fortunately there are plenty of free apps and services which can make your life easier. Do you need to get more organised, for instance? Keep important information in one place? Manage projects, track how you're spending your time, communicate more effectively with others, or automate your invoicing? We've found ten essential free services to help you with all these tasks, and many more. Managing information is a key part of running any small business, and Evernote is the ideal service to help. Grab a web image, an article or a URL, make a note or a to-do list, and it's instantly uploaded and synced across all your devices. Browse through this data later on and you're able to organise and search it, maybe share the details with others and take part in group discussions. Overall, Evernote is a versatile package and a convenient way to keep track of just about anything. If you need more, the Plus account (£19.99 per year, which is around $30, AU$40) lifts the monthly upload limit from 60MB to 1GB, also adding saving and offline access, while Evernote Premium includes email searching, PDF annotations, scanning and digitising business cards, and more. Every business needs good productivity software, but that doesn't have to mean spending big money on Office 365. Google Docs' document editor, spreadsheet, presentation tool and survey builder are easy to use, with plenty of templates and guidance to help you get started, yet are still more than powerful enough for most tasks. The core office suite is just the start. You can extend the package with some quality add-ons, many of them free. You're able to access your data from any browser on any device. Others in your business can do the same, which is ideal when you're collaborating on a project, and because it's all hosted in the cloud, you don't even have to spend time and money on backups. Skype's classic app is a convenient way to send instant messages, and make free voice or video calls to any other user, whether they're on Windows, Macs, phones or tablets. This isn't just about personal chats. You're able to send files, maybe share your entire screen if you need an instant opinion on something, or set up an instant audio conference for groups of up to 25 people. The service has its share of technical glitches, and if you check the Google or Apple stores you'll find some very mixed reviews. But the response is still broadly positive, it's worked well for us in the past, and upgrading to Skype for Business gets you extras like Outlook integration and group meetings of up to 250 people. The easiest way to communicate within a business is often to send emails, or maybe instant messages, but there's little structure to this. Key information is held in different places, making it difficult to track. Slack offers a smarter, cloud-based solution. Sign up as many users as you need, then create 'channels' for specific purposes: individual projects, customers, whatever makes sense to you. Channels can be private or public, and used to send messages, share images, documents, spreadsheets and more. Everything you add is indexed right away, becomes searchable, and is immediately synced across desktop and mobile devices.
Slack's free account has some limitations – there are no group calls allowed (two-person only) and only 5GB total storage – but if you need a serious collaboration tool, it's still a solid choice. Most CRMs are complex, and they take time to set up and learn, but Streak is different. It runs inside Gmail, giving the system easy access to the contacts, emails and files you have already. In a click or two you can group or view related emails, add and track customer status, notes and more. 'Boxes' help you define where you are in sales or other pipelines, and it's easy to keep everyone in the business up-to-date on your progress. Bonus features include some handy Gmail power tools, including Snoozing, Send Later, Merge, Templates, Thread Splitting, and 200 tracked emails per month. Streak's free plan has strict limits on the amount of data you can share with other users in your business, but if it's mostly for you, the service could work very well. Running a small business can be challenging, and will probably soak up every spare minute you have, so it's important to ensure you're working as efficiently as possible. And that's where toggl comes in. This easy-to-use time tracker records the websites and applications you're using on just about any device, detects idle time, allows tasks to be added manually, and produces detailed reports on demand. It's a simple way to find out how you're spending your time, but that's not all. Use toggl across your business (up to five people in the free version) and a web dashboard enables tracking your employees and seeing exactly what they've been doing at any time. Toggl's commercial plans support unlimited users, billable rates, time estimates, enhanced reporting and more, but the price is high (from £6/$9/AU$12 per user per month) and the free account will be enough for many. Trello is a web-based project manager which has something to offer absolutely everyone, from individual home users to the most demanding of businesses. The service uses 'boards' to hold lists containing whatever information you need: notes, checklists, due dates, file attachments and more. It's a very visual approach, and just building a board will help organise your ideas. Next, invite other people to the board, maybe assign them some tasks, and they can add comments, notes, files, maybe a few lists of their own. Your board syncs with other devices whenever something changes, so you're always up-to-date, and notifications and powerful search tools help you find whatever you need. Trello's free account does have some limits, in particular a 10MB maximum for file attachments, but overall it's still a very usable, high quality service. Wave offers simple, straightforward invoicing tools, free of charge, yet with more features than some of the competition. The program has no annoying limits on the number of invoices or customers it supports, for instance. You can set up recurring invoices for regular transactions, use multiple currencies and set any sales tax. The package is also able to track late payments, create receipts, estimates, quotes and more. There's even support for accepting credit card payments right away, without any setup fee or other hassles. It's not quite the bargain it seems because the commission you pay per transaction is higher than some commercial products, but if you'll only be taking very few payments then Wave still looks like a good deal.
Wunderlist is a superb "to-do list" service, ideal for managing anything from a simple shopping trip to a complicated business project. You could start by adding a few simple text notes, then extend your list with web content, or by forwarding emails, and add reminders or due dates to make sure nothing gets missed. Share lists with others, add notes and comments, organise related lists into folders, and everyone will know exactly what has to be done. And because Wunderlist syncs across all your devices – PC, Android, iOS, Mac, Windows Phone, Chromebook and the web – you'll hear about completed tasks almost as soon as they're finished. You probably won't agree with everything we've included in this slideshow. Maybe some of our choices don't quite hit the mark, or perhaps there's an area we haven't covered. Whatever it is, take a look at Zoho – there's a good chance you'll find something to help. The site offers website creation, for instance, along with contact management – and a CRM. There's also a project manager, virtual meeting tools, a productivity suite, invoice and accounting software, visual reporting apps, support and helpdesk tools, and the list goes on. There are free accounts for just about everything, too. Some are relatively limited – the Project Manager supports only one active project with 10MB storage – but others are very capable. For instance, the invoicing package supports invoicing up to 25 customers and has more features than some commercial products. Browse the site for a while and you're sure to find something you can use. 2016-07-02 08:30 By Mike

14 10 Hot Smartphones To Consider Now Choosing the best smartphone can be a difficult proposition. You may already have a particular platform preference, which limits your options. Or perhaps you favor a particular service provider based on reception in your area. Whatever you choose, there will be something more tempting soon enough. The smartphone product cycle ensures that. At the moment, Apple and Google are worth watching, both for what they have planned and for their responses to what other hardware makers like Samsung and Lenovo have introduced already this year. Apple's iPhone 7, due this fall, has people worried because reports suggest it won't have an analog headphone audio port. Instead, it is expected to include a Lightning port that can handle charging, data, and digital audio. The reason this is worrisome is that digital audio can be subject to technological controls like DRM, unlike analog audio. It also likely means that third-party vendors will have to apply to Apple's MFi program, which involves rules and fees, to create hardware like headphones that work with the phone's proprietary Lightning port. There may be benefits to iPhone 7 customers in the form of reduced phone size or more room for other components like the battery. But the cost appears to be reduced freedom to create peripherals that connect to the analog audio port, reduced peripheral choice, and increased peripheral cost to offset MFi licensing fees. Apple reportedly reduced those fees in 2014, but they still figure into hardware makers' margins and prices. The iPhone 7 also concerns investors because in April Apple reported a decline in revenue and iPhone sales, after years of uninterrupted growth. IDC's explanation for this was that the changes from the iPhone 6 to the 6S were insufficient to drive upgrades. At least that's part of the story. In any event, there's pressure on Apple to add new features that really make the iPhone 7 desirable and unique. Unfortunately for Apple, one of the most obvious possible features, water resistance, is already available in Samsung's Galaxy S7 and S7 Edge. Apple, which sued Samsung years ago for copying the iPhone, now appears to be copying Samsung. While there's lots of idea borrowing among tech companies, that's not the sort of market perception Apple wants. [See Mobile App Development: 8 Best Practices.] Google, meanwhile, has promised to deliver developer versions of its Project Ara modular phone this fall, with general availability planned for next year. Project Ara has been scaled back a bit -- the CPU, display, and RAM won't be removable -- but it still has the potential to change the dynamics of the smartphone market. Other handset makers like LG are already experimenting with limited modularity. If Project Ara succeeds, smartphones may become a bit more open and more conducive to third-party participation from peripheral makers. But Google has to demonstrate that Project Ara phones won't just be bigger and more expensive than smartphone designs that don't contemplate expansion or modification. While we wait, here are nine great smartphones you can pick up today, and one to look forward to in a few months. Take a look and let us know what you think in the comments below. Would you consider these models? Did we miss your favorite smartphone? 2016-07-02 07:06 Thomas Claburn

15 Microsoft Finally Rolls Out Double Tap to Wake for Lumia 950 and 950 XL Both the Lumia 950 and the 950 XL have lacked support for Double Tap to Wake, a feature that allows you to wake the device by simply double-tapping the screen. This makes waking the phone super easy because you don't have to press the physical lock button every time. And after Microsoft confirmed earlier this year that it was working to add Double Tap to Wake to the Lumia 950 and 950 XL, it turns out that this feature is now ready and shipping to users as part of a new firmware update. There's talk on Reddit that a new firmware version for the Lumia 950 and 950 XL is bringing this feature to devices, but it appears that not everyone is getting it for the moment. After installing the new firmware update, users can find a new option in the Touch settings screen that lets them enable an option which "wakes up the phone when I double tap on the screen." By default, this option is off, so you need to enable it manually. Microsoft confirmed that it was working on such a feature earlier this year, but no release date was provided at that time. "Thank you for your feedback! We are aware of this issue. We are working on adding the support for double tap to wake for devices that support this feature," a company engineer said in the Feedback Hub after many requests and votes submitted by buyers of the new Windows 10 Mobile device. For the moment there's still no information on when exactly this new firmware could arrive for all users, but you can check to see if you get it with the Windows Device Recovery Tool right now. 2016-07-02 06:37 Bogdan Popa

16 PayPal Announces New Windows Phone App, Says It Won't Happen Minutes After That Although PayPal has clearly stated that it wanted to remove its Windows Phone app, a message displayed to some users indicated that the company was working on a new app, possibly with Windows 10 support. "Good news is on its way – we're hard at work creating a brand new PayPal app for you. We'll let you know as soon as it's ready. Meanwhile, you can access your account, send money and more by visiting paypal.co.uk," the message revealed. And obviously, this was good news for everyone with Windows phones, especially because PayPal is one of the essential apps for so many users. But it looks like, in the end, that message wasn't supposed to show up in the PayPal app on Windows phones, and the company quickly removed it and published one that only causes more frustration. "Sorry about that! Our app for Windows is no longer available. You can access your account, send money, and more using our mobile website," the new message reads. Furthermore, there are reports coming from users who claim that updating to the latest version of the PayPal app leads to various problems, including unexpected crashes. It's not yet clear whether PayPal is indeed working on a Windows Phone app and the earlier message was just a teaser, or whether the company is abandoning the platform completely, so there's no doubt that all these things are only causing more confusion in the Windows community. We've contacted PayPal to see if there are any plans for a new Windows Phone, and maybe Windows 10, app, but we don't expect anything more than a "no comment" statement. Either way, we'll update the article when we hear back from them. 2016-07-02 06:18 Bogdan Popa

17 Microsoft Talks Ubuntu on Windows 10, Offers Video Tutorial The Windows Subsystem for Linux is already available in beta for users running preview builds of Windows 10, and because it's such an important feature for the Anniversary Update, a Microsoft employee has decided to post a detailed video tutorial on how to set it up and use it on your PC. Microsoft's Scott Hanselman takes viewers through a step-by-step video guide to successfully configuring Bash on Windows 10 and using it for basic tasks. But as he points out, Bash can be used for more advanced things too, so it's up to developers to unleash the full potential of this new feature. As Hanselman puts it, this clip is "a 20 min video screencast showing what you need to do to enable and some cool stuff that just scratches the surface of this new feature. Personally, I love that I can develop with Rails on Windows and it actually works and isn't a second class citizen. If you're a developer of any kind this opens up a whole world where you can develop for Windows and Linux without compromise and without the weight of a VM." Certainly, these how-to videos come in handy for those who are trying to make the most of the Windows 10 Anniversary Update, and they land at the right time given that the debut of the new OS is projected to take place in approximately one month. The Windows 10 Anniversary Update will bring a plethora of other changes, including Edge extension support, an overhaul of the Start menu, Cortana, and the Action Center, and many other things that will improve your experience with the operating system. 2016-07-02 05:58 Bogdan Popa

18 Slackware Linux 14.2 Officially Released with Linux Kernel 4.4, without systemd Slackware Linux 14.2 arrives two and a half months after the mid-April release of the second and last Release Candidate (RC) build, and it has now been declared stable and ready for deployment as your daily driver. Powered by the latest (at the moment of writing this article) long-term supported Linux 4.4.14 kernel, Slackware 14.2 ships with many up-to-date components and GNU/Linux technologies. For example, the distro switched to GNU C Library 2.23, X.Org 7.7, GCC 5.3.0 as the default compiler, LLVM/Clang support, Apache 2.4.20, PHP 5.6.23, Perl 5.22.2, Python 2.7.11, Ruby 2.2.5, Subversion 1.9.4, Git 2.9.0, Mercurial 3.8.2, the KDE Software Compilation 4.14.21 desktop environment, and much more. Best of all, there's support for Linux kernel 4.6 in the /testing directory. "We are sure you'll enjoy the many improvements. We've done our best to bring the latest technology to Slackware while still maintaining the stability and security that you have come to expect. Slackware is well known for its simplicity and the fact that we try to bring software to you in the condition that the authors intended," said Patrick J. Volkerding in today's announcement. As we can see from the release notes, there's no sign of the next-generation systemd init system in Slackware 14.2; it looks like the operating system is using a combination of eudev, udisks, and udisks2, as well as several of freedesktop.org's specifications, to allow system administrators to grant use of different hardware devices like MP3 players, USB disks, CD-ROM/DVD-ROM units, and much more. UEFI (Unified Extensible Firmware Interface) support is available on the 64-bit (x86_64) version of Slackware 14.2, and the latest NetworkManager is available, offering users support for a broad range of network connections, including IPv6, VPN, mobile broadband, wired and wireless, with support for fully encrypted network connections thanks to technologies like OpenVPN, OpenSSH, OpenSSL, and GnuPG. Last but not least, there are a host of open-source applications, among which we can mention SeaMonkey 2.40, Mozilla Firefox ESR 45.2.0, 45.1.1, Pidgin 2.10.12, GIMP 2.8.16, HexChat 2.12.1, XSane 0.999, Pan 0.139, GKrellM 2.3.7, and lots of others in the /extra directory. Download Slackware 14.2 for 32-bit and 64-bit computers right now via our website. 2016-07-02 05:47 Marius Nestor

19 Four Things Your Business Does That Seem Outdated to Programmers Good software developers are difficult to find, with so much competition for professionals who have the latest skills. Businesses often pay top dollar to lure in the best talent, only to find they leave soon after. In fact, IT professions in general have some of the highest turnover rates of all trades, often because employees have the luxury of easily moving to a new employer for more pay or better working conditions. "We're finding that a lot of in-demand tech talent are often choosing to freelance in order to take advantage of a variety of improved quality of life options," says Rishon Blumberg, founder of 10x Management, which bills itself as the first tech talent agency. "The companies that are having the best luck attracting and retaining tech talent are increasingly offering many of these same quality of life options to their W2 employees. In addition to adapting your company to work with agile talent, the best way to keep and attract talent is to offer the flexibility that the market is demanding." Whether your business hires salaried programmers or relies on freelancers, you likely feel challenged to attract and retain skilled programmers. In actuality, it could be that they see some of your processes as outdated. Here are a few things your business may be doing to scare innovative developers away. Telecommuting is on the rise, with more people working from home each year. However, there are still organizations that prefer to have employees on site, where they can be available for meetings and monitored by supervisors. Software development requires hours of focus, with distractions serving as a productivity drain. Employers who take a strict stance against telecommuting risk losing developers to the many businesses that now allow remote work. From the time you post your job ad, you may find that some of the top programmers skip it once they see that you won't allow telecommuting, giving your competitors the edge in hiring the best developers. Even if you prefer to have employees on site at least some of the time, consider giving employees who can work from home, such as application developers, the freedom to telecommute at least two to three days per week. When surveyed, professionals across all industries make no secret of the fact that they hate meetings. With so many collaboration tools now available, that weekly meeting to check in does nothing more than waste everyone's time. Social media-style collaboration tools can make project updates more fun, letting everyone check in with an update on a daily, weekly, or bi-monthly basis. Your employees won't be forced to sit around a table, listening to what Frank from accounting is working on this week, and you'll have a written record of everyone's responses that you can refer to when needed. You'll also avoid excluding freelancers, who often aren't included in regular meetings. When a professional has an in-demand skillset, even one year without a pay increase can be enough incentive to start a job search. Organizations that have set-in-stone pay standards may scare salaried and contract programmers away, especially if they only offer small increases every year or two. Developers may actually be contacted by recruiters with offers of higher pay. For salaried workers, it's important that supervisors conduct regular evaluations and note any certifications or advanced skills a developer has picked up over the course of the year.
If your pay is lower than the market average and you can't afford higher salaries, consider bringing in freelancers whom you can work with on designated projects. Developers design applications that automate processes. When your organization relies on outdated processes like paper timesheets or faxed documents, developers may question your business's technological integrity. Invest in processes that eliminate paper and improve productivity, such as automated HR tools and document-signing software. Not only will these solutions improve productivity, they'll demonstrate to your employees, contractors, and clients that your business is forward-thinking enough to embrace the latest technology in everything you do. If one of your developers mentions an easier way to accomplish something, listen to the suggestion and consider putting it to use. Often, your IT team members will be able to save your business time and money by recommending ways you can automate. A business's development team is one of its most valuable assets, helping create Web sites and applications that connect with customers and make employees' lives easier. It's important to invest in up-to-date processes to attract innovative employees and keep them, showing them an innovative culture that will help them grow and thrive. 2016-07-02 00:00 Drew Hendricks

20 A Deeper Look: Java Thread Example The concept of a thread becomes more intriguing as we dive deeper into its construct from different perspectives, beyond the gross idea of multitasking. The Java API is rich and provides many features to deal with multitasking with threads. It is a vast and complex topic. This article is an attempt to engross the reader in some concepts that aid in better understanding Java threads, eventually leading to better programming. A program in execution is called a process. It is an activity that contains a unique identifier called the Process ID, a set of instructions, a program counter—also called instruction pointer—handles to resources, address space, and many other things. A program counter keeps track of the current instruction in execution and automatically advances to the next instruction at the end of the current instruction's execution. Multitasking is the ability to execute more than one task/process at a single instance of time. It definitely helps to have multiple CPUs to execute multiple tasks all at once. But, in a single CPU environment, multitasking is achieved with the help of context switching. Context switching is the technique where CPU time is shared across all running processes and processor allocation is switched in a time-bound fashion. To schedule a process to allocate the CPU, a running process is interrupted to a halt and its state is saved. The process that has been waiting, and whose state was saved earlier, is restored to gain its processing time from the CPU. This gives the illusion that the CPU is executing multiple tasks, while in fact parts of instructions from multiple processes are executed in a round-robin fashion. Note, however, that even with multiple CPUs, only as many instructions can truly run at once as there are processing units; executing 2 of 200 pending instructions in parallel does not make everything simultaneous, so what we harness in practice is an approximation of true multiprocessing. There is a problem with the independent execution of multiple processes: each of them carries the load of a non-sharable copy of resources. Much of this could easily be shared across multiple running processes, yet processes are not allowed to do so because they usually do not share address spaces with one another. If they must communicate, they can do so only via inter-process communication facilities such as sockets or pipes, and so forth. This poses several problems in process communication and resource sharing, apart from making the process what is commonly called heavyweight. Modern operating systems solved this problem by creating multiple units of execution within a process that can share resources and communicate across execution units. Each of these single units of execution is called a thread. Every process has at least one thread and can create multiple threads, bounded only by the operating system's limit of allowed shared resources, which usually is quite large. Unlike a process, a thread has only a couple of concerns: a program counter and a stack. A thread within a process shares all its resources, including the address space. A thread, however, can maintain a private memory area called Thread Local Storage, which is not shared even with threads originating from the same process. The illusion of multi-threading is established with the help of context switching.
Unlike context switching between processes, a context switch between threads is less expensive because thread communication and resource sharing are easier. Programs can be split into multiple threads and executed concurrently. A modern machine with a multi-core CPU can further leverage performance with threads that may be scheduled on different processors to improve the overall performance of program execution. A thread is associated with two types of memory: main memory and working memory. Working memory is very personal to a thread and is non-sharable; main memory, on the other hand, is shared with other threads. It is through this main memory that the threads actually communicate. However, every thread also has its own stack to store local variables, like the pocket where you keep quick money to meet your immediate expenses. Because each thread has its own working memory that includes processor cache and register values, it is up to the Java Memory Model (JMM) to maintain the accuracy of shared values that may be accessed by two or more competing threads. In multi-threading, one update operation to a shared variable can leave it in an inconsistent state unless it is coordinated in such a way that some other thread gets an accurate value even in some random read/write operation on the shared variable. JMM ensures reliability with various housekeeping tasks; some of them are as follows. Atomicity guarantees that a read and write operation on any field is executed indivisibly. Now, what does that mean? According to the Java Language Specification (JLS), int, char, byte, float, short, and boolean operations are atomic, but double and long operations are not. Consider assigning a 64-bit long value: internally, it may involve two separate operations, one that writes the first 32 bits and a second that writes the last 32 bits. Now, what if we are running a 64-bit Java? The Java Language Specification (JLS) reference provides the following explanation: "Some implementations may find it convenient to divide a single write action on a 64-bit long or double value into two write actions on adjacent 32-bit values. For efficiency's sake, this behaviour is implementation-specific; an implementation of the Java Virtual Machine is free to perform writes to long and double values atomically or in two parts. Implementations of the Java Virtual Machine are encouraged to avoid splitting 64-bit values where possible. Programmers are encouraged to declare shared 64-bit values as volatile or synchronize their programs correctly to avoid possible complications." This specifically is a problem when multiple threads read or update a shared variable. One thread may update the first 32-bit value and, before it updates the last 32 bits, another thread may pick up the intermediate value, resulting in an unreliable and inconsistent read operation. This is the problem with instructions that are not atomic. However, there is a way out for long and double variables: declare them as volatile. Volatile variables are always written into and read from main memory; they are never cached. Alternatively, synchronize the getter/setter, or use AtomicLong from the java.util.concurrent.atomic package. All three options appear in the sketch below.
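The code listings from this article did not survive in this digest, so here is a minimal sketch of the options just described (the class and member names are illustrative, not the author's); the synchronized block at the end is discussed in the next section:

```java
import java.util.concurrent.atomic.AtomicLong;

public class SharedCounter {

    // Option 1: volatile guarantees that reads and writes of this 64-bit
    // value are never split into two 32-bit halves and always go through
    // main memory. Compound actions such as value++ are still not atomic.
    private volatile long volatileValue;

    // Option 2: guard a plain long with synchronized accessors; the
    // object's monitor makes each read and write mutually exclusive.
    private long guardedValue;

    public synchronized long getGuardedValue() {
        return guardedValue;
    }

    public synchronized void setGuardedValue(long newValue) {
        guardedValue = newValue;
    }

    // Option 3: AtomicLong offers atomic reads, writes, and
    // read-modify-write operations without explicit locking.
    private final AtomicLong atomicValue = new AtomicLong();

    public void increment() {
        atomicValue.incrementAndGet();
    }

    // A synchronized block protects only the statements that need it,
    // instead of designating the whole method as a critical section.
    public void resetAll() {
        synchronized (this) {
            volatileValue = 0L;
            guardedValue = 0L;
            atomicValue.set(0L);
        }
    }
}
```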
Synchronization of thread communication is another issue that can be quite messy unless handled carefully. Java, however, provides multiple ways to establish communication between threads. Synchronization is one of the most basic mechanisms among them. It uses monitors to ensure that shared variable access is mutually exclusive. Any competing thread must go through lock/unlock procedures to get access. On entering a synchronized block, the values of all variables in the working memory are reloaded from main memory and are written back as soon as the thread leaves the block. This ensures that, once a thread is done with a variable, it leaves the value in memory so that some other thread can access it soon after the first thread is done. There are two types of thread synchronization built into Java. A critical section in code is designated with reference to an object's monitor, and a thread must acquire the object's monitor before executing the critical section of code. To achieve this, the synchronized keyword can be used in two ways: either declare a whole method as a critical section (as the synchronized accessors in the sketch above do), or create a critical section block with synchronized (lock) { ... } around just the statements that need protection (as in the resetAll() method above). The JVM handles the responsibility of acquiring and releasing an object monitor's lock. The use of the synchronized keyword simply designates a block or method as critical. Before entering the designated block, a thread first acquires the monitor lock of the object and releases it as soon as its job is done. There is no limit on how many times a thread can acquire an object monitor's lock, but it must release the lock for another thread to acquire the same object's monitor. This article tried to give one perspective on what a Java thread means; it is a rudimentary explanation that omits many details. The thread construct in Java is deeply associated with the Java Memory Model, especially in how its implementation is handled by the JVM behind the scenes. Perhaps the most valuable literature for understanding the idea is the Java Language Specification and the Java Virtual Machine Specification. They are available in both HTML and PDF formats. Interested readers may go through them to get a more elaborate idea. 2016-07-02 00:00 Manoj Debnath

21 Introducing ASP.NET Core Dependency Injection If you have developed professional Web applications using ASP.NET MVC, you are probably familiar with Dependency Injection. Dependency Injection (DI) is a technique for developing loosely coupled software systems. ASP.NET MVC didn't include any inbuilt DI framework, and developers had to resort to an external one. Luckily, ASP.NET Core 1.0 introduces a DI container that can simplify your work. This article introduces you to the DI features of ASP.NET Core 1.0 so that you can quickly use them in your applications. To understand how Dependency Injection works in ASP.NET Core 1.0, you will build a simple application. So, begin by creating a new ASP.NET Core 1.0 Web application using the Empty project template. Figure 1: Opening the new template Then, open the Project.json file and add the required dependencies (you can get Project.json from this code download). Make sure to restore packages by right-clicking the References folder and selecting Restore Packages from the shortcut menu. Then, create a DIClasses folder under the project root folder. Add an interface named IServiceType to the DIClasses folder. A type that is to be injected is called a service type. The IServiceType interface will be implemented by the service type you create later. The IServiceType interface contains a single method—GetGuid(). As the name suggests, an implementation of this method is supposed to return a GUID to the caller. In a realistic case, you can have any application-specific methods here. Then, add a MyServiceType class to the DIClasses folder and implement IServiceType in it. The MyServiceType class declares a private variable—guid—that holds a GUID. The constructor generates a new GUID using the Guid structure and assigns it to the guid private variable. The GetGuid() method simply returns the GUID to the caller. So, every object instance of MyServiceType will have its own unique GUID. This GUID will be used to understand the working of the DI framework, as you will see later. Now, open the Startup.cs file and register the service type with the ASP.NET Core DI container in ConfigureServices(). The AddScoped() method is a generic method: you mention the interface on which the service type is based (IServiceType) and a concrete type (MyServiceType) whose object instance is to be injected. A type injected with AddScoped() has a lifetime of the current request. That means each request gets a new object of MyServiceType to work with. Let's test this by injecting MyServiceType into a controller. Proceed by adding a HomeController and an Index view to the respective folders. The constructor of the HomeController accepts a parameter of IServiceType. This parameter will be injected by the DI framework for you. Remember that, for DI to work as expected, a type must be registered with the DI container (as discussed earlier). The IServiceType injected by the DI framework is stored in a private variable—obj—for later use. The Index() action calls the GetGuid() method on the MyServiceType object and stores the GUID in ViewBag's Guid property. A consolidated sketch of the interface, the implementation, the registration, and the controller appears below.
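The article's listings were dropped from this digest; the following is a minimal reconstruction assuming only the names given in the prose (IServiceType, MyServiceType, AddScoped(), HomeController), and should be read as a sketch rather than the author's exact code:

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

public interface IServiceType
{
    string GetGuid();
}

public class MyServiceType : IServiceType
{
    // Every instance gets its own GUID, so the value returned below
    // reveals the lifetime of the injected object.
    private readonly string guid;

    public MyServiceType()
    {
        guid = Guid.NewGuid().ToString();
    }

    public string GetGuid()
    {
        return guid;
    }
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // One instance per HTTP request; swap in AddTransient() for a new
        // instance per resolution, or AddSingleton() for a single shared one.
        services.AddScoped<IServiceType, MyServiceType>();
        services.AddMvc();
    }
}

public class HomeController : Controller
{
    private readonly IServiceType obj;

    // The DI container supplies the registered implementation here.
    public HomeController(IServiceType obj)
    {
        this.obj = obj;
    }

    public IActionResult Index()
    {
        ViewBag.Guid = obj.GetGuid();
        return View();
    }
}
```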
The Index view simply outputs this GUID. Now, run the application and you should see something like this: Figure 2: Viewing the GUID Refresh the browser window a few times to simulate multiple requests. You will observe that a new GUID is displayed every time. This confirms the working of AddScoped() as discussed previously. There are two more methods that can be used to control the lifetime of the injected object—AddTransient() and AddSingleton(). A service registered using AddTransient() behaves such that every request for an object gets a new object instance. So, if a single HTTP request requests a service type twice, two separate object instances will be injected. A service registered using AddSingleton() behaves such that all the requests to a service are served by a single object instance. Let's test these two methods, one by one. Modify Startup.cs to use the AddTransient() method to register the service type. Then, give the HomeController two parameters of IServiceType. This is done just to simulate two requests to the same service type. The GUIDs returned by both object instances are stored in the ViewBag. If you output the GUIDs on the Index view, you will see this: Figure 3: Viewing the GUIDs on the Index view As you can see, the GUIDs are different within a single HTTP request, indicating that different object instances are getting injected into the controller. If you refresh the browser window, you will get different GUIDs each time. Now, modify Startup.cs and use AddScoped() again to register the type. Run the application again. Did you notice the difference? Now, both constructor parameters point to the same object instance, as confirmed by the GUIDs. Next, change Startup.cs to use the AddSingleton() method. Also, make the corresponding changes to the HomeController (it will now have just one parameter) and the Index view. If you run the application and refresh the browser as before, you will observe that the same GUID is displayed for all requests, confirming the singleton mode. 2016-07-02 00:00 Bipin Joshi

22 What Is Jenkins? If you have never heard about Jenkins, or you never quite understood what it is useful for, this article is for you. In the next few minutes, we will have an overview of Jenkins meant to introduce you to this comprehensive tool dedicated to automating any kind of project. Basically, Jenkins is an open source project written in Java and dedicated to sustaining continuous integration (CI) practices. The tasks that Jenkins can solve are related to project automation; more exactly, Jenkins is fully able to automate the build, test, and integration of our projects. For example, in this article you will see how to chain GitHub->Jenkins->Payara Server to obtain a simple CI environment for a Hello World Spring-based application (don't worry, you don't need to know Spring). So, let's delve a little into the Jenkins goals. We begin with the installation of Jenkins 2, continue with major settings/configurations, install specific plug-ins, and finish with a quick-start example of automating a Java Web application. In this article, we will assume a Windows environment (the steps below use Windows 7, 64-bit). To download Jenkins, simply access the official Jenkins Web site (https://jenkins.io/) and press the button labeled Download Jenkins, as seen in Figure 1: Figure 1: Download Jenkins We go for the weekly release, which is listed on the right side. Simply expand the menu button from Figure 1 and choose the distribution compatible with your system (OS) and needs. For example, we will choose to install Jenkins under Windows 7 (64-bit), as you can see in Figure 2: Figure 2: Select a distribution compatible with the system Notice that, even if the name is 2.5.war, for Windows we will download a specific installer. After the download, you should obtain a ZIP archive named jenkins-2.5.zip. Simply un-zip this archive to a convenient location on your computer. You should see an MSI file named jenkins.msi. Double-click this file to proceed with the very simple installation steps. Basically, the installation should go pretty smoothly and should be quite intuitive; we installed Jenkins in the D:\jenkins 2.5 folder. At the end, Jenkins will be automatically configured as a Windows service and will be listed in the Services application, as in Figure 3: Figure 3: Jenkins as a Windows service Besides setting Jenkins up as a service, you will notice that the default browser was automatically started, as shown in Figure 4: Figure 4: Unlock Jenkins Well, this is the self-explanatory login page of Jenkins, so simply act accordingly to unlock Jenkins. In our case, the initialAdminPassword was 9d9f510d8ef043e98f7c574b3ea8adc0. Don't bother typing this password; simply use copy-paste. After you click the Continue button, you can see the page from Figure 5: Figure 5: Install Jenkins plug-ins Because we are using Jenkins for the first time, we prefer to go with the default set of plug-ins. Later on, we can install more plug-ins, so you don't have to worry that you didn't install a specific plug-in at this step. Notice that installing the suggested plug-ins may take a while, depending on your Internet connection (network latency), so be patient and wait for Jenkins to finish this job for you. While this job is in progress, you should see verbose monitoring that reveals the progress status, plug-in names, and the dependencies downloaded for those plug-ins. See Figure 6: Figure 6: Monitoring plug-ins installation progress You can use this time to spot some commonly used plug-ins, such as Git, Gradle, Pipeline, Ant, and so forth.
After this job is done, it is time to set up an admin user for Jenkins. You need at least one admin, so fill in the requested information accordingly (Figure 7): Figure 7: Create the first Jenkins admin If you press Continue as admin, Jenkins will automatically log you in with these credentials and you will see the Jenkins dashboard. If you press the Save and Finish button, you will not be logged in automatically and you will see the page from Figure 8: Figure 8: Start using Jenkins If you choose Save and Finish (or whenever you are not logged in), you will be prompted to log in via a simple form, as in Figure 9: Figure 9: Log in to Jenkins as admin After login, you should see the Jenkins dashboard, as in Figure 10: Figure 10: Jenkins dashboard So far, you have successfully downloaded, installed, and started Jenkins. Let's go further and see several useful and common configurations. To work as expected, Jenkins needs a home directory and, implicitly, some disk space. In Windows (on a 64-bit machine), by default, the Jenkins home directory (JENKINS_HOME) is the place where you have installed Jenkins. In our case, this is D:\jenkins 2.5. If you take a quick look into this folder, you will notice several sub-folders and files, such as the /jobs folder, used for storing job configurations; the /plugins folder, used for storing installed plug-ins; and the jenkins.xml file, containing some Jenkins configurations. So, this folder is where Jenkins stores plug-ins, jobs, workspaces, users, and so on. Now, let's suppose that we want to move the Jenkins home directory from D:\jenkins 2.5 to C:\JenkinsData. To accomplish this task, we need to follow several steps. By default, Jenkins will start on port 8080. In case you are using this port for another application (for example, application servers such as Payara, Wildfly, and the like), you will want to manually set another port for Jenkins; this can be accomplished by editing the --httpPort argument in jenkins.xml. By default, Jenkins will use 256MB, as you can see in jenkins.xml. To allocate more memory, simply adjust the corresponding argument; for example, let's give it 8192MB. You also may want to adjust the perm zone or other JVM memory characteristics by adding more arguments. Please find more Jenkins parameters here. Winstone is part of Jenkins; therefore, you can take advantage of settings such as --handlerCountStartup (set the number of worker threads to spawn at startup; default, 5) or --handlerCountMax (set the max number of worker threads to allow; default, 300). A sketch of these jenkins.xml settings follows.
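As an illustration (treat this as a sketch; the exact file generated by your installer may differ slightly), the relevant settings live in the arguments element of jenkins.xml in the installation folder:

```xml
<!-- jenkins.xml (Windows service wrapper); illustrative values only.
     -Xmx raises the heap to 8192MB; -XX:MaxPermSize adjusts the perm zone
     (pre-Java 8 JVMs only); --httpPort moves Jenkins off port 8080. -->
<arguments>-Xrs -Xmx8192m -XX:MaxPermSize=512m
  -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle
  -jar "%BASE%\jenkins.war" --httpPort=8081
  --handlerCountStartup=5 --handlerCountMax=300</arguments>
```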
Remember that when we installed Jenkins, we chose the default set of plug-ins. Moreover, remember that we said that Jenkins allows us to install more plug-ins later from the dashboard. Well, it is time to see how to deal with Jenkins plug-ins. To see what plug-ins are installed in your Jenkins instance, simply select the Manage Jenkins | Administration plugins | Installed tab. See Figure 15: Figure 15: See the installed plug-ins Installing a new plug-in is pretty simple. Select the Manage Jenkins | Administration plugins | Available tab. Locate the desired plug-in(s) (notice that Jenkins will provide a huge list of plug-ins, so you had better use the search filter feature), tick the desired plug-in(s), and click one of the available options listed at the bottom of the page. Jenkins will do the rest for you. See Figure 16: Figure 16: Install a new plug-in For example, later in this article we will need to instruct Jenkins to deploy the application WAR on a Payara Server. To accomplish this, we can install a plug-in named Deploy Plugin. So, in the Available tab, we have used the filter feature and typed deploy. This will bring us, on screen, the plug-in as in Figure 17. (If you don't use the filter, you will have to manually search through hundreds of available plug-ins, which will be time-consuming.) Therefore, simply tick it and install it without restart: Figure 17: Install Deploy plug-in After installation, this plug-in will be listed under the Installed tab. Before defining a job for Jenkins, it is a good practice to take a look at the global tool configuration (Manage Jenkins | Global Tool Configuration). Depending on what types of jobs you want to run, Jenkins needs to know where to find additional tools, such as the JDK, Git, Gradle, Ant, Maven, and so forth. Each of these tools can be installed automatically by Jenkins once you tick the Install automatically checkbox. For example, in Figure 18, you can see that Jenkins will install Maven automatically: Figure 18: Install Maven under Jenkins But, if you already have Maven installed locally, you can un-tick the Install automatically checkbox and instruct Jenkins where to find Maven locally via the MAVEN_HOME environment variable. Either way, you have to specify a name for this Maven installation. For example, type Maven as the name and keep this in mind, because you will need it later. Each tool can be installed automatically, or you can simply instruct Jenkins where to find it locally via environment variables (for the JDK, JAVA_HOME; for Git, GIT_HOME; for Gradle, GRADLE_HOME; for Ant, ANT_HOME; and for Maven, MAVEN_HOME). Moreover, each tool needs a name used to identify it and refer to it later when you start defining jobs. This is useful when you have multiple installations of the same tool. In case a required variable is not available, Jenkins will show this via an error message. For example, let's say that we decided to instruct Jenkins to use the local Git distribution, but we don't have GIT_HOME set; here is what Jenkins will report: Figure 19: Install Git under Jenkins This means that we need to set GIT_HOME accordingly or choose the Install automatically option. Once you set GIT_HOME, the error will disappear. So, before assigning jobs to Jenkins, take your time and ensure that you have successfully accomplished the global tool configuration. This is a very important aspect! Because this is the first Jenkins job, we will keep it very simple. Practically, we will implement a simple CI project for a Hello World Spring application. This application is available here. Don't worry if you don't know Spring; it is not mandatory! Furthermore, you have to link the repository (you can fork this repository) to your favorite IDE (for example, NetBeans, Eclipse, and so on) in such a way that you can easily push changes to GitHub. How you can accomplish this is beyond this article's goal, but if you choose NetBeans, you can find the instructions here. So, we are supposing that you have Jenkins installed/configured and the application opened in your favorite IDE and linked to GitHub. The next thing to do is to install Payara Server with its default settings and start it. By default, it should start on port 8080 with admin capabilities on port 4848. Our next goal is to obtain the following automation: at each three-minute interval, Jenkins will take the code from GitHub, compile it, and deploy the resulting WAR on Payara Server.
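The Schedule field used for the Build periodically option (configured below) accepts a cron-like syntax; an every-three-minutes entry might look like this (a sketch, not the article's exact listing):

```
# Build Triggers > Build periodically > Schedule
# MINUTE HOUR DOM MONTH DOW; H/3 spreads the load within each three-minute slot
H/3 * * * *
```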
Open Jenkins in a browser and click New Item or Create a new job, as in Figure 20: Figure 20: Create a new job in Jenkins As you will see, there are several types of jobs (projects) available. We will choose the most popular one, the freestyle project, and we will name it HelloSpring: Figure 21: Select a job type and name it After you press the OK button, Jenkins will open the configuration panel for this type of job. First, we will provide a simple description of the project, as in Figure 22 (this is optional): Figure 22: Describe your new job Because this is a project hosted on GitHub, we need to inform Jenkins about its location. For this, on the General tab, tick the GitHub project checkbox and provide the project URL (without the tree/master or tree/branch part): Figure 23: Set the project URL The next step consists of configuring the Git repository that contains our application, in the Source Code Management tab. This means that we have to tick the Git checkbox and specify the repository URL, the credentials used for access, and the branches to build, as in Figure 24: Figure 24: Configure Git repository Further, let's focus on the Build Triggers tab. As you can see, Jenkins provides several options for choosing the moment when the application should be built. Most probably, you will want to choose the Build when a change is pushed to GitHub option, but for this we need to have a Jenkins instance visible on the Internet. This is needed for GitHub, which will use a webhook to inform Jenkins whenever a new commit is available. You also may go for the Poll SCM option, which periodically checks for changes before triggering any build; the build will be triggered only when changes against the previous version are detected. But, for now, we go for the Build periodically option, which will build the project periodically without checking for changes. We set this cron service to run every three minutes, as sketched earlier: Figure 25: Build project periodically The schedule can be configured based on the instructions provided by Jenkins if you press the little question mark icon listed at the right of the Schedule section. By the way, don't hesitate to use those question marks whenever they are available, because they provide really useful information. To build the project, Jenkins needs to know how to do it. Our application is a simple Maven Web application, and pom.xml is in the root of the application. So, on the Build tab, select the Invoke top-level Maven targets option from the Add build step drop-down. Furthermore, instruct Jenkins about the Maven distribution (remember that we configured a Maven instance under the name Maven earlier, in the Global Tool Configuration section) and about the goals you want to be executed (for example, clean and package): Figure 26: Configure Maven distribution and goals So far, so good! Finally, if the application is successfully built, we want to delegate Jenkins to deploy it on Payara Server (remember that we installed the Deploy Plugin earlier, especially for this task). This is a post-build action that can be configured on the Post-Build Actions tab. From the Add post-build action drop-down, select the Deploy war/ear to a container item. Figure 27: Add a post-build action This will open a dedicated wizard where we have to configure at least the Payara Server location and the credentials for accessing it: Figure 28: Configure Payara Server for deployment Click the Save button and everything is done.
Jenkins will report the new job on the dashboard: Figure 29: The job was set Now, you can try to fire a manual build or wait for the cron to run the build every three minutes. For a manual build, simply click the project name from Figure 29, and then Build Now, as in Figure 30: Figure 30: Running a build now Each build is listed under the Build History section and can be in one of three stages: in progress, success, or failure. You can easily identify the status of your builds: Figure 31: Build status Most probably, if the build failed, you will want to see what just happened. For this, simply click the specific build and afterwards Console Output, as in Figure 32: Figure 32: Check build output In our case, all you have to do is provide write access to the C:\Windows\Temp folder via the Properties wizard: Figure 34: Providing access for writing to the folder If the build is successfully accomplished, the application is deployed on Payara Server and is available on the Applications tab of the admin console. From there, you can easily launch it, as in Figure 35: Figure 35: Run the application It looks like our small project works like a charm! Further, make some modifications in the application, push them to GitHub, wait for the Jenkins cron to run, and notice how the modifications are reflected in your browser after a refresh. To take the project further, you can try to add more functionality, like JIRA integration, a GitHub webhook, and the like. 2016-07-02 00:00 Leonard Anghel

23 Cross-field Validation in JSF You have to be aware that, from a Java/JSF perspective, there are several limitations to using Bean Validation: JSR 303. One of them involves the fact that JSF cannot validate class or method level constraints (so-called cross-field validation), only field constraints. Another one consists of the fact that the standard tag allows validation control only on a per-form or a per-request basis, not on a per-UICommand or per-UIInput basis. In order to achieve more control, you have to be open to writing boilerplate code and shaping custom solutions that work only in specific scenarios. In this article, we will have a brief overview of three approaches for achieving cross-field validation using JSF core and external libraries: the approaches provided by OmniFaces, PrimeFaces, and the upcoming JSF 2.3. Let's suppose that we have a simple form that contains two input fields representing the name and the e-mail of a Web site member or admin. Next to these inputs, we have two buttons, one with the label Contact Member and another one with the label Contact Admin. When the user clicks the first button, he will "contact" the specified Web site member, and when he clicks the second button, he will "contact" the specified Web site admin. For a Web site member/admin, the name input should not violate any of the constraints defined in a group named MemberContactValidationGroup. Moreover, for a Web site member/admin, the email input should not violate any of the constraints defined in the AdminContactValidationGroup group. Even more, we have a constraint over email in the default group (applicable to members and admins). Next, we should attach these constraints to the name and email inputs, but we need each button to validate only its corresponding groups. Finding a solution based on the standard tags will end up in some boilerplate code, because it will require a "bunch" of tags, EL expressions, conditions, server-side code, and so forth. Most likely, at the end, our form will look like a total mess. Another approach is to redesign the application and use two forms, one for members and one for admins. Further, let's suppose that the provided email should always start with the name (getEmail().startsWith(getName())). This is basically a cross-field constraint that can be applied via a class level constraint. But, JSF doesn't support this kind of constraint, so you have to provide another solution (not related to Bean Validation), like placing the validation condition in the action method, or in the getters (if there is no action method). Multiple components can be validated by using an f:event listener with postValidate, or, if you need to keep the validation in the Process Validations phase, a JSF custom validator. The features brought by OmniFaces via the o:validateBean tag are exactly what we need to solve our use case. Although the standard tag only allows validation control on a per-form or a per-request basis, o:validateBean allows us to control bean validation on a per-UICommand or per-UIInput component basis, so we can obtain the desired functionality as in the sketch below. Listing 1: The complete application is named ValidateBean_1.
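Here is a sketch of that per-command control (namespace declarations omitted); the bean name, action methods, and group package below are assumptions based on the prose, not the article's exact listing:

```xhtml
<h:form>
    <h:inputText id="name" value="#{contactBean.name}" />
    <h:inputText id="email" value="#{contactBean.email}" />
    <!-- Each button validates only the group(s) listed on its nested tag. -->
    <h:commandButton value="Contact Member" action="#{contactBean.contactMember}">
        <o:validateBean validationGroups="com.example.MemberContactValidationGroup" />
    </h:commandButton>
    <h:commandButton value="Contact Admin" action="#{contactBean.contactAdmin}">
        <o:validateBean validationGroups="com.example.AdminContactValidationGroup" />
    </h:commandButton>
</h:form>
```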
By default, OmniFaces comes with an interface (Copier) that is to be implemented by classes that know how to copy an object, and it provides four implementations (strategies) of it. Besides these four implementations (strategies), OmniFaces comes with another one, named MultiStrategyCopier, which basically defines the order of applying the above copy strategies: CloneCopier, SerializationCopier, CopyCtorCopier, NewInstanceCopier. When one of these strategies obtains the desired copy, the process stops. If you already know the strategy that should be used, or you have your own Copier strategy (for example, a partial object copy strategy), you can explicitly specify it via the copier attribute (for example, copier="org.omnifaces.util.copier.CopyCtorCopier"). In the OmniFaces Showcase, you can see an example that uses a custom copier. Moreover, you can find more details about Copier on the OmniFaces Utilities ZEEF page, in the OmniFaces Articles block, and in the Copy Objects via OmniFaces Copier API article. Now, let's focus on our cross-field validation: getEmail().startsWith(getName()). To obtain a class level constraint based on this condition, we need to follow several steps: 1. Wrap this constraint in a custom Bean Validation validator (for example, ContactValidator). 2. Define a proper annotation for it (for example, ValidContact, used as @ValidContact). 3. Annotate the desired bean (optionally, add it to a group or groups). 4. Use <o:validateBean> to indicate the bean to be validated via the value attribute (a javax.el.ValueExpression that must evaluate to a java.lang.Object), and the corresponding groups (this is optional). Additionally, you can specify the actual bean validation via the method attribute, and a Copier via the copier attribute: Listing 2: The complete application is named ValidateBean_2. As you probably know, PrimeFaces comes with very useful support for client side validation based on the JSF validation API and Bean Validation. In this post, we will focus on Bean Validation, and say that this can be successfully used as long as we don't need cross-field validation or class level validation. This means that the validation constraints placed at the class level will not be recognized by PrimeFaces client side validation. In this post, you can see a rather custom solution, but one that is fast to implement, for obtaining cross-field client side validation for Bean Validation using PrimeFaces. We have a user contact made of a name and an e-mail, and our validation constraint is: the e-mail must start with the name (for example, a name of nick and an e-mail starting with nick): To accomplish this task, we will slightly adapt the PrimeFaces custom client side validation mechanism. First, we create a ValidContact annotation: Further, in our bean we annotate the proper fields (name and email) with this annotation—we need to do this to indicate the fields that enter into cross-field validation; so, annotate each such field: Now, we write the validator. Here, we need to keep the name until the validator gets the e-mail also. For this, we can use the faces context attributes, as below: Now, we have to accomplish the client-side validation. Again, notice that we store the name into an array (you can add more fields here) and wait for the e-mail: Finally, we ensure the presence of a ClientValidationConstraint implementation: Done! The complete application is named PFValidateBeanCrossField.
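Both the OmniFaces and the PrimeFaces approaches above revolve around a custom @ValidContact constraint. As a rough sketch of what such a class-level Bean Validation constraint might look like (the class names mirror the ones mentioned above, but the exact code in the sample applications may differ, and a Contact bean with getName()/getEmail() getters is assumed):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import javax.validation.Constraint;
    import javax.validation.ConstraintValidator;
    import javax.validation.ConstraintValidatorContext;
    import javax.validation.Payload;

    // The class-level annotation (step 2); placed on the Contact bean (step 3).
    @Constraint(validatedBy = ContactValidator.class)
    @Target(ElementType.TYPE)
    @Retention(RetentionPolicy.RUNTIME)
    public @interface ValidContact {
        String message() default "E-mail must start with the name";
        Class<?>[] groups() default {};
        Class<? extends Payload>[] payload() default {};
    }

    // The validator wrapping the cross-field condition (step 1).
    class ContactValidator implements ConstraintValidator<ValidContact, Contact> {
        @Override
        public void initialize(ValidContact annotation) {
        }

        @Override
        public boolean isValid(Contact contact, ConstraintValidatorContext ctx) {
            if (contact == null || contact.getName() == null || contact.getEmail() == null) {
                return true; // leave null handling to @NotNull and friends
            }
            return contact.getEmail().startsWith(contact.getName());
        }
    }

Because the constraint sits on the type rather than on a field, plain JSF will ignore it; that is exactly the gap that <o:validateBean> (and, below, JSF 2.3) fills.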
JSF 2.3 will come with a new tag named <f:validateWholeBean>. As its name suggests, this tag enables class level validation. The tag contains two important attributes, value and validationGroups (but more will be added). Validation is performed on a temporary copy of the bean referenced by the value attribute. Here is a brief example to ensure that the provided name and e-mail fields (contacts) are individually valid and also that the e-mail starts with that name (for example, a name of nick with an e-mail starting with nick is valid). The complete application is named JSF23ValidateWholeBeanExample (I've tested with Mojarra 2.3.0-m04 under Payara 4). You can download the sample code from here. 2016-07-02 00:00 Anghel Leonard

24 Top 10 Reasons to Get Started with React.JS By Andrew Allbright React is a popular framework used by large enterprise ventures and by small lone developers to create views with complicated relationships in a modular fashion. It provides just enough structure to allow for flexibility, yet enough railing to avoid common pitfalls when creating applications for the Web. In the style of a top 10 list, I will describe reasons why you should choose this framework for your next project. One of the reasons React became so popular is its video game-inspired rendering system. The basics of the system are minimizing DOM interactions by batching updates, using an in-memory virtual DOM to calculate differences, and relying on immutable state. One thing to note is that this approach ran counter to the trends of other JavaScript frameworks at the time. Angular 1, Ember, Knockout, and even jQuery were concerned with data binding to elements on the page. However, it turns out that dirty checking of two-way data bindings produces exponentially more calculations than one-way binding as you add more elements to the page. Angular 2 has since abandoned dirty checking and two-way bindings for a more React-like approach. The short list of lifecycle methods makes this framework one of the easiest to understand. In fact, it wouldn't be unheard of to become proficient with this entire library in under a day. This can be attributed to the "always rerender" nature of each view and how it accommodates state or property changes to its view. To emphasize this point, look at all you need to define a simple React component... Your render function lends itself to terser, more immutable, functional programming that has become trendy in the JavaScript community with ES2015 and ES2016. It may seem obvious today, but when React.JS was initially introduced into the JavaScript world, the idea of tightly coupling your view definition with the logic that controls it was controversial. React was released into a paradigm where client-side copies of traditional MVC frameworks, like those found on the server side, were very popular. Traditional MVC separates the HTML from controllers whose responsibility is to combine multiple views and marshal data into them. That literally means these "concerns" were separated into their own files. The architects of React took another approach; they say the separation of HTML from JavaScript is superficial. Indeed, your HTML and JS application code are very tightly coupled, and keeping them in separate files was more a separation of technologies than a separation of concerns. Imagine trying to change class names or id tags of HTML elements in a large jQuery application. You would have to verify that none of your DOM bindings were destroyed, suggesting a close relationship between the two. That's where JSX comes into the mix. By putting your component logic within the same file as the view it is operating on, it makes the module easier to reason about, and the best part is you can leverage vanilla JavaScript to express your view. React is a library that defines your view but gives you lifecycle "hooks" to make server-side requests. This is an advantage because it means once you understand how XHR requests are made, you can more easily swap the library you use to make them than in, say, BackboneJS. These hooks are state, props, componentWillMount, and componentDidMount (if you want to wait until late in the game).
How you organize multiple different XHR interactions is largely up to you. Common patterns include the one I've just described, or Redux. Although React is curated by the developers at Facebook, it is very much a community-driven library. Viewing the GitHub issue tracker and PRs, you get a sense that the developers who deputize themselves to maintain this framework find joy in sharing code and getting into sometimes heated debate. This is an advantage for your project because you can ensure you will get code that has been vetted by passionate developers. In fact, community trends inspire the architects as much as Facebook inspires the community. Redux has all but taken over Flux as a collection of libraries to create larger scale applications, and it was created by someone for a conference demo. Facebook has since embraced it as one of the best options for developers to get started with. This is not a unique attribute among JavaScript frameworks, but React is one of the more popular libraries that is written in pure JavaScript. Plus, it's always fun to see who has been recognized when Facebook puts up its release notes. Large companies like Facebook, Netflix, and Walmart have embraced React as their library of choice for handling view-related tasks. This vote of confidence is no accident. React has a neat feature where it can detect whether or not it needs to initially render the DOM onto the page. That means if you precompiled the view in your server-side code before delivering it to the client's browser, React would be able to simply bootstrap its listeners and go from there. React provides the means to generate HTML from its syntax easily. This was intentional, to gain favor with SEO bots, which traditionally don't run JavaScript in their crawlers (or at least rank those sites worse than pregenerated ones). Compared to other frameworks, React's 43.2 KB is a good size for what you get. For comparison: Angular 2's minified size is 125 KB, Ember is 113 KB, although Knockout 3.4.0 is 21.9 KB and jQuery 3.0 is 29.8 KB. React's ecosystem is vast indeed. The way the framework has been moving is towards separating view logic from "purer" business rules. By default, you adopt this strategy. This allows you to target other platforms, such as mobile, Virtual Reality devices, TV experiences, or even generating email. The reason you should choose React for your next project is its lifecycle methods, state, and props, which provide just enough railing to create scalable applications but not enough to stifle liberal use of different libraries. Need XHR data? Use componentWillMount. Need to make a particular component look pretty using a well-known jQuery library? Well, use componentDidMount with shouldComponentUpdate or componentDidUpdate to stop DOM manipulations or restyle the element after changes easily. The point is there is just enough railing corresponding to natural component life cycles within the page to make a great deal of sense to developers of any experience level, but not enough that there is one "React" way of doing things. It is very versatile in that way. Now that you've read this list, I hope I've inspired you to find a React boilerplate repo and get started on a new project. React is fun to work with and, as I've laid out, there are so many reasons why you should choose this framework over others. 2016-07-02 00:00 Andrew Allbright

25 Stream Operations Supported by the Java Streams API The Stream API is one of the most sophisticated additions to Java. It is mainly used in association with the collection framework. Sometimes, confusion arises in its usage due to this association, primarily because a stream inherently resembles a collection and can be better understood when compared with one. A stream is a sequence of data elements, and a generic collection is likewise a data structure that acts as a container for data elements. They are complementary and often used together in Java code and, for the sake of understanding, we'll compare and contrast them head on. This article takes on the concept of streams from a comparative perspective to illustrate some of their usage in regular Java programming. A stream basically is a sequence of data elements that supports sequential and parallel aggregate operations. Aggregate operations include computing the sum of the integer elements in a stream or mapping a stream's elements according to their string length, and so forth. A stream supports two types of operations, distinguished by the way they pull data elements from the data source: one is called lazy or intermediate, and the other is called eager or terminal. (Intermediate operations are lazy; evaluation is triggered only when an eager terminal operation is invoked.) Lazy pulling means data elements are not fetched until required, and the strategy of a stream is, by default, lazy. Lazy operations are particularly suitable for working with huge streams of data because they are minimal on memory usage. Eager operations are best for responsiveness but use a lot of memory: for a faster response, data elements must be made available in memory up front. As a result, eager processing may be fast but is unsuitable for operating on huge sequences of data elements. Although streams may seem somewhat similar to collections, there are many significant differences. Unlike collections, streams are not built to store data. They are used on demand to pull data elements from the data source and pass them down the pipeline for further processing. A collection, on the other hand, is an in-memory data structure; data elements must exist in memory to execute an add or fetch operation on it. In a way, a stream is more concerned with the flow of data, rather than the storage of data, which is the main idea of collections. Because streams pull data on demand, it is possible to represent a sequence of infinite data elements. For example, stream operations can be plugged into a data source that generates infinite data, such as a stream arriving over an I/O channel. A collection, on the other hand, is limited by its use of an in-memory store. Both streams and collections operate on a number of elements, so the requirement of iteration is obvious. A stream is based on internal iteration; a collection is based on external iteration. Let's illustrate this with the help of an example. A simple aggregate operation on a collection can be as follows: The code uses a for-loop to iterate over the list of elements. Observe the following code. There is no external iteration; iteration is applied internally and, surprisingly, the stream operation can be written with marked brevity by using a lambda. The sequential structure of external iteration is not suitable for parallel execution through multiple threads. Java provides an excellent Fork/Join framework library to leverage modern multi-core CPUs.
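To make the external-versus-internal iteration contrast concrete (and to preview the parallel execution that Fork/Join enables under the hood), here is a minimal, self-contained sketch; the list contents and the filter/multiply/sum pipeline are illustrative assumptions:

    import java.util.Arrays;
    import java.util.List;

    public class SumDemo {
        public static void main(String[] args) {
            List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6);

            // External iteration: we control the loop ourselves.
            int sum = 0;
            for (int n : numbers) {
                if (n % 2 == 0) {
                    sum += n * n;
                }
            }

            // Internal iteration: the stream drives the traversal.
            int streamSum = numbers.stream()
                                   .filter(n -> n % 2 == 0)
                                   .map(n -> n * n)
                                   .reduce(0, Integer::sum);

            // Parallel version: the same pipeline, run on multiple threads.
            int parallelSum = numbers.parallelStream()
                                     .filter(n -> n % 2 == 0)
                                     .map(n -> n * n)
                                     .reduce(0, Integer::sum);

            System.out.println(sum + " " + streamSum + " " + parallelSum); // 56 56 56
        }
    }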
But, using the Fork/Join framework directly is not that simple, especially for beginners. Streams, however, simplify some parallel execution functionality; the preceding code can be written as follows for parallel execution. In this scenario, the stream is not only using internal iteration but also using multiple threads to do the filtering, multiplication, and summing operations in parallel. Using internal iteration, however, does not mean streams cannot be iterated externally. The following external iteration is equally possible. Stream-related classes and interfaces can be found in the java.util.stream package. They are hierarchically arranged in the following manner. Figure 1: Arrangement of the java.util.stream package All stream sub-interfaces have BaseStream as their base interface, which in turn extends AutoCloseable. Also, the package contains two classes—Collectors and StreamSupport—along with a few builder interfaces, such as DoubleStream.Builder, IntStream.Builder, LongStream.Builder, and Stream.Builder, and a Collector interface. In practice, streams are rarely used without a collection as their data source. They mostly go hand-in-hand in Java programming. Refer to the Javadoc for an elaborate description of each of the interfaces and classes in the hierarchy. The iterate() method of a stream is quite versatile and can be used in many ways. The method takes two arguments: a seed and a function. The seed is the first element of the stream; the second element is obtained by applying the function to the first element, and so on. Suppose we want to find the first five odd natural numbers; we may write the following: This will print 1 3 5 7 9. If we want to skip the first five and then print the next five odd natural numbers, we may write the code as: Now, it will print 11 13 15 17 19. We can generate some random numbers with the generate() method as follows: If we want a random integer, we may write: Java 8 has introduced many classes that return their content as a stream representation. The chars() method in CharSequence is an example. This prints the first character of each word in the sentence: PEMDAS. Streams can be directly obtained from the Arrays class as follows (see Beginning Java 8 Language Features, Kishori Sharan, Apress). This is just the beginning; there can be numerous such examples. The Stream API is excellent at minimizing the toil of programming and of reinventing the same wheel. When the API is accompanied by lambdas, the result is strikingly terse; many lines of code can be compressed into a single line. This article is an attempt to give a glimpse of what the Stream API is about and was never intended to be comprehensive guidance. Future articles will explore many of its practical usages in more detail. 2016-07-02 00:00 Manoj Debnath

26 Exploring the Java String Tokenizer String tokenization is a process where a string is broken into several parts. Each part is called a token. For example, if "I am going" is a string, the discrete parts—such as "I", "am", and "going"—are the tokens. Java provides ready-made classes and methods to implement the tokenization process. They are quite handy for conveying specific semantics or contextual meaning to individual parts of a string. This is particularly useful for text processing where you need to break a string into several parts and use each part as an element for individual processing. In a nutshell, tokenization is useful in any situation where you need to decompose a string into individual parts; something to achieve with the part-for-the-whole and whole-for-the-part concept. This article provides information for a comprehensive understanding of the background concepts and their implementation in Java. A token, or an individual element of a string, can be filtered during extraction, meaning we can define the semantics of a token when extracting discrete elements from a string. For example, in a string such as "Hi! I am good. How about you? ", sometimes we may need to treat each word as a token or, at other times, a set of words collectively as a token. So, a token basically is a flexible term and is not necessarily meant to be an atomic part, although it may be atomic according to the discretion of the context. For example, the keywords of a language are atomic according to the lexical analysis of the language, but they may typically be non-atomic and convey a different meaning in a different context. The tokens are: Now, if we change the code to the following: The tokens are: Observe that the StringTokenizer class contains three constructors, as follows (refer to the Java API Documentation). When we create a StringTokenizer object with the second constructor, we can define a delimiter to split the tokens as per our need. If we do not provide one, space is taken as the default delimiter. In the preceding example, we have used "." (the dot/full-stop character) as a delimiter. Note that the delimiting character itself is not taken into account as a token. It is simply used as a token separator without itself being a part of the token. This can be seen when the tokens are printed in the example code above; observe that "." is not printed. So, in a situation where we want to control whether to count the delimiter character also as a token or not, we may use the third constructor. This constructor takes a boolean argument to enable/disable the delimiter character as a part of the token. We also can provide a delimiting character later, while extracting tokens, with the nextToken(String delim) method. We may also use delimiter characters such as " ", "\n", "\r", and "\f" to mean space, newline, carriage return, and form-feed, respectively. Accessing individual tokens is no big deal. StringTokenizer contains six methods for traversing the tokens. They are quite simple. Refer to the Java API Documentation for details about each of them. The split method defined in the String class is more versatile in the tokenization process. Here, we can use a Regular Expression to break up strings into basic tokens. According to the Java API Documentation: "StringTokenizer is a legacy class that is retained for compatibility reasons although its use is discouraged in new code. It is recommended that anyone seeking this functionality use the split method of String or the java.util.regex package instead."
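Before moving on to split, here is a minimal sketch of the constructors in action, using the sample sentence from earlier (the wrapping class and the output noted in the comments are illustrative):

    import java.util.StringTokenizer;

    public class TokenDemo {
        public static void main(String[] args) {
            String text = "Hi! I am good. How about you?";

            // Default delimiter: whitespace.
            // Prints: Hi! / I / am / good. / How / about / you?
            StringTokenizer st = new StringTokenizer(text);
            while (st.hasMoreTokens()) {
                System.out.println(st.nextToken());
            }

            // "." as the delimiter; the third argument (true) asks for the
            // delimiter itself to be returned as a token, too.
            // Prints: Hi! I am good / . /  How about you?
            StringTokenizer st2 = new StringTokenizer(text, ".", true);
            while (st2.hasMoreTokens()) {
                System.out.println(st2.nextToken());
            }
        }
    }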
" The preceding example with StringTokenizer can be rewritten with the string split method as follows: Output: To extract the numeric value from the string below, we may change the code as follows with regular expression. As we can see, the strength of the split method of the String class is in its ability to use Regular Expression. We can use wild cards and quantifiers to match a particular pattern in a Regular Expression. This pattern then can be used as the delimitation basis of token extraction. Java has a dedicated package, called java.util.regex , to deal with Regular Expression. This package consists of two classes, Matcher and Pattern , an interface MatchResult , and an exception called PatternSyntaxException. Regular Expression is quite an extensive topic in itself. Let's not deal with is here; instead, let's focus only on the tokenization preliminary through the Matcher and Pattern classes. These classes provide supreme flexibility in the process of tokenization with a complexity to become a topic in itself. A pattern object represents a compiled regular expression that is used by the Matcher object to perform three functions, such as: For tokenization, the Matcher and Pattern classes may be used as follows: Output: String tokenization is a way to break a string into several parts. StringTokenizer is a utility class to extract tokens from a string. However, the Java API documentation discourages its use, and instead recommends the split method of the String class to serve similar needs. The split method uses Regular Expression. There are a classes in the java.util.regex package specifically dedicated to Regular Expression, called Pattern and Matcher. The split method, though, uses Regular Expression; it is convenient to use the Pattern and Matcher classes when dealing with complex expressions. Otherwise, in a very simple circumstance, the split method is quite convenient. 2016-07-02 00:00 Manoj Debnath

27 Streamline Your Understanding of the Java I/O Stream The Java I/O stream library is an important part of everyday programming. The stream API is overwhelmingly rich, replete with interfaces, objects, and methods to support almost every programmer's needs. In catering to every need, the stream library has become a large collection of methods, interfaces, and classes, with a recent extension into a new package called NIO.2 (New I/O version 2). It is easy to get lost among the stream implementations, especially for a beginner. This article shall try to provide some clues to streamline your understanding of the I/O stream APIs in Java. Stream literally means continuous flow, and I/O stream in Java refers to the flow of bytes between an input source and an output destination. The source or destination can be anything that contains, generates, or consumes data. For example, it may be a peripheral device, a network socket, a memory structure like an array, disk files, or other programs. After all, bytes are bytes; reading data sent from a server network stream is no different than reading a local file. Similar is the case for writing data. The intriguing part of Java I/O is its unique approach, very different from how I/O is handled in C or C++. Although the data type may vary along with the I/O endpoints, the fundamental approach of the input and output stream methods is the same throughout the Java APIs. There will always be a read method for an input stream and a write method for an output stream. After the stream object is created, we can almost ignore the intricacies involved in realizing the exact details of I/O processing. For example, we can chain filter streams to either an output stream or an input stream and modify the data in the process of a subsequent read or write operation. The modification can be applying encryption or compression, or simply providing methods to convert data into other formats. Readers and writers, for example, can be chained to an input or output stream to realize character streams rather than bytes. Readers and writers can handle a variety of character encodings, such as multi-byte Unicode characters (UTF-8). Thus, a lot goes on behind the scenes, even in a seemingly simple I/O flow from one end to another. Implementing all this from scratch is by no means simple and needs to go through the rigor of extensive coding. The Java stream APIs handle these complexities, giving developers an open space to concentrate on their productive ends rather than brainstorm on the intricacies of I/O processing. One just needs to understand the right use of the API interfaces, objects, and methods, and let the API handle the intricacies on one's behalf. The classes defined in the java.io package implement Input/Output Stream, File, and Serialization. File is not exactly a stream, but stream operations are the means to achieve file handling. File actually deals with file system manipulation, such as read/write operations, manipulating file properties, disk access, permissions, subdirectory navigation, and so forth. Serialization, on the other hand, is the process of persisting Java objects onto a local or remote machine. Complete delineation is out of the scope of this article; instead, here we focus only on the I/O streaming part. The base classes for I/O streaming are the abstract classes InputStream and OutputStream; these classes are extended to add functionality. They can be categorized intuitively as follows.
Figure 1: The Java IO Stream API Library Byte Stream classes are mainly used to handle byte-oriented I/O. They are not restricted to any particular data type, though, and can be used with any data, including binary data. The data is translated into 8-bit bytes for I/O operations. This makes byte stream classes suitable for I/O operations where a specific data type does not matter and the data can be dealt with in binary form as well. Byte Stream classes are mainly used in network I/O, such as socket or binary file operations, and so on. There are many Byte Stream classes in the library; all are extensions of the abstract class InputStream for input streaming or OutputStream for output streaming. Examples of the concrete implementations of byte stream classes are: Character Stream classes deal with Unicode characters rather than bytes. Sometimes the character sets used locally are different, non-Unicode sets. Character I/O automatically translates a local character set to Unicode during I/O operations without extensive intervention from the programmer. Using Character Streams is safe for future upgrades to support internationalization, even though the application may use a local character set such as ASCII. The character stream classes make the transformation possible with very little recoding. Character stream classes are derived from the abstract classes Reader and Writer. For example, the character stream classes that handle the translation of characters to bytes, and vice versa, are: Sometimes, data needs to be buffered between I/O operations. For example, an I/O operation may trigger a slow operation like a disk access or some network activity. Such expensive operations can bring down the overall performance of the application. To reduce this bottleneck, the Java platform implements buffered (buffer = memory area) I/O streams. On invocation of an input operation, the data first is read from the buffer. If no data is found, a native API is called to fetch the content from an I/O device. Calling a native API is expensive, but if the data is found in the buffer, access is quick and efficient. Buffered streams are particularly suitable for I/O access dealing with huge chunks of data. Data streams are particularly suitable for reading and writing primitive data to and from streams. The primitive data type values can be a String or int, long, float, double, byte, short, boolean, and char. The direct implementation classes for Data I/O streams are DataInputStream and DataOutputStream, which implement the DataInput and DataOutput interfaces apart from extending FilterInputStream and FilterOutputStream, respectively. As the name suggests, Object Stream deals with Java objects. That means, instead of dealing with primitive values like Data Stream objects, Object Stream performs I/O operations on objects. Primitive values are atomic, whereas Java objects are composite by nature. The primary interfaces for Object Stream are ObjectInput and ObjectOutput, which are basically extensions of the DataInput and DataOutput interfaces, respectively. The implementation classes for Object Stream are as follows. Because Object Stream is closely associated with serialization, the ObjectStreamConstants interface provides several static constants as stream modifiers for the purpose. Refer to the Java Documentation for specific examples of each stream type. Following is a rudimentary hierarchy of Java IO classes. Figure 2: A rudimentary hierarchy of Java IO classes Input stream classes are derived from the abstract class java.io.InputStream.
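As a quick illustration of how these categories combine in practice, here is a minimal sketch of a chained stream; the file name is hypothetical, and the particular chain (file bytes, decompression, character decoding, buffering) is just one plausible arrangement:

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPInputStream;

    public class ChainDemo {
        public static void main(String[] args) throws IOException {
            // Chain: raw file bytes -> GZIP decompression -> UTF-8 decoding -> buffering.
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(
                            new GZIPInputStream(
                                    new FileInputStream("data.txt.gz")), // hypothetical file
                            StandardCharsets.UTF_8))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }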
The basic operations of InputStream are as follows: All output stream classes are extensions of the abstract class java.io.OutputStream. It contains the following variety of operations: It may seem overwhelming at the beginning, but observe that no matter which extension classes you use, you'll end up using these methods for I/O streaming. For example, ByteArrayOutputStream is a direct extension of the OutputStream class; you will use these methods to write into an extensible array. Similarly, FileOutputStream writes onto a file, but internally it uses native code, because "File" is a product of the file system, and it depends completely upon the underlying platform how a file is actually maintained. For example, Windows has a different file system than Linux. Observe that both OutputStream and InputStream provide a raw implementation of the methods. They do not bother about the data formats we want to use. The extension classes are more specific in this matter. It may happen that the supplied extension classes are also insufficient for our needs. In such a situation, we can create our own customized stream classes. Remember, the InputStream and OutputStream classes are abstract, so they can be extended to create a customized class and give a new meaning to the read and write operations. This is the power of polymorphism. Filter streams, such as PushbackInputStream and the other FilterInputStream and FilterOutputStream extensions, provide customized implementations within the stream lineage. They can be chained so that data passes from one filtered stream to the next along the chain. For example, an encrypted, compressed network stream can be chained to a BufferedInputStream, decrypted through a CipherInputStream, decompressed through a GZIPInputStream, and then wrapped in an InputStreamReader to ultimately realize the actual data. Refer to the Java API documentation for specific details on the classes and methods discussed above. The underlying principles of the stream classes are undoubtedly complex, but the interface surfaced through the Java API is simple enough to let you ignore the underlying details. Focus on these four classes: InputStream, OutputStream, Reader, and Writer. This will help you get a grip on the APIs initially; then use a top-down approach to learn their extensions. I suppose this is the key to streamlining your understanding of the Java I/O stream. Happy learning! 2016-07-02 00:00 Manoj Debnath

28 The Top Ten Ways to Be a Great ScrumMaster By Zubin Irani, CEO of cPrime Having worked in the Agile world for more than 10 years now, I have seen teams succeed, fail, and everything in between—often largely based on the competency of the ScrumMaster and his or her ability to manage teams to project completion. Following are the top ten ScrumMaster do's and don'ts, derived from watching more than 250 Agile projects. It's fun to be the expert, but that isn't the ScrumMaster's job. A good ScrumMaster lets everyone else shine and focuses on making each member of the team successful. Achieving this requires listening over speaking. No matter how obvious the problem is, investigate further before commenting. You'll be surprised how often your "obvious" conclusion was wrong—and you'll be glad you kept your mouth shut. Agile and the ScrumMaster position aren't about you. They're about the Team. Focus on serving the Team's needs, above all else. Conflict can explode with little warning. When it does, you have to find and implement an effective solution. Stay calm, and focus on the facts. Keep bringing people back to the problem, and away from placing blame and emotions. These individuals often don't know they are causing a problem, and most of the time they will pay attention and heed your guidance if they respect you. You can't let one lemon sour the Team. Talk to the Team member's manager about moving the problematic person to a more appropriate Team or position, and then talk to the Team member. Start with, "You don't seem to be happy here. It seems to me that it might work better if you…. " Most people will do it once, and learn. A second major failure gets a warning. If they hit three, it's time to move on. Use your authority in the most diplomatic way possible, but use it when the need exists. You are supposed to make everything work, which means you enforce the process. Do your job. When you get pushback, diplomatically remind people that some things are in your area of authority, and you are making the call. It's important to be objective and direct with team members, and friendship can influence how you respond or make decisions, which may create resentment among other team members. Build relationships with other ScrumMasters and Agile practitioners to provide an outlet, as well as to get unbiased input to help you tackle tough problems. You have responsibilities to your Team, to your company, to your customers, and to your conscience. There is no rule book to fall back on when these responsibilities collide. Strive for win-win solutions when you can, and strive for the best fallback solutions when win-win isn't possible. Recognize situations where you cannot accomplish anything by pushing. Ask yourself if your self-respect is worth losing your job, and then make the right decision. 2016-07-02 00:00 www.developer

29 Serverless Architectures on AWS: Monitoring Costs By Peter Sbarski with Sam Kroonenburg for Manning Publishing This article was excerpted from the book Serverless Architectures on AWS. CloudWatch is an AWS component for monitoring resources and services running on AWS, setting alarms based on a wide range of metrics, and viewing statistics on the performance of your resources. When you begin to build your serverless system, you are likely to use logging more than any other feature of CloudWatch. It will help you track and debug issues in Lambda functions, and it's likely that you will rely on it for some time. Its other features, however, will become important as your system matures and goes to production. You will use CloudWatch to track metrics and set alarms for unexpected events. Receiving an unpleasant surprise in the form of a large bill at the end of the month is disappointing and stressful. CloudWatch can create billing alarms that send notifications if total charges for the month exceed a predefined threshold. This is useful not only to avoid unexpectedly large bills, but also to catch potential misconfigurations of your system. For example, it is easy to misconfigure a Lambda function and inadvertently allocate 1.5GB of RAM to it. The function might not do anything useful except wait for 15 seconds to receive a response from a database. In a very heavy-duty environment, the system might perform 2 million invocations of the function a month, costing a little over $743.00. The same function with 128MB of RAM would cost around $56.00 per month. If you perform cost calculations up front and have a sensible billing alarm, you will quickly realize that something is going on when billing alerts begin to come through. Follow these steps to create a billing alert: Figure 1: The preferences page allows you to manage how invoices and billing reports are received. Figure 2: It's good practice to create multiple billing alarms to keep you informed of ongoing costs. Services such as CloudCheckr can help to track costs, send alerts, and even suggest savings by analyzing the services and resources in use. CloudCheckr covers several different AWS services, including S3, CloudSearch, SES, SNS, and DynamoDB (figure 3). It is richer in features and easier to use than some of the standard AWS tools. It is worth considering for its recommendations and daily notifications. Figure 3: CloudCheckr is useful for identifying improvements to your system, but the good features are not free. AWS also has a service called Trusted Advisor that suggests improvements to performance, fault tolerance, security, and cost optimization. Unfortunately, the free version of Trusted Advisor is limited, so if you want to explore all of the features and recommendations it has to offer, you must upgrade to a paid monthly plan or access it through an AWS enterprise account. Cost Explorer (figure 4) is a useful, albeit high-level, reporting and analytics tool built in to AWS. You must activate it first by clicking your name (or the IAM user name) in the top right-hand corner of the AWS console, selecting Cost Explorer from the navigation pane, and then enabling it. Cost Explorer analyzes your costs for the current month and the past four months. It then creates a forecast for the next three months. Initially, you may not see any information, because it takes 24 hours for AWS to process data for the current month. Processing data for previous months may take even longer.
More information about the Cost Explorer is available at http://amzn.to/1KvN0g2 . Figure 4: The Cost Explorer tool allows you to review historical costs and estimate what future costs may be. The Simple Monthly Calculator is a web application developed by Amazon to help model costs for many of its services. This tool allows you to select a service on the left side of the console and then enter information related to the consumption of that particular resource to get an indicative cost. Figure 5 shows a snippet of the Simple Monthly Calculator with an estimated monthly cost of $650.00. That estimate is mainly of costs for S3, CloudFront, and the AWS support plan. It is a complex tool, and it's not without usability issues, but it can help with estimates. You can click common customer samples on the right side of the console or enter your own values to see estimates. If you take the Media Application customer sample, something that could serve as a model for 24-Hour Video, it breaks down as follows: Figure 5: The Simple Monthly Calculator is a great tool to work out estimated costs in advance. You can use these estimates to create billing alarms at a later stage. The cost of running a serverless architecture often can be a lot less than running traditional infrastructure. Naturally, the cost of each service you might use will be different, but we can have a look at what it takes to run a serverless system with Lambda and the API Gateway. Amazon's pricing for Lambda is based on the number of requests, the duration of execution, and the amount of memory allocated to the function. The first one million requests are free, with each subsequent million charged at $0.20. Duration is based on how long the function takes to execute, rounded up to the nearest 100ms. Amazon charges in 100ms increments while also taking into account the amount of memory reserved for the function. A function created with 1GB of memory will cost $0.000001667 per 100ms of execution time, whereas a function created with 128MB of memory will cost $0.000000208 per 100ms. Note that Amazon prices may differ depending on the region and that they are subject to change at any time. Amazon provides a perpetual free tier with 1 million free requests and 400,000 GB-seconds of compute time per month. This means that a user can perform a million requests and spend the equivalent of 400,000 seconds running a function created with 1GB of memory before they have to pay. As an example, consider a scenario where you have to run a 256MB function five million times a month. The function executes for two seconds each time. The cost calculation follows: The total cost of running Lambda in the above example is $35.807. The API Gateway pricing is based on the number of API calls received and the amount of data transferred out of AWS. In US East, Amazon charges $3.50 for each million API calls received and $0.09/GB for the first 10TB transferred out. Given the above example, and assuming that monthly outbound data transfer is 100GB a month, the API Gateway pricing is as follows: The API Gateway cost in this example is $26.50. The total cost of Lambda and the API Gateway is $62.307 per month. It's worthwhile to attempt to model how many requests and operations you may have to handle on an ongoing basis. If you expect 2M invocations of a Lambda function that only uses 128MB of memory and runs for a second, you will pay approximately $0.20 a month.
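To see where the totals above come from, the arithmetic works out as follows, using the prices quoted earlier (this is a worked reconstruction consistent with the stated figures):

    Lambda requests: 5,000,000 - 1,000,000 free = 4,000,000 x $0.20 per million = $0.80
    Lambda compute:  5,000,000 invocations x 2 seconds x (256MB / 1,024MB) = 2,500,000 GB-seconds
                     2,500,000 - 400,000 free GB-seconds = 2,100,000 x $0.00001667 = $35.007
    Lambda total:    $0.80 + $35.007 = $35.807

    API Gateway calls:    5,000,000 x $3.50 per million = $17.50
    API Gateway transfer: 100GB x $0.09/GB = $9.00
    API Gateway total:    $17.50 + $9.00 = $26.50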
Similarly, if you expect 2M invocations of a function with 512MB of RAM that runs for five seconds, you will pay a little more than $75.00. With Lambda, you have an opportunity to assess costs, plan ahead, and pay for only what you actually use. Finally, don't forget to factor in other services, such as S3 or SNS, no matter how insignificant their cost may seem to be. 2016-07-02 00:00 www.developer

30 15 Amazing Mobile Apps for Aspiring Designers Adobe Comp CC is a dedicated iPad app for designers who love working on layout. One of the core tasks of Comp CC is to let designers create print, Web, and mobile layouts. Again, one-tap sharing to the Adobe cloud is supported by Comp CC. The added advantage of Adobe Comp CC is its intuitive drawing gestures: even roughly drawn shapes are turned into crisp graphics. Get Adobe Comp CC. Comp CC supports vector shapes, colors, and text styles from Creative Cloud Libraries and the Adobe Toolkit. Infinite Design provides an exceptional place for creating vector graphic designs. What's more, it gives designers an opportunity to design on an unrestrictive canvas with multiple layers to work on. Looking for more liberty? Well, this app truly stands by its name—it comes with infinite canvas sizes. Get Infinite Design app for Android. Infinite Design's multiple-layer designing magnifies a designer's imagination. Paper is not a dedicated design app, but it is still much more than what a designer could ever think of. It gives you a paper tableau on your smart devices. Simple. Make notes, draw sketches, create lists. Do anything and everything. With its extensive functionality, it also matches the speed of your fingers; therefore, you can put down anything that inspires you on this paper. To effectively utilize this app for optimal output, you can use any of the available tools, like the FiftyThree Pencil, Pogo Connect Smart Pen, and Just Mobile AluPen. Get Paper for iPhone and iPad. Paper by FiftyThree offers dynamic color ranges, supporting various digital pens for sketching to perfection. SketchBook provides an intuitive space for aspiring designers to draw, sketch, and paint their imagination on a digital canvas. Its sheer brilliance in matching the real-world physical experience will dazzle you. It is possible to mimic the physical experience of using pencils, pens, markers, and brushes on paper. As far as its offerings are concerned, it comes preloaded with 10 preset brushes, synthetic pressure-sensitivity, and multiple layers of editing options. The layer editor has 3 to 16 blending modes. It also includes a tool for symmetry and transformation. Get Autodesk SketchBook for iOS and Android Devices. Autodesk SketchBook helps organize your artwork in its Gallery with multiple view options. Sketchworthy gives liberty to designers by providing a virtual notebook on their iOS devices. What makes Sketchworthy an important designing tool is its ability to capture anything from maps to Web pages, and photos, of course. Once captured, designers have the liberty to choose from the variety of papers in the paper store. Get SketchWorthy for iPhone and iPad. Bundled with this app are packs for creating blueprints, graphs, to-do lists, planners, and much more. Adobe Photoshop Sketch is a flawless, vector-based digital sketchpad with multiple uses. The strokes' scalability comes with free-hand drawing supported by 64x zoom. It allows a designer to work on finer details. Photoshop Sketch works on details, giving a platform to create complexity in an image. This picture complexity can be elevated by incorporating 10 drawing layers. Designers get the freedom to add depth and dimension. Moreover, it comes loaded with a stock of high-resolution, royalty-free images. Get Adobe Ideas for iPhone and iPad. The sure-shot delight for designers is its capability to integrate with the Adobe creative suite.
iDesign is one of the most active and precision-driven 2D vector design apps. Designers can make the best use of this app for making professional vector-based designs, including illustrations and technical drawings. iDesign comes equipped with sensor-active touch points. It gives the designer complete control over the design. At its artistic utility level, iDesign works purely with lines. It gives you plenty of options to choose from, including adding end points, ellipses, fills, colors, and transparency. Get iDesign for iOS and Android. For designers, iDesign is like a muse, as its advanced tools provide symmetry in editing options for specifics. SwatchMatic comes with an amazing utility that every designer aspires to have. Let's put it straight—SwatchMatic is for capturing, combining, and sharing the colors you adore the most. It pumps life into static designs by allowing you to select colors from various places. You simply have to put a real-world object in front of your phone camera and voilà! It lets you select and capture real-world color into the digital world. Get SwatchMatic for Android. Expressive freedom for the designer—it allows editing individual colors in the palette with easy sliders. Writing on photographs was never this easy. Gone are the days of adjusting text to fit the shape and size. The uniqueness of PathOn lies in letting you put text scrawls onto an arbitrary path. Also, it gives plenty of options to select the text style, colors, and size. Get PathOn app for iPhone and iPad. The workability is simple: just choose the picture, write text, and guide the path. Intaglio Sketchpad is a fully functional, vector-based sketchpad for all iPhones and iPads. It is the app to use if you are almost a pro designer. It comes integrated with 11 preloaded drawing tools. Intaglio explores the best of the editing scope, apart from just designing. Get Intaglio Sketchpad for iPhone and iPad. The vector editing options comprise group editing, layer editing, customizable preloaded graphics, and image morphing. Nothing can be more amazing for an aspiring artist than to bring his/her imagination onto canvas this easily. One just needs to drift their fingers briskly, and that is it. Although it seems like a fun-time app, it is possible to get the next big design idea while doodling your way through Doodle Buddy. The UI is also one of the amazing factors while using Doodle Buddy. It is possible to undo your last stroke if you require changes in the design. To start over, you just need to shake the device. Get Doodle Buddy for iOS. It is possible to connect to a network and draw along with your friends online. A boon for mobile app designers is the Marvel app. It lets you turn your design ideas into reality within just a few minutes. Marvel turns your sketches into an app demo. It is a simple, 3-step process where you just need to draw the screen on your paper, take a picture with the help of Marvel, and then just sync them. Get MarvelApp for iOS and Android. This app is a perfect gateway for aspiring designers to create their initial mobile app prototype. A very simple app with great utility, the iRuler app simply displays a virtual ruler on your device. It is an essential app for aspiring designers who want to take precise measurements on the go for real-world objects. Get iRuler App for iPhone and iPad. Designers can efficiently use their fingers to scroll this infinite ruler. LooseLeaf sets you loose with your ideas. Aspiring designers run through many ideas on a day-to-day basis.
With LooseLeaf, it becomes easy to craft a scratch anytime, anywhere, with ease. With LooseLeaf, a designer can draw diagrams dramatically faster. Moreover, it is easy to cut and crop designs with the scissors tool. Get LooseLeaf for iPhone and iPad. LooseLeaf is a no-frills design app for aspiring designers to set their hands free on this dry-erase board. What is a design if it only sits on your mobile phone? Aspiring designers need to put their designs in front of prospective clients so that the designs get wings. Behance is an intuitive app platform that allows designers to put their portfolio on the Internet and share it with people. Get Behance for iOS and Android. So, here it is, the space to put your design to best use. Make a living out of your design—get noticed. Well, these are just a few of the applications out of the infinite gamut. However, it totally depends on the designer and the utility that he/she is looking for in a mobile app. The time has come to break the norm, just as design changed its platform over a period, from paper to computers; now is the time to sail through the mobile way. Mobile technology has swept away a significant amount of the desktop and laptop market. Moreover, going with the same flow, it will not come as a surprise if aspiring designers start using mobile applications as dedicated software in the near future. This way, one can seriously save on various resources, like space and money. By Shahid Abbasi One of the most beautiful things about design is that it has the power to captivate your thoughts with the sheer vision of your eyes. Moreover, that is why we, at Design Instruct, strive to drive the designing community ahead. With every breed of a new generation comes innovation in design, and today's world is no different. Every designer aspires to create a masterpiece someday. Technology provides one of the most amazing platforms. Moreover, especially now with mobile phones, it is possible to create stunning visuals. Mobile apps offer wonderful tools to pull out your latent designing skills. I am here to share with you some of the most amazing mobile apps for designers. Shahid Abbasi is a marketing consultant with Peerbits, one of the top iPhone app development companies. He creates highly polished iOS apps and also has expertise in Android app development. Shahid likes to keep busy with his team, and to provide top-notch mobility solutions for enterprises and startups. 2016-07-02 00:00 www.developer

31 Elastic Leadership: Review the Code By Roy Osherove This article was excerpted from the book Elastic Leadership. Robert Martin (Uncle Bob) has been a programmer since 1970. He is the Master Craftsman at 8th Light, Inc., and the author of many books, including The Clean Coder, Clean Code, Agile Software Development: Principles, Patterns, and Practices, and UML for Java Programmers. He is a prolific writer and has published hundreds of articles, papers, and blogs. He served as the Editor-in-Chief of the C++ Report and as the first chairman of the Agile Alliance. Here is his advice for new software team leaders, and my feedback on it. One of the biggest mistakes that new software team leaders make is to consider the code written by the programmers as the private property of the author, as opposed to an asset owned by the team. This causes the team leaders to judge code based on its behavior rather than its structure. Team leaders with this dysfunction will accept any code so long as it does what it is supposed to do, regardless of how it is written. Indeed, such team leaders often don't bother to read the other programmers' code at all. They satisfy themselves with the fact that the system works and divorce themselves from system structure. This is how you lose control over the quality of your system. And, once you lose that control, the software will gradually degrade into an unmaintainable morass. Estimates will grow, defect rates will climb, morale will decline, and eventually everyone will be demanding that the system be redesigned. A good team leader takes responsibility for the code structure as well as its behavior. A good team leader acts as a quality inspector, looking at every line of code written by any of the programmers under their lead. A good team leader rejects a fair bit of that code and asks the programmers to improve its quality. A good team leader maintains a vision of code quality. They communicate that vision to the rest of the team by ensuring that the code they personally write conforms to the highest quality standards, and by reviewing all of the other code in the system and rejecting the code that does not meet those exacting standards. As teams grow, good team leaders will recruit lieutenants to help them with this review and enforcement task. The lieutenants review all the code, and the team leader falls back on reviewing all the code written by the lieutenants and spot-checking the code written by everyone else. Code is a team asset, not personal property. No programmer should ever be allowed to keep their code private. Any other programmer on the team should have the right to improve that code at any time. And the team leader must take responsibility for the overall quality of that code. The team leader must communicate and enforce a consistent vision of high quality and professional behavior. Speaking from the influence-forces point of view, Uncle Bob advocates that we influence the team by creating environmental rewards and punishments for writing good code (the team leader says it has to be done, or else…), which can definitely effect a positive outcome. Here's an apparent paradox, though. You'd be hard-pressed to find any team leader who disagrees with any piece of this text, and at the same time it is extremely difficult to find team leaders who actually practice what they claim to preach. This is only an apparent paradox, however. Once we look at things from the systems viewpoint, things begin to make more sense.
A good way to take the systems view is to think about the influence forces we just discussed and try to dissect why so many team leaders don't practice what they preach. To start, let's choose one core behavior we'd like our team leader to practice: "A good team leader acts as a quality inspector, looking at every line of code written by any of the programmers under their lead." Let's look at each force, and try to imagine a scene from a real-life "enterprise" organization setting. OK, socially, when working with peers and colleagues, things seem to be getting a bit murky, and that team member has a good point: We are under some serious time pressure. You could say we are in survival mode, so should we really refactor that code? On top of this, other team leaders seem to be doing just fine (they do bitch quite a bit about the quality of the products, but hey, don't we all?) without this code quality looming over their heads. So maybe the problem is more systemic? Let's look at the last two factors. These last two points complete our "systems" perspective. They point to a serious flaw: The team leader doesn't have the incentive to do the right thing or, worse, has the incentive to do the wrong thing, or else be berated by the managers. Without solving this issue, as well as the social issues, it will be very difficult to see many team leaders taking that extra step toward the things they really believe in. Uncle Bob is asking team leaders to influence the team in the right direction by changing environmental forces. But whether team leaders actually do this pushing is itself shaped by environmental forces, which is one of the reasons why so many leaders today talk the talk but don't really walk the walk. What would you change in the place you work, at the system level, to enable team leaders to "do the right thing"? What is the first step to making these changes happen? For example, "I'll set up a meeting with the CTO about this" or "I will do a presentation to X folks about this" might be a good step, but your situation may need different steps first. 2016-07-02 00:00 www.developer

32 Tips for MongoDB WiredTiger Performance Tuning By Dharshan Rangegowda, founder of ScaleGrid.io. MongoDB 3.0 introduced the concept of pluggable storage engines. Currently, there are a number of storage engines available for Mongo: MMAPV1, WiredTiger, MongoRocks, TokuSE, and so forth. Each engine has its own strengths, and you can select the right engine based on the performance needs and characteristics of your application. Starting with MongoDB 3.2.x, WiredTiger is the default storage engine. WiredTiger is the most popular storage engine for MongoDB and marks a significant improvement over the previous default, MMAPv1, in several areas. In the rest of this article, I'll present some of the parameters you can tune to optimize the performance of WiredTiger on your server. The size of the cache is the single most important knob for WiredTiger. By default, MongoDB 3.x reserves 50% (60% in 3.2) of the available memory for its data cache. Although the default works for most applications, it is worthwhile to try tuning this number to achieve the best possible performance for your application. The size of the cache should be big enough to hold the working set of your application (Figure 1: the WiredTiger cache size). MongoDB also needs additional memory outside of this cache for aggregations, sorting, connection management, and the like, so it is important to make sure you leave MongoDB with enough memory to do its work. If not, there is a chance MongoDB will get killed by the OS out-of-memory (OOM) killer. The first step is to understand the usage of your cache with the default settings; the cache statistics command in the sketch below reports your current usage. The first number to look at is the percentage of the cache that is dirty. If the percentage is high, increasing your cache size might improve your performance. If your application is read heavy, you can also track the "bytes read into cache" parameter. If this parameter remains constantly high, increasing your cache size might improve your read performance. The cache size can be changed dynamically, without restarting the server (again, see the sketch below). If you would like the custom cache size to be persistent across reboots, you can also add the corresponding setting to the configuration file. WiredTiger uses tickets to control the number of read/write operations simultaneously processed by the storage engine (Figure 2: read and write tickets). The default value is 128 and works well for most cases. If the number of tickets falls to 0, all subsequent operations are queued, waiting for tickets. Long-running operations might cause the number of tickets available to decrease, reducing the concurrency of your system. For example, if your read tickets are decreasing, there is a good chance that there are a number of long-running unindexed operations. If you would like to find out which operations are slow, there are third-party tools available. You can tune your tickets up or down depending on the needs of your system and determine the performance impact. You can check the usage of your tickets, and change the number of read and write tickets dynamically without restarting your server, using the commands in the sketch below. Once you've made your changes, monitor the performance of your system to ensure that they have the desired effect.
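The inline commands and sample output from the original article did not survive extraction. As a minimal sketch, assuming MongoDB 3.2 with the WiredTiger engine and a mongo shell connected with admin rights, the commands discussed above look roughly like this (the 4G cache and 256 tickets are illustrative values, not recommendations):

    // Cache usage statistics: watch the dirty-bytes percentage and
    // "bytes read into cache" relative to "maximum bytes configured".
    db.serverStatus().wiredTiger.cache

    // Resize the cache dynamically, without restarting the server.
    db.adminCommand({ setParameter: 1, wiredTigerEngineRuntimeConfig: "cache_size=4G" })

    // To persist the cache size across reboots, set it in mongod.conf (YAML):
    //   storage:
    //     wiredTiger:
    //       engineConfig:
    //         cacheSizeGB: 4

    // Ticket usage: compare "available" with "out" for read and write tickets.
    db.serverStatus().wiredTiger.concurrentTransactions

    // Tune the number of concurrent read and write tickets dynamically.
    db.adminCommand({ setParameter: 1, wiredTigerConcurrentReadTransactions: 256 })
    db.adminCommand({ setParameter: 1, wiredTigerConcurrentWriteTransactions: 256 })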
Dharshan Rangegowda is the founder of ScaleGrid.io, where he leads products such as ScaleGrid, a MongoDB hosting and management solution to manage the lifecycle of MongoDB on public and private clouds, and Slow Query Analyzer, a solution for finding slow operations within MongoDB. He can be reached at @dharshanrg. *** This article was contributed *** 2016-07-02 00:00 www.developer.com

33 Testing Controllers in Laravel with the Service Container By Terry Rowland, Senior Backend Web Developer at Enola Labs. Automated testing is critical to my method of development. When I first started off using Laravel, the way it did dependency injection via the constructor and controller methods was like magic to me and caused quite a bit of confusion in testing. So, with this post, I hope to clear up some of that confusion by explaining how the controller and Service Container work together and how to leverage the container for testing purposes. I'm going to be showing all my examples in Laravel 5.2.31 and I'll be using Laravel Homestead (Vagrant) to build everything. Here is a link to the repository where you can grab the code. As a side note, if you are familiar with Laravel, don't run the migrations. In this example, I purposely want errors to occur to show we shouldn't be hitting the DB. First, I will create the route I want to use in the app/routes.php file. Now, I'm going to use a repository, specifically for example purposes, but I wouldn't suggest you use one in this case. Typically, repositories are reserved for more complex queries and logic, so using one here would more than likely be overkill. So, let's create a folder under app, called "Repositories", and place the repository code in a file named "UserRepository.php" (a condensed sketch of the repository, controller, and test appears below). This code will work, but I don't plan on using it. I'll also say that this code would be tested if it were going into one of my applications, but for the purpose of this, I want to make sure the "interface" and intention of the class is fleshed out first. Next, let's build the controller we will be using for the route we created. In the app/Http directory, create a file named UsersController.php with the controller code. This is simple code here—just an index function plus leveraging the Laravel Service Container to get a built-out UserRepository so we can return its results. Next, I will jump over to my test and start the process of building this out. My focus here is on creating the basic shell of what I know is minimally needed to make everything work. In the Vagrant VM, and in the application directory, if I were to run the command "phpunit", I would actually (attempt to) hit a database. You would get a green, but behind the scenes you are actually getting an error that was swallowed up. If you would like to see the error, drop a dd() call on the response in the test. The error is expected: I purposely did NOT run the migrations, as mentioned earlier, but if you look at the error (or look in the storage/logs/laravel.log), you will see there's a query exception about the users table not existing. So, with this information we KNOW the repository is actually being used and it's actually working—attempting to hit the DB with a query on the users table. Be sure to remove that dd code because it will no longer be necessary and will interfere with our test later. So, now we can work up a little more code in the test (see the sketch below). Let me break this down a little. I like being explicit in my tests, so I added the mockedUserRepo variable for clarity and type hinting. For example, down in the testing function when I use the arrow in my IDE (PHPStorm), it shows all the possible functions I can use from the repository AND the mock because of how I used the pipe character (|) in the comment. Second, I built up a setup function.
This function must be public and it's VERY important to call parent::setUp() because if we don't, the Laravel application will not be built and we will get several errors. Third, I created a mocked version of the repository with the code $this->mockedUserRepo = Mockery::mock(UserRepository::class), then assigned it to the Service Container under the same name (the first parameter, "UserRepository::class") with the code app()->instance(UserRepository::class, $this->mockedUserRepo). This tells the Service Container to create an entry in the container with the "name" of "App\Repositories\UserRepository", and assign the given class to that name. In this case, it's the mocked UserRepository. Now, even if the name "App\Repositories\UserRepository" already exists in the container, it will be replaced with the mocked version. Also note, this will only happen for each test. So, if I made a new test and didn't mock the repository, it would go back to hitting the database. Finally, we can add the expectation that does the "checking" that we are using the mock properly (the shouldReceive line in the sketch below). If you aren't familiar with Mockery, it's okay; you can learn more here. But, what's great about Mockery is that it's super simple to read. Basically, we should expect the all function to be called once, with no arguments given to the function, and then return null as a result of that "mocked" call. Now, if we run "phpunit" in the terminal, we should get something similar to Figure 1 (the result of running "phpunit" in the terminal). Notice there are 0 assertions; this is expected, but a little confusing. There are assertions going on, but only in Mockery at this point. If you were to comment out the expectation, you would see Mockery bark, saying "Method Mockery_0_App_Repositories_UserRepository::all() does not exist on this mock object". This would be swallowed up again, but by using the method mentioned earlier you can see it. And, as one last precaution against these silent errors, you can replace the loose response check with a stricter assertion on the response content. And now, any time there's a "break" in the test, this will catch it because we have the mock returning null! When I first picked up Laravel, this threw me for a loop. I hope I was able to help demystify some of Laravel's magic in the controllers. P.S. The approach will also work just the same if the constructor is used for injection; you can try it by swapping the method injection in the controller for constructor injection. Good luck out there! Terry Rowland is a Senior Backend Web Developer at Enola Labs, a custom Web and mobile application development company located in Austin, Texas. *** This article was contributed for exclusive publication on this site. *** 2016-07-02 00:00 www.developer.com
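The article's code listings were lost in extraction; what follows is a condensed, hypothetical reconstruction based on the prose above, assuming Laravel 5.2 and Mockery (the class, route, and assertion details are illustrative, not the author's exact code):

    <?php
    // app/Repositories/UserRepository.php (sketch)
    namespace App\Repositories;

    use App\User;

    class UserRepository
    {
        // Queries the users table; with no migrations run, this throws a QueryException.
        public function all()
        {
            return User::all();
        }
    }

    <?php
    // app/Http/Controllers/UsersController.php (sketch)
    namespace App\Http\Controllers;

    use App\Repositories\UserRepository;

    class UsersController extends Controller
    {
        // The Service Container builds UserRepository and injects it into the method.
        public function index(UserRepository $userRepo)
        {
            return $userRepo->all();
        }
    }

    <?php
    // tests/UsersControllerTest.php (sketch)
    use App\Repositories\UserRepository;

    class UsersControllerTest extends TestCase
    {
        /** @var UserRepository|\Mockery\MockInterface */
        protected $mockedUserRepo;

        public function setUp()
        {
            parent::setUp();
            // Replace the container binding with a mock, for this test only.
            $this->mockedUserRepo = Mockery::mock(UserRepository::class);
            app()->instance(UserRepository::class, $this->mockedUserRepo);
        }

        public function testIndexCallsTheRepository()
        {
            // Expect all() to be called once, with no arguments, returning null.
            $this->mockedUserRepo->shouldReceive('all')->once()->withNoArgs()->andReturnNull();

            $response = $this->call('GET', '/users');

            $this->assertEquals(200, $response->getStatusCode());
        }
    }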

34 MapR Spyglass Initiative Eases Big Data Management Hadoop distribution company MapR is rolling out a new management capability to help enterprises get a handle on their big data deployments. Designed to work with the MapR Converged Data Platform, which is composed of several open source big data technologies, MapR's Spyglass Initiative offers deep visibility and customizable dashboards that can help increase user and administrator productivity, the company said in an announcement released at the Hadoop Summit this week. Because enterprises already have monitoring and other tools for managing their infrastructure, the Spyglass Initiative is offering APIs to enable organizations to integrate this tool into their existing sets, according to Dale Kim, senior director of product marketing at MapR, who spoke to InformationWeek in an interview. The visibility into the converged environment covers all the components of the MapR Converged Data Platform, including NoSQL and Hadoop. [Where is Hadoop headed? Read Hadoop Creator Cutting Talks Big Data Past, Present, and Future.] "This is important for a lot of our customers who are continuing to expand their deployments with new use cases, more data, and more users," Kim said. "They want to get an enhanced view and [a] better handle [on] how they want to grow their clusters and allocate resources to best take advantage of data." MapR calls the Spyglass Initiative "a multi-release effort" that will roll out over several quarters. Phase 1, which MapR calls MapR Monitoring, is rolling out this summer. It includes node/infrastructure monitoring, cluster space utilization monitoring, YARN/MapReduce application monitoring, and service daemon monitoring. Administrators can customize dashboards to visualize the metrics they want on both desktops and mobile devices. They can also access the analytics of log data. MapR said that these dashboards are defined with JSON, that they are easy to export and import in Grafana and Kibana, and that they integrate with other tools via REST API. MapR has also announced the launch of the MapR Community Exchange, an online community of MapR users where they can share and collaborate on best practices. The community was launched three months ago, Kim told InformationWeek, and is designed to enable users to share use cases, dashboards, code, tutorials, videos, and demos. One of the challenges for enterprises looking to deploy open source technologies can be that each technology has its own release schedule, and new releases may not have been tested for interoperability with older releases. That's a reason why enterprise customers may often prefer commercial software releases. To address this challenge, MapR is introducing the Ecosystem Pack. Kim said this Pack is a selected set of stable and popular components from the open source ecosystem that are fully tested to be interoperable and are available via installer or package. These Ecosystem Packs will be released on a quarterly basis, with updates coming monthly to address open source project bug fixes. The MapR Ecosystem Pack version 1.0 is slated for release in August, which is also when MapR release 5.2 will be issued. MapR updated its Converged Data Platform with support for Docker containers and additional security features in March. 2016-07-02 14:06 Jessica Davis

35 SMEs move into cyber criminals’ crosshairs The contribution made to the economy by small and medium-sized enterprises (SMEs) is impossible to ignore. In the UK, SMEs represent 99.3% of all private sector businesses, contributing £1.6tn to the nation’s economy each year, according to the Department for Business, Innovation and Skills. In the light of these numbers, it should not be surprising that SMEs are now the top target for cyber criminals. Many SMEs, however, are still blissfully unaware of this fact – 82% of companies still believe they are too small to be targeted by cyber criminals. Yet the reality is that 92% of hacking incidents in 2014 were carried out against SMEs. According to the Federation of Small Businesses (FSB), smaller firms in the UK are targeted seven million times a year, costing the national economy £5.26bn. The amount of damage done by these breaches is rising, with the worst breaches costing up to £310,800 each in 2015, up from £115,000 in 2014, according to a recent survey published by digital economy minister Ed Vaizey. While about 23% of SMEs have caught on to the potential risk posed by cybercrime, too many still rely on outdated technology that provides only perimeter security and completely ignores file-based threats. As these sorts of attacks make conventional security methods utterly useless, an increasing number of hackers are seeing them as their most valuable tool. According to a survey by the Institute of Directors, nine out of 10 business leaders believe that cyber security is important, yet only half have a formal strategy in place to actually protect themselves from threats. 2016-07-02 14:36 Chris Dye

36 Security Think Tank: Biometrics have key role in multi-factor security In offering an additional method of authentication, biometrics provide an extra factor of security. This represents a significant opportunity for organisations to reduce their reliance on traditional passwords and their inherent flaws, not least of which is that users write them down. However, although biometrics accordingly offer an attractive proposition, there are limitations. First, biometrics may not be secret. For example, fingerprint authentication is the most popular biometric method, yet people’s fingerprints are everywhere. Second, biometric data is personally sensitive, and the handling of this data represents a significant risk in itself. When looking at the privacy of biometric data, it is important to understand how it tends to be used. A scan will take specific data points and record them in a format that is appropriate for that supplier. The data should then be encrypted so that if it is subsequently compromised and decrypted it is likely to be of limited use. More dangerous is when more identification information than necessary is taken – full fingerprints, full iris scans, complete voice analysis, etc. If this information is compromised, then a much larger data set may be leaked, which could be used to defeat other authentication schemes reliant on that particular biometric attribute. 2016-07-02 14:36 Richard Hunt

37 Enterprises: Tear Down Your Engineering Silos Silos stifle creativity and make it difficult to work on collaborative projects, even with a person sitting right next to you in the office. What's an engineer, IT pro, or CIO to do? Will Murrell, a senior network engineer with UNICOM Systems, knows a thing or two about silos. UNICOM develops a variety of software and other tools to work with IBM's mainframe, Microsoft Windows, and Linux. Murrell recently talked with InformationWeek senior editor Sara Peters about a new breed of engineers trying to break down corporate barriers for good. 2016-07-02 14:36 www.informationweek.com

38 IBM Adds New Bluemix OpenWhisk Tools for IoT Development IBM added new tools for its Bluemix OpenWhisk serverless computing platform that utilizes Docker. OpenWhisk also features user interface updates. IBM has announced a set of new tools for its Bluemix OpenWhisk event-driven programming model, which uses Docker containers. The new tools will enable developers to build intuitive applications that can easily connect into the Internet of things (IoT), as well as tap into advanced services such as cognitive, analytics and more—without the need to deploy and manage extra infrastructure, according to IBM. "What OpenWhisk allows a developer to do is without any server infrastructure they upload their snippet of code, they choose when they want that code to run—like in response to something changing in the database in the cloud, or someone calling a Web URL—and then when that event occurs, the code gets run and IBM will auto-scale it for them," Mike Gilfix, vice president of Mobile & Process Transformation at IBM, told eWEEK. "So we make sure that it scales to as much demand as they need and they only pay for the compute capacity that they need at the time that the code runs," he said. Announced at DockerCon 2016, IBM's new OpenWhisk tools—NPM Module and Node-RED—will enable developers to more rapidly build event-driven apps that automatically execute user code in response to external actions and events, according to the company. Moreover, IBM also plans to roll out new updates to the OpenWhisk user experience to make it easier for developers, including step-by-step workflows, new wizards to configure third-party services and feeds, and a new editor to manage sequences of actions, said Andrew Hately, CTO of IBM Cloud Architecture. Node-RED is IBM's open-source IoT tool for creating event-driven applications. It enables developers to start prototyping their ideas without having to first write code. Node-RED can invoke triggers and actions within OpenWhisk, giving apps access to Watson analytics, the IBM IoT platform and a host of other Bluemix services. Hately said IBM has been working to make OpenWhisk more intuitive for people developing in whatever programming language they want so they can benefit from the event-driven, serverless style of development. "A lot of this is just continuing the drumbeat of making this more consumable to developers working in the polyglot, language-of-choice style of development," he said. With that in mind, IBM has continued with its first-class support of Node.js because of its popularity for IoT and device developers, Hately said. "On the Node side we tie into our Node-RED platform," he said. "This is all about taking multiple open technologies that are getting large developer communities and continuing to enhance them and better integrate them. IoT is probably the biggest example of people wanting to do very, very lean, message-based integrations." "Within the node community, we have a very large contingent of Node.js users," said Todd Moore, vice president of Open Technology at IBM. "And we knew we could make things much easier for them. We see Node as one of the dominant languages within Bluemix these days. More than half of what we see deployed [on Bluemix] is using Node." 2016-07-02 13:36 Darryl K
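Gilfix's description maps onto a very small amount of code. A hypothetical OpenWhisk action in Node.js, with the standard wsk CLI workflow sketched as comments (the names and the trigger are illustrative, not from the article):

    // hello.js -- an OpenWhisk action: a function run per event, auto-scaled by the platform.
    function main(params) {
        return { payload: 'Hello, ' + (params.name || 'world') };
    }

    // Sketch of the wsk CLI workflow:
    //   wsk action create hello hello.js              # upload the snippet of code
    //   wsk action invoke hello --result --param name developer
    //   wsk trigger create dbChange ...               # e.g., wired to a database change feed
    //   wsk rule create onChange dbChange hello       # run the action whenever the event occurs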

39 Eclipse Updates Four IoT Projects, Launches a New One The Eclipse Foundation announced new releases of four open-source IoT projects to accelerate IoT solution development. The Eclipse Foundation, which has been leading an effort to develop open-source technologies for Internet of things application development, announced that the Eclipse Internet of Things (IoT) Working Group has delivered new releases of four open-source IoT projects the group initiated over a year ago. The four projects, hosted at the Eclipse Foundation, are Eclipse Kura 2.0, Eclipse Paho 1.2, Eclipse SmartHome 0.8 and Eclipse OM2M 1.0. These projects are helping developers rapidly create new IoT solutions based on open source and open standards. "We are certain that the Internet of Things will only be successful if it is built on open technologies," Eclipse Foundation Executive Director Mike Milinkovich said. "Our goal at Eclipse is to ensure that there is a vendor-neutral open source community to provide those technologies." Eclipse IoT is an open-source community that provides the core technologies developers need to build IoT solutions. The community is composed of more than 200 contributors working on 24 projects. These projects are made up of over 2 million lines of code and have been downloaded over 500,000 times, Eclipse officials said. Moreover, the Eclipse IoT Working Group includes 30 member companies that collaborate to provide software building blocks in the form of open-source implementations of the standards, services and frameworks that enable an open Internet of things. In addition to updating four of its existing IoT projects, Eclipse also proposed a new one. Eclipse Kapua is an open-source project proposal from Eurotech to create a modular integration platform for IoT devices and smart sensors that aims to bridge operation technology with information technology, Milinkovich said. Eclipse Kapua focuses on managing edge IoT nodes, including their connectivity, configuration and application life cycle. It also allows aggregation of real-time data streams from the edge, either archiving them or routing them toward enterprise IT systems and applications. "As organizations continue to implement IoT solutions, they are increasingly turning to Eclipse IoT for open-source technologies to implement these solutions," Ian Skerrett, vice president of marketing at the Eclipse Foundation, told eWEEK. "For instance, Eclipse Paho has become the default implementation for developers using MQTT [formerly MQ Telemetry Transport], and Eclipse Kura significantly reduces the costs and complexity of implementing an IoT gateway. It is clear open source will be a major force in the Internet of things, and Eclipse IoT has become a significant source of open-source technology for IoT." Eclipse Paho provides open-source client implementations of the MQTT and MQTT-SN messaging protocols. The new Paho 1.2 release includes updates to existing Java, Python, JavaScript, C, .NET, Android and Embedded C/C++ client libraries. Improvements in the new version include automatic reconnect and offline buffering functionality for the C, Java and Android clients; WebSocket support for the Java and Python clients; and a new Go Client, which is a component for Windows, Mac OS X, Linux and FreeBSD. Paho 1.2 is now available. 2016-07-02 13:36 Darryl K
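For a concrete sense of what a Paho client provides, here is a minimal subscriber using the Eclipse Paho Python client (the paho-mqtt package); the broker address and topic are placeholders:

    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        # Subscribe on connect so the subscription is restored after reconnects.
        client.subscribe("sensors/#")

    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload)

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("broker.example.com", 1883, 60)
    client.loop_forever()  # blocks, handling network traffic and reconnections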

40 Codenvy's Language Server Protocol Reduces Programmer Envy Codenvy, Red Hat and Microsoft collaborate on new language protocol for developers to integrate programming languages across code editors and IDEs. Codenvy, Microsoft and Red Hat announced on June 27 the adoption of a language server protocol project to provide a common way to integrate programming languages across code editors and integrated development environments (IDEs). The companies announced the new protocol during the opening general session of the DevNation 2016 conference in San Francisco. The project originated at Microsoft, which introduced it to the Eclipse Che IDE platform project, announced earlier this year at the EclipseCon conference in Reston, Va. The new protocol extends developer flexibility and productivity by enabling a rich editing experience within a variety of tools for different programming languages. "Historically, most programming languages have only been optimized for a single tool," Tyler Jewell, Codenvy CEO and Eclipse Che project lead, said in a statement. "This has prevented developers from using the editors they know and love, and has limited opportunities for language providers to reach a wide audience. With a common protocol supported by Microsoft, Red Hat and Codenvy, developers can gain access to intelligence for any language within their favorite tools." Jewell told eWEEK the "dirty problem" with development tools for the past decade has been that developers had to choose a programming language and then be stuck with the tooling available for that language—because the tooling capabilities are always bound to proprietary APIs and componentry that changes for each programming language. "So if you wanted to change programming languages, you generally had to change your IDE," he said. "And if you have an IDE that you like, there's generally not an easy way to get multiple programming languages supported on it." However, the new Language Server Protocol makes it possible for any IDE to work with any programming language. So with that, developers can choose their tools and work with any programming language, and programming language authors can write their language as they see fit. Jewell said the Language Server Protocol is an open-source project that defines a JSON-based data exchange protocol for language servers, hosted on GitHub and licensed under the Creative Commons and MIT licenses. By promoting interoperability between editors and language servers, the protocol enables developers to access intelligent programming language assistants—such as find by symbol, syntax analysis, code completion, go to definition, outlining and refactoring—within their editor or IDE of choice, he said. The first two tools that support this capability are Eclipse Che—the next-generation Eclipse IDE—and Microsoft's Visual Studio Code, Jewell said. Codenvy helped to achieve Eclipse Che support, and Microsoft, as originator of the protocol, put its engineers to work to get VS Code to support it. "The Eclipse Che team and Red Hat have also announced they're adopting Visual Studio Code's Language Server Protocol—an open protocol that enables some of the rich editing features in VS Code," Joseph Sirosh, corporate vice president of the Data Group at Microsoft, said in a blog post. "This shows that the open-source contributions from VS Code are being adopted by tool and language providers, giving developers the flexibility to pair their favorite language with their favorite tools."
" "We have defined the common language server protocol after integrating the OmniSharp for C# and TypeScript servers into VS Code," Erich Gamma, a Microsoft Distinguished Engineer and leader of the Visual Studio Code project, said in a statement. "Having done a language server integration twice, it became obvious that a common protocol is a win-win for both tool and language providers: in this way, any language provider can make their language support available so that it is easily consumable by any tool provider. " Before joining Microsoft in 2011, Gamma was a distinguished engineer at IBM where he was a key leader and contributor to the Eclipse platform and a leader of the Eclipse Java Development Tools project. With the Language Server Protocol, programming language providers can support multiple tools across a variety of operating systems. And the project has created a language server registry, where language servers are published as part of a global registry, built by Codenvy as an Eclipse project and hosted by the Eclipse Foundation, to make language servers discoverable for any tool to consume, Jewell said. 2016-07-02 13:36 Darryl K

41 Box Shuttle Offers Route To Cloud Box plans to help enterprise customers migrate content to its cloud data centers with a service offering called Box Shuttle. For the past few years, explained CEO Aaron Levie in a phone interview, Box has been introducing technologies that help customers manage their data in regulated environments, like Box Keysafe, for managing encryption keys, Box Governance, for managing data retention and legal holds, and Box Zones, for managing data residency requirements. "But there's always been one nagging holdup for large enterprises as they want to go retire legacy systems," said Levie. "You still have this challenge of how do I move large amounts of data, especially data that's organized in a very particular way, that deals with complex workflows in a legacy environment. And we weren't giving customers a solution to go do that." Box Shuttle, introduced as beta software on Wednesday with general availability planned this fall, aims to pave the path by which enterprises can move data to Box's cloud, where (ideally) collaboration, sharing, and management of files become easier, enhancing organizational productivity. Box Shuttle allows companies to move data from services like Documentum, SharePoint, and OpenText, from legacy network file shares, or from other cloud solutions into the Box environment. By offering to help companies adopt its services, Box is travelling a well-trodden road. Google built a similar migration service in 2010, Google Apps Migration for Microsoft Exchange, to help companies shift data from Microsoft Exchange to Google Apps. And it built migration tools to bring Lotus Notes and BlackBerry Enterprise Server messages to Google Apps. In 2013, Microsoft built a tool to help Gmail users migrate to Outlook. But Box Shuttle's service component is a step beyond self-service migration tools. "It's both a professional services offering, where our people will come in and help manage the migration of data from legacy environments, and software that does the underlying migration work," said Levie. Box's service organization, Box Consulting, debuted in 2013. The company says it will provide individualized migration plans for customers, which include content analysis and lifecycle assessment, mapping of user permissions and attributes, and progress tracking under the customer's control. [See 8 Ways Cloud Storage Delivers Business Value.] "Our belief is that in the next three to five years, there's no reason why a customer should have to have a data center or document management system or servers in their legacy environment. They should be able to move all of that to the cloud," said Levie, who noted that Box now has more than 62,000 enterprise customers, including 59% of the Fortune 500. Levie says he sees commitment to the cloud as a point of differentiation with the company's competitors like Dropbox, Google, and Microsoft. During the company's Q1 2017 conference call for investors in June, he said that Box is the only enterprise content management platform that's completely cloud-based, which allows it to deliver innovations rapidly. Levie's claim hinges upon the extent to which competing cloud services qualify as "enterprise content management platforms." Perhaps more salient to Box's success is the company's willingness to seek partnerships and product integrations with companies like Adobe, IBM, Microsoft, and Salesforce, among others. Providing collaboration services seems to work best for companies that actually collaborate.
2016-07-02 12:06 Thomas Claburn

42 IBM Opens Blockchain-Oriented Bluemix Garage In NYC In the digital economy, blockchain transactions are believed likely to replace many existing electronic transactions and provide a hard-to-crack record of the event that is captured in multiple locations. Anticipating a new generation of blockchain-based financial systems, IBM is opening a Bluemix Garage in New York City in hopes of attracting future blockchain developers to its Bluemix cloud. Blockchain was the innovation captured in the implementation of Bitcoin, where the execution of an electronic transaction also became its accounting record. As one transaction follows another, a chain of such records is built up on multiple computers that can be reconstructed by different participants in the chain. The process provides a distributed general ledger that's hard to tamper with from the outside. A whitepaper produced by the IBM Institute for Business Value cites the benefits that blockchain systems will bring to their users, including improving the security and integrity of transactions while reducing the friction involved in completing them. The paper goes so far as to say that in the future, blockchain transactions will allow organizations to reorganize into institutions capable of more fluid changes and exchanges with other organizations: Blockchain technology has the potential to obviate intractable inhibitors across industries. As frictions fall, a new science of organization emerges, and the way we structure industries and enterprises will take novel shape. An implementer of blockchain could produce new mobile banking and wealth management applications. Mizuho Financial Group in Tokyo recently announced a pilot project to test blockchain as a means of virtual currency settlements. The pilot came out of the IBM garage in Tokyo. It's exploring how payments in different currencies can be quickly settled, potentially leading to the launch of new financial services, according to IBM's June 28 announcement. The Mizuho project makes use of Hyperledger open source code. Hyperledger is a blockchain-supporting project hosted by the Linux Foundation. Blockchain is now also the topic of developer conferences, such as the Fintech conference in Washington, D.C., on Aug. 2. IBM emphasizes application development skills, web development, transaction systems, use of analytics, cognitive computing, and advanced IBM systems such as Watson at its garage facilities. The Bluemix Garage will be established at 315 Hudson St. in SoHo at a campus run by Galvanize, a technology education service. The area is already the home of many of the city's technology startups. Galvanize advertises that its courses will turn out a data scientist or financial technology expert in 12 months of full-time coursework. IBM opened a Bluemix Garage in San Francisco last year at a building occupied by Galvanize. Big Blue has also added Bluemix garages in Toronto, Tokyo, London, Nice, and Singapore. Company spokesmen have said in the past they plan to open one in Melbourne, Australia, as well. [Read IBM Opens Fourth Bluemix Garage in France.] The garage will also include access to consultants with expertise in IBM Design Thinking, IBM's methodology for moving from creative idea through iterative product design and into production. The garage is also a place where developers can test drive the Bluemix cloud. They have access to tools, open source code, and IBM software.
IBM would be happy to see more developer activity on Bluemix at a location close to its Watson AI system headquarters in New York. It's also been a partner with the city in encouraging startups to use the city's Digital NYC platform, where infant companies can get connection services and a chance to collaborate with 8,000 other startups already using it. At its New York garage, IBM wants "to advance the science of blockchain, helping to remove complexity and make it more accessible and open. Financial services, supply chains, IoT, risk management, digital rights management and healthcare are some of the areas that are poised for dramatic change using blockchain," according to this week's announcement. 2016-07-02 11:05 Charles Babcock
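To make the chained-records idea concrete, here is a tiny, illustrative Python sketch (not IBM's or Hyperledger's implementation) of a ledger in which each block commits to the previous one by hash, so altering an old record breaks every later link:

    import hashlib, json

    def block_hash(block):
        # Hash the block's contents, which include the previous block's hash.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    chain = []
    prev = "0" * 64  # placeholder hash for the genesis block
    for tx in ["alice->bob:10", "bob->carol:4"]:
        block = {"tx": tx, "prev": prev}
        prev = block_hash(block)
        chain.append(block)

    # Tampering with an earlier transaction changes its hash, breaking the link
    # stored in the next block -- which is what makes the shared ledger hard to alter.
    chain[0]["tx"] = "alice->bob:1000"
    assert block_hash(chain[0]) != chain[1]["prev"]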

43 7 Days: A week of Windows 10 wonders, block-buster movies, and a nutty name for Android 7 Days is a weekly round-up of the Editors' picks of what's been happening in the world of technology - written with a dash of humor, a hint of exasperation, and an endless supply of (Irish) coffee. Another action-packed week has come to an end, filled with exciting announcements, juicy rumors, and a few moments that raised an eyebrow or two. As ever, 7 Days is here to walk you through what's been going on across the tech world this week - there's lots to get through, so grab a glass/mug/stein of something delicious, and get comfy. Sadly, we begin this week with terrible news from the United States, where the driver of a Tesla Model S was killed in a horrifying accident while using the car's self-driving Autopilot mode. While some of the circumstances of the crash have been established, investigations are ongoing - but it's already raised concerns about the use of self-driving automotive technology on public roads. A nasty new strain of ransomware was discovered, known as 'Bart' - a variant of earlier Locky and Dridex strains. Notably, Bart does a system language check prior to encrypting a user's files - and the process is automatically cancelled if the language in use on the computer is found to be Russian, Ukrainian or Belarusian. Many were hoping that Apple would release fresh betas this week of the next major updates to its operating systems, which it unveiled at its Worldwide Developer Conference a few weeks ago. No such luck. Instead, it released the fourth betas of the much smaller updates for its current OS versions: iOS 9.3.3, OS X 10.11.6 and tvOS 9.2.2, as well as beta 1 of Server 5.1.6. There have been all sorts of rumors flying around lately about Apple's future product plans, including a waterproof Apple Watch 2, a new color for the next iPhone, and more. After discontinuing its Thunderbolt Display five years after its launch, Apple is believed to be developing a new version with an integrated GPU. And Apple was recently awarded a patent for a feature that would actively block people from using their phones to take photos or record videos at concerts. Hewlett Packard Enterprise was awarded damages of $3 billion from Oracle, four years after successfully suing the company for withdrawing its support for HP's Itanium servers. HP, now a separate business from Hewlett Packard Enterprise, unveiled its new Chromebook 11 G5 this week. The $189 device includes a touchscreen, which HP believes will prove very handy when Android app support comes to Chrome OS later this year. But while HP strengthens its support for Google's software, Dell is pulling away from it. Dell announced that it won't be launching any more Android tablets, focusing instead on Windows 10 2-in-1s - and it won't be delivering any more updates to the Android devices it's already sold. Another week, another massive security flaw discovered on Android devices. A security researcher discovered an exploit-chain that can be used on those devices equipped with Qualcomm chips to compromise full disk encryption and expose all user data. Google is hoping to make programming easy and fun for kids to learn, with its new 'Project Bloks' tool. For more experienced developers, Google made its Awareness APIs available via Google Play services, making it easier to create context-aware apps that intelligently react to users' locations and activities.
A major update to Google Maps and Earth arrived this week too, allowing users to enjoy the view of our beautiful planet in greater detail than ever before. In May, Google invited the world to help choose the sweet, tasty name of its next major OS update. On Thursday, it announced the new name: 'Android N' will now be known as Android Nougat. The choice came as a surprise to some, as many had expected the company to go with another nutty option. Nougat - which will also be known as Android 7.0 - is coming this summer, but there are still hundreds of millions of devices that haven't been upgraded yet to Android 6.0 Marshmallow. But the Marshmallow update slowly continues to make its way to more devices, including many from Samsung. This week, it rolled out to the Galaxy Note 4 on T-Mobile US...... and to T-Mo's Galaxy Note Edge as well. The Galaxy Tab E 8.0 on Verizon also got a taste of that sweet Marshmallow goodness...... and the global Android 6.0.1 rollout also began for the Galaxy S5 Neo. HTC is believed to be developing two new Nexus handsets this year, which will be 'showcase' devices for Android 7.0 Nougat. Key specs of the new devices - which have the piscine codenames 'Marlin' and 'Sailfish' - were revealed this week. Pre-orders opened in the US for the new Moto G4 and G4 Plus, priced at $199.99 and $249.99 respectively. And British hardware brand Wileyfox launched three low-cost 'Spark' smartphones running Cyanogen OS 13, based on Android 6.0 Marshmallow. The new handsets feature some pretty solid specs, despite their remarkably low price tags. OnePlus launched its new flagship-class handset last month - but it's been a frustrating experience for many of those who ordered the handset in the UK. The company blamed a "schedule misalignment" and "customs issues" for delays in orders of the OnePlus 3 arriving to British buyers. OnePlus attempted to appease irate customers by offering them vouchers - but some customers were then sent the wrong vouchers, and many who actually received the right vouchers found that they weren't working, or that they were still being charged for shipping, which the company had said they wouldn't have to pay for. As one of our readers put it, "it wouldn't be a OnePlus launch without a customer service nightmare". Adding to the woes of OnePlus fans in the UK, the company warned that it may have to increase prices, due to economic instabilities caused by the country's decision to exit the European Union. BlackBerry is believed to be working on three new Android handsets, details of which were revealed this week. Meanwhile, a report claimed this week that BlackBerry had ended production and sales of its BlackBerry 10 devices, ahead of its full transition to Android. However, the company later refuted that claim, and said that only its older BlackBerry Classic handsets had been discontinued. Now, from BlackBerry 10, to Windows 10...... as Microsoft backed down on its increasingly aggressive efforts to pressure owners of older Windows 7 and 8.1 PCs to upgrade to its newest OS. The company made changes to its upgrade prompt, making it much easier to opt out of installing Windows 10. However, some users will now be shown an even more intrusive full-screen notification - but many users won't see this prompt at all. By the way, if you haven't upgraded to Windows 10 yet, don't forget that Microsoft's free upgrade offer ends on July 29.
Microsoft ruined its own surprise a day early, but on Wednesday, it officially announced that the Windows 10 Anniversary Update will arrive on August 2. And in case you were worried, the update will also begin rolling out for Windows 10 Mobile devices on the same day. Microsoft also revealed that Windows 10 is now installed on over 350 million devices, eleven months after its launch. Microsoft released its latest cumulative update for Windows 10 PCs and phones on Wednesday, bumping the build number up to 10586.456. You can find out about the improvements in that update here. On Tuesday, Windows 10 Insider Preview build 14376 was released to the Fast ring, bringing improvements (and known issues) for PCs and for phones. It was followed on Thursday by the arrival of build 14379 to the Fast ring, and again, it included various fixes and issues for PCs and for phones. There was some good news for those on the Slow ring too, as build 14372 was released for PCs and phones. New Insider Preview builds have been coming at a relentless pace in recent weeks. But there won't be any more builds for a few days, as the development team is taking some time off to enjoy the long weekend ahead of Independence Day on July 4th. Microsoft announced the Windows Insider MVP Program on Friday - and some Insiders will have the opportunity to become MVPs early next year. Microsoft announced that its UK CEO will be stepping down on November 1. Its new chief executive, Cindy Rose, started work at the company on Friday. Microsoft UK has been stretching the definition of "special offers" for a while now, with deals often lasting for months at a time. After launching a 25% discount on its Band 2 in April, reducing its price to £149.99, the company extended that deal this week for another three months. Even more absurd is the "special offer" on its Lumia 640 XL, which has also been extended until the end of September. By that time, the deal will have been running for eight months. That's not a special offer; that's clearly a permanent price cut, and it seems a little disingenuous for Microsoft to be marketing it as some kind of extraordinary promotional deal. Long-term price cuts are common for companies clearing out stocks to make way for newer devices, which raises some interesting questions about the Band 2. Could this be the Band 3? A Microsoft video showed a wearable device that looks like the Band 2, but with some notable differences. Double tap to wake was a popular feature on various Lumia devices over the years, but it's still not available on Microsoft's Windows 10 Mobile flagships. Over seven months after their launch, it looks like Microsoft is finally getting around to adding that feature to its Lumia 950 XL. The uniquely stylish NuAns NEO, a high-end Japanese Windows 10 Mobile handset, is going global thanks to a Kickstarter campaign. The device, which supports Continuum, and can be customized with two separate removable backplates, is already available in Japan, and will begin shipping internationally in November. Meanwhile, AT&T has inexplicably resumed selling the Nokia Lumia 1520 - a device that first went on sale two and a half years ago, and which the carrier discontinued in April 2015. Even more astonishing is that the company is charging almost $600 for the less-than-new Windows Phone. Almost nine months after they were first announced - and six months after going on sale in North America - Microsoft launched its 1TB Surface Book and Surface Pro 4 in ten more markets on Thursday.
For students who like to work hard and play hard, Microsoft is offering a new Surface + One Bundle in the US. The bundle is extensively customizable, and offers savings of up to $499 - even on some of the most affordable combinations. Microsoft released Visual Studio 2015 Update 3 on Monday, including Xamarin 4.1 and support for Apple tvOS development. It also delivered an update to Sway for Office 365 subscribers, adding some handy new features. Arrow Launcher 2.0 - a Microsoft Garage project for Android devices - also arrived on Thursday, adding Office 365 integration, among some other new features. And Microsoft also released a new version of Office for Android, adding a few small but welcome new features. There was an ugly surprise for Evernote users on Wednesday, as the company announced 33% price increases to its paid plans, and severe new limits for its free Basic plan, which will restrict users to syncing notes between just two devices. It's worth noting (ha!) that the new annual $69.99 price of the Premium plan is now the same as buying an Office 365 Personal subscription, which seems to offer a heck of a lot more for the same money. Microsoft updated its Groove music app on Windows 10 this week, with an extra splash of color and various other improvements. The update is currently available to those on the Windows Insider Fast ring. The company is also hoping to encourage more users to subscribe to its Groove Music Pass, by offering them three extra months of free service when they pay $9.99 for one month. After what seemed like an eternity in beta, Facebook officially launched its new Messenger app for Windows 10 Mobile on Thursday. The new VLC media player app launched for Windows 10 PCs and phones, and for HoloLens, on Wednesday. It's officially a beta for now, but it includes native Windows 10 features, such as Live Tiles and Cortana integration. Wells Fargo - one of America's biggest banks, and one of the world's largest companies - launched its new Windows 10 Mobile app, but it's ending support for Windows Phone 8.1 users. Indeed, owners of those older devices won't even be able to use its mobile banking site. PayPal killed off its Windows Phone 8.1 app on Thursday - but on Friday, it emerged that the company is working on a Windows 10 Mobile app. Microsoft's official Spanish Lumia Support account raised the hopes of Windows phone users this week, when it revealed that the company is "working with our friends at Snapchat" to bring the app to its devices. The information was corroborated later in the week by another Microsoft support agent. We'll have to wait and see if that information proves to be correct. But another big brand is turning its back on Windows phone users. The Amazon App for Windows handsets will stop working on July 25, and the company says its customers should use its mobile site instead. There was a surprising addition to the Windows Store this week, as gaming platform Steam released its official authenticator app for Windows phones. But there was bad news for Xbox Fitness fans, as Microsoft announced that the service will be killed off next year. Microsoft announced that it will kick off the 'Ultimate Game Sale' next week, with huge savings available on Xbox and Windows games, and an extra 10% off for Xbox Live Gold subscribers. Microsoft announced the Xbox Play Anywhere program at the E3 gaming expo last month, which will allow gamers to purchase a title once and play it on both their Xbox consoles and Windows 10 PCs.
The first game to support Play Anywhere is believed to be ReCore, which will be released on September 13. Microsoft shared details of some "record-breaking" stats from its Xbox showcase event at E3, where it unveiled the new Xbox One S, and teased its upcoming Project Scorpio console. And Microsoft also revealed that its $149.99 Xbox Elite controller has been a big hit - since its launch six months ago, over a million have been sold. Four new Xbox 360 games were added to the Xbox One backward compatibility list, which now includes well over 200 games. Microsoft subsidiary Mojang announced on Tuesday that Minecraft: The Movie will be released on May 24, 2019, including availability in 3D and IMAX. And from one block-buster (sorry) movie to another: two years after it was first announced, it was confirmed this week that the Tetris movie will actually be a sci-fi trilogy. Its producer says that a trilogy was necessary "because the story we conceived is so big". And after almost three years, Nintendo emulator Dolphin finally got a long-awaited update with a wide range of improvements, bumping it up to version 5.0. But our journey around the tech world ends this week in Florida, where a man is suing Apple, claiming that its iPhone, iPad and iPod are "substantially the same" as his 1992 device concept drawings. The lawsuit accuses Apple of 'dumpster diving' for product ideas, and despite his patent application being declared abandoned in 1995 after he failed to pay the application fees, the man is demanding at least $10 billion in damages from the company, along with "a reasonable royalty" of 1.5% of all of Apple's future sales. Good luck with that... Before we wrap things up for another week, let me first highlight a few extra bits around the site that I hope you’ll enjoy reading. Rich Woods published his review of the Huawei MateBook, a stylish Windows 10 tablet with a range of accessories, which was evidently inspired by Microsoft's Surface Pro tablets. Shreyas Gandhe shared his review of the Star Cloud PCG02, a Windows 10 PC stick that plugs into a TV or monitor. There's clearly a lot to like about the low-cost device - but there's also plenty of room for improvement. And after Microsoft announced this week that Windows 10 has now been installed on over 350 million devices, I took a closer look at the company's progress so far. It said last April that Windows 10 will be installed on a billion devices within three years - but how likely is it to reach that goal? Stay tuned to Neowin in the days ahead for what’s sure to be another exciting week, filled with official news, exciting updates and plenty of insights from around the world of technology. For now, though, there’s plenty more to read across the site – including loads of interesting discussions over on our forums. 2016-07-02 11:00 Andy Weir

44 Uber App Update To Track Driver Behavior Uber soon will know the answer to a question raised by bumper stickers on many vehicles traveling America's highways: "How am I driving?" In a forthcoming update to the app used by Uber drivers, the transportation platform company has implemented safety telematics that measure the braking, acceleration, and speed of the vehicles used by its drivers. The update also adds notifications designed to promote better driving, like reminders to take breaks and to mount the phone used for the driver app on the dashboard rather than keeping it in-hand. It adds daily driving reports that compare driving patterns to those of other drivers. The update coincides with the approach of the Fourth of July in the US, a holiday consistently marred by driving fatalities. Uber says its driver app improvements can help reduce driving risks. "Today too many people are hurt or killed on the roads," wrote Uber chief security officer Joe Sullivan and MADD national president Colleen Sheehey-Church in a blog post on Wednesday. "While alcohol is the leading cause of traffic crashes, there are other behaviors that can put people at risk -- for instance if drivers are on drugs, haven't gotten enough sleep or are distracted." Data can help Uber drivers operate more safely. But it also helps Uber defend itself against competitors that would see the company hobbled by regulation, and against critics who claim the company's business practices are unsafe. To counter its detractors, the company published data showing a correlation between declining DUI arrests and Uber usage. The Atlanta police department, the company said, reports that arrests for drunk driving fell from 2,243 to 1,535 between January 1, 2010 and January 1, 2016 -- a 32% decline. During that period, Uber pickups surged, suggesting a possible correlation. Uber is careful to avoid claiming credit for the DUI arrest decline, because correlation is not causation. But Sullivan and Sheehey-Church said in their blog post that Uber riders see a link between the service and reduced drunk driving. Certainly there's some interaction there. With the addition of telematic data about driver behavior, Uber should be able to make an even stronger case about the safety of its service, particularly compared to other transportation options that may not have drivers under comparable surveillance. Access to a broader set of data about how its drivers actually drive will allow the company to identify risky drivers and to correlate rider complaints with real measurements of vehicle braking, acceleration, and speed. Uber began tracking driver behavior in Houston last November, according to The Wall Street Journal. The company says it plans to introduce the new telematics features in its driver app in 11 cities over the next few weeks. It is, however, not the first business to collect information about its drivers. Fleet management companies have been collecting telematic data for years. More recently, insurance companies like Allstate have begun offering a rate discount for drivers who accept telematic monitoring. In the years ahead, such technological oversight is likely to become difficult to avoid, because theoretical privacy risks will have trouble competing with the prospect of saved lives. [Read Google, Uber, Ford Form Self-Driving Car Coalition.] Studies indicate that telematics lead to better driving.
The SAMOVAR research program conducted in Europe, for example, found that simply recording vehicle data led to a 28.1% decrease in the accident rate over a 12-month period, a result attributed to driver awareness that behavior can be checked. There's a potential downside for Uber, however. By amplifying its capacity for driver oversight, Uber runs the risk of making its contract drivers look like employees to government regulators. Uber recently settled a challenge to its classification of drivers as independent contractors, thereby avoiding a judicial ruling on the issue. But that does not preclude future litigation, and being able to exercise control over how work is done -- how drivers drive -- is one of the factors the IRS considers when evaluating whether a worker is an employee or an independent contractor. 2016-07-02 10:06 Thomas Claburn

45 Most organizations don't have an IT security expert

With cyber attacks on the rise, organizations are facing pressure to beef up their security to avoid falling victim to such an attack. However, a recent IT security report from Spiceworks shows that 80 percent of organizations were affected by at least one security incident during 2015. To compile its report, the company surveyed over 600 IT professionals from the US and UK.

Shockingly, Spiceworks discovered that few organizations have either an in-house or third-party cyber security expert on call. According to the survey, only 29 percent of organizations have such an expert working in their IT department, and 23 percent contract outside experts to handle security situations. However, 55 percent of the organizations surveyed said that they do not have regular access to in-house or third-party IT security experts.

Spiceworks also found that the number of IT professionals with security certifications is quite low. The company polled over 1,000 IT pros about their cyber security credentials, and 67 percent admitted to not having any security certification at all. Seventeen percent, however, hold the basic CompTIA Security+ certification, which many believe is essential for job interviews and which their organizations typically did not pay for.

The report made it clear that cyber security is, and should be, a priority for modern businesses. According to the IT pros surveyed, 73 percent say it is a priority for the CIO and senior IT leaders, and over half say that it is up to the CTO and CEO to prioritize security. Despite the rise of cyber attacks, few organizations see investing in IT training as worthwhile; only 18 percent of employers actively encourage their employees to pursue more training. If organizations expect to flourish in today's world, closing the cyber security skills gap is a must, and more importance -- and more funding -- needs to be given to IT security training.

Published under license from ITProPortal.com, a Net Communities Ltd Publication. All rights reserved. 2016-07-02 07:53 By Anthony

46 Symantec patches over twenty products after Google discovers zero-day flaws

These days, users can be lulled into a false sense of security once they have installed an antivirus or anti-malware solution on their computers. While a number of vendors, including Symantec, release daily updates to malware definitions, brand new exploits can still be leveraged until they are discovered and indexed. Unfortunately, like any other software installed on a computer, anti-malware solutions can provide an additional vector for malicious parties to compromise a system.

Google Project Zero security researcher Tavis Ormandy recently discovered "multiple critical vulnerabilities," in addition to "many wormable remote code execution flaws," in a wide range of Symantec software. In his update on the Project Zero website, Ormandy explained that one of the critical vulnerabilities targeted a bug in the unpacker used by the core scan engine found in all Symantec and Norton branded products. While Symantec's unpacker contained code derived from open source libraries, Ormandy criticized the company for not updating them "in at least 7 years." As a result, Symantec's software was affected by dozens of public vulnerabilities, some of them with published exploits.

The full list of affected software, which includes a number of consumer-oriented Norton products, can be found on the Symantec website. While the discovered flaws have already been patched and updates distributed automatically where available, it may be worth your while to manually verify that your software is up to date. The news comes two months after Symantec made a beta version of Norton Security Premium 2017 available to the public.

Source: Engadget 2016-07-02 07:50 Boyd Chan

47 Windows 10 Dominates PC Gaming, New Steam Data Shows

The 64-bit version of Windows 10 became the number one platform on Steam a few months ago, but the operating system keeps growing at an incredible pace and is on track to reach the 50 percent threshold by the end of the summer. Last month, for instance, Windows 10 improved its Steam share by 3.26 percentage points, reaching no less than 42.94 percent. Windows 7 64-bit, currently the runner-up in these charts, lost 1.64 points to drop to 30.61 percent, so the gap between the two widens with every new month. Windows 8.1 64-bit is also declining, with June 2016 data showing a drop of 1.01 points to 10.07 percent, while the 32-bit flavor of Windows 7 fell 0.41 points to 6.36 percent.

Windows 10 was the only version of Windows to increase its Steam share last month, with both the 32-bit and 64-bit flavors gaining ground. All the other versions of Windows lost ground, most likely as gamers moved to Windows 10 for the gaming improvements Microsoft implemented in this version, including DirectX 12 support.

As far as non-Windows platforms are concerned, there's still no reason for Microsoft to worry about gaming competition on the PC. Mac OS X has 3.60 percent of the Steam market, and version 10.11.5 is the only release that improved its share last month, growing 1.13 points. As for Linux, there's another small decline, this time of 0.04 points, with Ubuntu 16.04 the only version to improve, by 0.01 points. Windows continues to lead the market for PC gaming: the new Steam figures show an increase of 0.08 points, putting Microsoft's desktop OS at 95.50 percent overall. 2016-07-02 07:09 Bogdan Popa

48 Bioinformatics software developed to predict effect of cancer-associated mutations: Software analyzes 40,000 proteins per minute -- ScienceDaily

That is why the three researchers began to work on the bioinformatics tool: José Antonio Rodríguez had the biological question; Asier Fullaondo, the knowledge of bioinformatics tools and databases; and Gorka Prieto, the programming skills. Initially, these PhD holders developed a piece of software (WREGEX, available to the scientific community on the UPV/EHU's server) that can be used to predict and automatically seek out 'functional motifs', the small groups of amino acids that perform specific tasks in a protein. They tested the programme by predicting motifs that move a protein from the nucleus to the cytoplasm of a cell, the so-called 'nuclear export signals'. At the end of this research phase in 2014, a paper was published in the journal Bioinformatics.

But, as José Antonio Rodríguez pointed out, "in research the answer to one question opens the door to more questions." The question on this occasion was: which proteins could contain a functional motif altered by a cancer mutation? The team took another step and combined the sequences of all known human proteins with the COSMIC catalogue, which gathers the mutations linked to cancer. The result was a new version (WREGEX 2.0) that allows a normal protein to be compared with its mutant counterpart so as to predict 'functional motifs' that have been modified and which could be linked to cancer. "You may also have experience in how a motif functions and want to find out which proteins it could appear in and whether it appears modified in cancer. With this software you can obtain candidates to start to study," explained Gorka Prieto.

Once the bioinformatics programme had been developed, it had to be tested, and to do this the researchers carried out a nuclear export assay. They chose various candidates that could constitute a motif responsible for moving a protein out of the cell nucleus. They checked how the candidates functioned and, after modifying them according to the tumour mutations described in the COSMIC catalogue, ran the assay again. That way, they confirmed that the candidates acted as export signals, that the mutations affected the way they worked, and that the software was therefore valid.

The tool thus combines three types of information: the protein sequences, the functional motifs, and the cancer mutations. "One of the main features of WREGEX 2.0 is that it can simultaneously study highly complex proteomes with masses of proteins and combine that information, in the case of our trial, with cancer mutations; but the door is open to using other databases containing information about other types of mutations. The advantage, moreover, is that 40,000 proteins a minute can be analysed, while with other programs the analysis of a single protein took several minutes," explained Asier Fullaondo. With this software it is therefore possible to predict that an alteration in a protein may influence the development of disease, not just cancer.

So far, thirteen pieces of research have already used this computing tool, and researchers in China, Japan, Korea, Germany and the United States have accessed the server. In the meantime, the multidisciplinary team formed by the three PhD holders is already thinking about how to improve the tool. 2016-07-02 13:36 feeds.sciencedaily
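The article does not describe WREGEX's actual pattern syntax or scoring, but the core idea of scanning protein sequences for motifs with regular expressions can be illustrated with a minimal Python sketch. The pattern below is a simplified nuclear-export-signal-like consensus (hydrophobic residues separated by short spacers) chosen for illustration; it is an assumption, not the tool's real motif definition.

```python
# Illustrative sketch only: WREGEX's real patterns and scoring are not
# described in the article. This shows the general technique of regex-based
# motif scanning over a set of protein sequences.
import re

# Simplified NES-like consensus: hydrophobic residues (L/I/V/F/M) with spacers.
NES_PATTERN = re.compile(r"[LIVFM].{2,3}[LIVFM].{2,3}[LIVFM].[LIVFM]")

def find_motifs(proteins):
    """Yield (protein_id, start_position, matched_fragment) for each hit."""
    for protein_id, sequence in proteins.items():
        for match in NES_PATTERN.finditer(sequence):
            yield protein_id, match.start(), match.group()

# Toy input; a real run would scan a whole proteome (tens of thousands of
# sequences), then cross-reference hits against a mutation catalogue.
proteome = {"PROT1": "MKLSEQLDELSIRLTQLM", "PROT2": "MGGKKRRAS"}
for hit in find_motifs(proteome):
    print(hit)  # e.g. ('PROT1', 2, 'LSEQLDELSI')
```

WREGEX 2.0's extra step, per the article, is to run this kind of scan on both the normal and the COSMIC-mutated version of each protein and report motifs that appear, disappear, or change between the two.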

49 Internet attacks: Finding better intrusion detection -- ScienceDaily

Boldly trying massive numbers of user names and passwords until you hit the right combination: that is an example of a 'brute force' Internet attack. Once an attacker has gained access to a user's computer, the machine can in turn be used for spreading illegal content or for performing a DDoS attack. Without knowing it, users turn into attackers this way. This type of attack takes place via web applications that are relatively vulnerable, such as WordPress or Joomla, but also via the Secure Shell (SSH), which enables remote login to a device.

Checking the contents of incoming data and analyzing network traffic and log files on every single computer: that is the classic approach. According to Rick Hofstede, it implies analyzing a vast amount of data about attacks that never have any effect. Within the network of a larger organization, with perhaps tens of thousands of computers, smartphones and tablets connected, it soon becomes impossible to check every device. Hofstede therefore chooses a 'flow based' approach: he looks at the data flow from a higher level and detects patterns, just as you can recognize advertising mailings without actually checking the content of the brochures. A major advantage is that this detection can take place at a central spot, such as a router taking care of the traffic. Even if the number of devices connected to this router grows rapidly -- and that will undoubtedly happen with the introduction of 5G and the Internet of Things -- the detection can be scaled up easily. By zooming in on attacks that have an effect, i.e. those that lead to a 'compromise' and require action, Hofstede narrows his analysis further. Multiple attacks from the same sender can also be recognized this way.

Hofstede did not just test his methodology in the lab; he made his 'SSHCure' software available open source to the Computer Emergency Response Teams of several organisations. His method proves effective and diminishes the number of incidents, with detection accuracies of up to 100 percent, depending on the actual application and the type of network. Future, more powerful routers will be able to perform the detection themselves, without the need for extra equipment, Hofstede expects. 2016-07-02 13:36 feeds.sciencedaily
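The article does not spell out SSHCure's detection logic, but a toy version of flow-based brute-force detection can illustrate the idea: instead of inspecting packet contents, look only at flow records (who talked to whom, over which port, with how many packets). An SSH brute-force attempt shows up as many short, similar flows from one source to port 22. The record layout and thresholds below are assumptions for illustration, not SSHCure's actual parameters.

```python
# Illustrative sketch only: SSHCure's real algorithm is more elaborate and
# also distinguishes attack phases (scan, brute force, compromise).
from collections import defaultdict

# Assumed minimal flow record: (src_ip, dst_ip, dst_port, packets)
flows = [
    ("203.0.113.9", "192.0.2.10", 22, 14),
    ("203.0.113.9", "192.0.2.10", 22, 13),
    ("203.0.113.9", "192.0.2.10", 22, 14),
    ("198.51.100.4", "192.0.2.10", 22, 2500),  # a legitimate long session
]

MIN_ATTEMPTS = 3            # assumed threshold for flagging a source
MAX_PACKETS_PER_FLOW = 30   # failed login attempts produce small flows

def detect_bruteforce(flow_records):
    """Flag (src, dst) pairs with many small, similar SSH flows."""
    counts = defaultdict(int)
    for src, dst, port, packets in flow_records:
        if port == 22 and packets <= MAX_PACKETS_PER_FLOW:
            counts[(src, dst)] += 1
    return [pair for pair, n in counts.items() if n >= MIN_ATTEMPTS]

print(detect_bruteforce(flows))  # -> [('203.0.113.9', '192.0.2.10')]
```

Because such counting needs only flow summaries, it can run centrally on a router's flow export rather than on every end device, which is what makes the approach scale as the number of connected devices grows.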

Total 49 articles. Created at 2016-07-02 18:00