Articles

51 articles, 2016-07-03 00:01

Samsung to Unveil Galaxy S7 Edge Olympic Edition on July 7 (1.02/2)

Images of Samsung’s Galaxy S7 Edge Olympic Edition leaked online last month, revealing the design of the phone, to go along with rumors on its specs. It was known that Samsung intends to unveil the smartphone well before the Rio 2016 Olympic Games, which start on August 5 and end on August 21.

Based on those leaks, the Olympic Edition should have a 5.5-inch Super AMOLED display and a Snapdragon 820 or Exynos 8890 chipset, depending on the region. The rear camera is said to be 12MP, with a 5MP camera in front, and the battery capacity 3,600mAh.

It is also said that the phone will come with some custom wallpapers and possibly a theme related to the Olympics Games. Samsung could also offer special e-coupons and discounts.

Details on price and availability aren’t available yet, but the smartphone is expected to be more widely available, unlike some of Samsung’s other special editions. Either way, Samsung will be providing many more details during next week’s unveiling. 2016-07-02 14:28 Alexandra Vaidos

Windows 10 Anniversary Update makes great strides for accessibility (1.02/2)

Accessibility options are not a new feature for Windows, but the upcoming Windows 10 Anniversary Update includes even more than before. This week it was confirmed that the update will launch on August 2, just days after the free upgrade period ends (although it's worth noting that people with accessibility needs will still qualify for a free upgrade after this date). If you've been testing out the Insider builds of Windows 10, you may well have noticed accessibility improvements, but now it is only a matter of weeks before they are made available to everyone. In a blog post rounding up what's been added over the last year, Microsoft also reveals the latest additions.

The blog post serves as a handy reminder of just how much has been done to make Windows 10 as accessible as possible. But development is not stopping. Microsoft has added a new AutoSuggest feature, which provides verbal cues about the suggestions that appear as searches are conducted.

Windows 10 Anniversary Update is not just about making Windows itself more accessible. Microsoft has also introduced a number of features and tools for developers that make it easier to integrate such options into apps.

Here are the full details of Microsoft's accessibility advancements:

Microsoft also says that accessibility improvements have been made to Edge, Mail, Cortana and Groove.

Photo credit: Stanislaw Mikulski / Shutterstock 2016-07-02 13:07 By Mark

Tesla is being investigated in wake of first autopilot-related death (1.02/2)

The National Highway Traffic Safety Administration (NHTSA) has launched a “preliminary evaluation” into an accident in which a 40-year-old man was killed while his Tesla Model S was in autopilot mode.

This is the first known fatality in more than 130 million miles of driving with autopilot activated, Tesla said.

According to the Levy Journal, the accident took place on May 7 in Williston, Florida. The victim, identified as Joshua Brown, was reportedly an active member of the Tesla subreddit. Roughly two months ago, a video of his Model S autopilot avoiding a crash went viral and was even tweeted by Tesla CEO Elon Musk.

In a statement on its website, Tesla said Brown had a loving family and was a friend of both Tesla and the broader EV community.

According to Tesla, the vehicle was traveling on a divided highway with autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky. As such, the brakes were never applied.

The car essentially drove “under” the gap of the trailer, making contact with one of its weakest points: the windshield. Had the accident involved the front or the rear of the trailer, Tesla said, its crash safety system would likely have prevented serious injury.

Tesla extended its deepest sympathies to Brown's friends and family. 2016-07-02 19:37 Shawn Knight

HTC Nexus Marlin Spotted on Geekbench (1.00/2)

HTC Nexus Marlin was spotted on Geekbench, according to leakster Roland Quandt. The post reveals some specs of the upcoming smartphone, mainly that it will be powered by a quad-core 1.6GHz processor and 4GB of RAM. The new Nexus will also come with Android Nougat 7.0 out of the box, as Google is expected to release the new Android version and the Nexus smartphones at about the same time. Last year, Google unveiled the Nexus 5X and Nexus 6P in September and the devices went on sale a month later. This would indicate that the company might keep last year’s scheduling and unveil the Nexus phones in the same month this year, but that’s just speculation at this point. 2016-07-02 15:02 Alexandra Vaidos

IBM to Buy EZSource to Help Developers Modernize Mainframe Apps (0.01/2)

IBM will acquire EZSource to help customers modernize mainframe apps for migration to hybrid cloud as part of digital transformation efforts.

The mainframe is not dead, and IBM is doing its part to ensure that big iron will not be going anywhere for some time.

Big Blue announced on June 1 its intent to acquire EZ Legacy (EZSource), an Israel-based application discovery company, to help developers easily understand and change mainframe code based on data displayed on a dashboard and other visualizations. The acquisition is expected to close in the second quarter of 2016. Financial terms of the deal were not released.

The ability to quickly evaluate existing enterprise applications and modify them represents a critical piece of the digital transformation puzzle. IBM is intent on helping its customers transform their organizations for the digital age by gaining value out of their mainframe assets. The company will integrate its expertise in hybrid cloud, the API economy and DevOps with EZSource's mainframe application modernization technology.

EZSource provides a visual dashboard that shows developers which applications have changed to ease the process of modernizing applications, exposing APIs and opening up development resources.

IBM's decision to acquire its long-term partner EZSource is largely driven by the fact that the digital transformation and API economy is estimated to be a $3.75 billion market; to capture some of this share, companies must first understand and modify legacy mainframe software to put it at the center of their digital enterprise, IBM said in a post by Mary Hall, an IBM marketing and social media staffer, on the IBM Mainframe Insights blog.

"The mainframe is the backbone of today's businesses," said Ross A. Mauri, general manager of IBM z Systems, in a statement. "As clients drive their digital transformation, they are seeking the innovation and business value from new applications while leveraging their existing assets and processes. "

Combining EZSource's offerings with IBM's will obviate the need for developers with specialized skills handling processes that previously were manually intensive, Mauri noted.

IBM's API management solutions, including z/OS Connect and IBM API Connect, integrated with EZSource's technology will help connect services from core mainframe applications to new mobile, cloud, social and cognitive workloads, IBM said.

"While they have always been highly exaggerated, rumors of the IBM mainframe's death continue to circulate," said Charles King, principal analyst at Pund-IT. "The platform's notable longevity and success are due to numerous factors but first and foremost has been IBM's efforts to continually evolve mainframe technologies and make them relevant for new business processes and use cases. "

King said this deal should add a few more years to the mainframe's "remarkable" life.

Meanwhile, IBM's DevOps offerings, such as IBM Application Delivery Foundation for z Systems and IBM Rational Team Concert, will combine with the EZSource software to help developers migrate legacy mainframe apps faster.

IBM said EZSource provides developers with information about which sections of code access a particular entity, such as a database table, so they can easily check them to see if updates are needed. Without the advanced analytics in the EZSource solution, developers would need to manually check thousands or millions of lines of code to find the ones that need to be changed.

EZSource delivers three key products:

-- EZSource:Analyze, which provides a graphical visualization of application discovery and understanding for developers and architects;

-- EZSource:Dashboard, which offers multiple categories of application metrics for managers and executives; and

-- EZSource:Server, which integrates with third-party source code management, workload automation and CMDB tooling systems to provide application-to-infrastructure mapping.

"The subtext of IBM's purchase of EZSource is the critical importance of reconciling and integrating new mobile and social apps with traditional backend 'systems of record'— particularly the IBM mainframes residing in major banks and financial organizations that power 30 billion business transactions every day," King said.

Supporting and streamlining the integration process is crucial for IBM and its customers since failure could cripple emerging processes, like smartphone "pay" apps, he added.

Meanwhile, enabling a newer generation of developers to support the mainframe has been Compuware's mission for the last several years. Tools that provide deep application understanding via visualization enable both mainframe and non-mainframe developers to manipulate mainframe data and implement code changes, faster and with fewer mistakes, Compuware said.

"As businesses increasingly compete via digital means and the mainframe serves as a back-end server for mobile and web front ends, development teams must keep pace with the requirements of modern application delivery," Chris O'Malley CEO of Compuware, told eWEEK. Compuware's Topaz product has offered visualization tools since January 2015, he said. 2016-07-02 19:37 Darryl K

Powerful tool predicts wave behavior at all depths of sea -- ScienceDaily (0.01/2)

The waves we see at the surface, on the open sea or at the coast, consist of numerous other waves at a range of depths. The deepest ocean waves have long wavelengths and move at high speed, while the waves we see at the surface are short waves that move more slowly and differ from the deep-sea waves in shape and height.

Joint action

It is complicated to capture all these changes in mathematical models, so some kind of approximation is often chosen. This holds, for example, for dispersion: the relationship between wave length and wave speed. Kurnia does not use an approximation but the exact relationship. Nor does he choose a numerical approach that uses strongly simplified equations over a series of time steps. Instead, he wrote an accurate description of the combined action of the waves at different depths, using the kinetic energy.

Fast calculation

Thanks to this, the model is applicable to any water depth. Furthermore, Kurnia is able to introduce abrupt changes: a quay, a sloping coastline, a ship. Despite the added complexity, the models can be calculated very fast -- in minutes instead of days -- by using the so-called Fast Fourier Transform, which decomposes any mathematical description into a sum of sine waves.
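As a minimal illustration of the approach (this is not HAWASSI code; the domain, depth and initial wave are invented), linear wave evolution with the exact finite-depth dispersion relation, omega^2 = g k tanh(k h), can be written so that one evaluation of the surface costs just two FFTs:

```python
# Minimal sketch -- not HAWASSI; domain, depth and initial hump invented.
# Each Fourier mode advances by its own exact phase, so one evaluation
# of the surface costs just two FFTs.
import numpy as np

g = 9.81      # gravity, m/s^2
h = 20.0      # water depth, m (the relation below is exact at any depth)
L = 1000.0    # periodic domain length, m
N = 1024      # grid points

x = np.linspace(0.0, L, N, endpoint=False)
eta0 = np.exp(-((x - L / 2.0) ** 2) / (2.0 * 25.0 ** 2))  # initial hump

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)  # wavenumbers, rad/m
# Exact dispersion relation omega^2 = g*k*tanh(k*h), signed so that
# every component travels to the right.
omega = np.sign(k) * np.sqrt(g * np.abs(k) * np.tanh(np.abs(k) * h))

def surface_at(t):
    """Surface elevation at time t under linear wave theory."""
    return np.real(np.fft.ifft(np.fft.fft(eta0) * np.exp(-1j * omega * t)))

eta = surface_at(60.0)  # the wave field one minute later
```

Because each sine component is advanced independently by a phase factor, no small time steps are needed, which is what makes minutes-instead-of-days runtimes plausible.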

Kurnia's model calculations have already been compared with the many experiment in 'wave tanks' of the Technical University of Delft, MARIN in The Netherlands and the Indonesion Hydrodynamic Laboratory. The models are also very useful to make precalculations of the desired wave in the thank, thus reducing the expensive experimenting hours. Via LabMath Indonesia, Kurnia's software is available named HAWASSI: Hamiltonian Wave- Ship-Structure Interaction.

Ruddy Kurnia (Bandung, 1987) did his PhD research in de Applied Analysis group (faculty of Electrical Engineering, Mathematics and Computer Science EEMCS). His supervisor is Professor Brenny van Groesen. The research had financial support of the Dutch Technology Foundation STW. Kurnia continues working on the models, partly as a postdoc researcher in Twente, partly in his home country Indonesia 2016-07-02 19:37 feeds.sciencedaily

Robots get creative to cut through clutter: Algorithm balances 'pick and place' with 'push and shove' -- ScienceDaily

The software not only helped a robot deal efficiently with clutter, it surprisingly revealed the robot's creativity in solving problems. "It was exploiting sort of superhuman capabilities," Siddhartha Srinivasa, associate professor of robotics, said of his lab's two-armed mobile robot, the Home Exploring Robot Butler, or HERB. "The robot's wrist has a 270-degree range, which led to behaviors we didn't expect. Sometimes, we're blinded by our own anthropomorphism. "

In one case, the robot used the crook of its arm to cradle an object to be moved.

"We never taught it that," Srinivasa added.

The rearrangement planner software was developed in Srinivasa's lab by Jennifer King, a Ph.D. student in robotics, and Marco Cognetti, a Ph.D. student at Sapienza University of Rome who spent six months in Srinivasa's lab. They will present their findings May 19 at the IEEE International Conference on Robotics and Automation in Stockholm, Sweden.

In addition to HERB, the software was tested on NASA's KRex robot, which is being designed to traverse the lunar surface. While HERB focused on clutter typical of a home, KRex used the software to find traversable paths across an obstacle-filled landscape while pushing an object.

Robots are adept at "pick-and-place" (P&P) processes, picking up an object in a specified place and putting it down at another specified place. Srinivasa said this has great applications in places where clutter isn't a problem, such as factory production lines. But that's not what robots encounter when they land on distant planets or when "helpmate" robots eventually arrive in people's homes.

P&P simply doesn't scale up in a world full of clutter. When a person reaches for a milk carton in a refrigerator, he doesn't necessarily move every other item out of the way. Rather, a person might move an item or two, while shoving others out of the way as the carton is pulled out.

The rearrangement planner automatically finds a balance between the two strategies, Srinivasa said, based on the robot's progress on its task. The robot is programmed to understand the basic physics of its world, so it has some idea of what can be pushed, lifted or stepped on. And it can be taught to pay attention to items that might be valuable or delicate, in case it must extricate a bull from a china shop.

One limitation of this system is that once the robot has evaluated a situation and developed a plan to move an object, it effectively closes its eyes to execute the plan. Work is underway to provide tactile and other feedback that can alert the robot to changes and miscalculations and can help it make corrections when necessary.

NASA, the National Science Foundation, Toyota Motor Engineering and Manufacturing and the Office of Naval Research supported this research. 2016-07-03 00:01 feeds.sciencedaily

Security Think Tank: Practical biometrics in the enterprise

When it comes to biometrics in enterprises, two seemingly contradictory things are true. First, biometrics have been around for a long time; and second, adoption has been lower than many expected given that long history.

This could be changing, though. The integration of biometrics into consumer technologies (for example, smartphones), coupled with the steady decline in cost and the increase in sophistication of readers, could help biometrics gain traction.

Time will tell if this is the case or not, but in the interim, biometrics can be a valuable tool in an enterprise’s security toolbox. Judiciously applied, biometrics can help satisfy certain security goals.

It’s helpful for enterprises to have an understanding of the possible advantages (and, yes, the disadvantages) of biometrics from a security perspective as they consider possible usage.

Let’s take a look at some of these as well as how (as a practical matter) security pros might incorporate biometrics into their overall security programme.

It is important that enterprises first understand that biometrics change both the security properties and the threat model when they are used. There are a few ways in which this happens. 2016-07-02 19:37 Ed Moyle

IBM Launches NYC Bluemix Garage With Former Azure Exec

Led by a former Microsoft executive, IBM opened a new Bluemix Garage in New York City to focus on blockchain, fintech and financial services.

Shawn Murray was perfectly happy as senior director of Azure digital sales at Microsoft, where he had been for 18 years, continually moving up the ladder.

Then Murray got a glimpse of what IBM was doing with its Bluemix platform and its outreach to developers, and he decided to make a change. Murray joined IBM as worldwide director of Bluemix and Blockchain Garages.

Speaking with eWEEK this week about the launch of IBM's latest Bluemix Garage in New York City, Murray said it was IBM's focus on design in addition to its cloud and developer focus that won him over. Steve Robinson, general manager of IBM Cloud Platform Services, who had a key role in establishing the IBM Bluemix Garages, helped recruit Murray away from Microsoft—where he'd spent the last seven years leading Azure sales both in the enterprise and the ISV spaces.

"I was pretty happy with my role at Microsoft, but once he told me about the garages and I started digging into what they do here, I knew this was the place for me," Murray said in an interview. "Because what they've done at IBM is truly magical. "

Murray said IBM has combined the technical capabilities and the roles of the developers and the architects with designers who have Ph.D.s in psychology and design thinking, and it has built this entire method around how to build apps in an innovative way.

"Microsoft just didn't have that," he said. "They could help you build an app. But IBM's difference is that whole process and the design thinking. "

The design element is what made a difference for Murray. For example, one of the Bluemix Garage engagements Murray sat in on involved a small startup out of San Francisco that had a complete idea and knew exactly what it wanted to build. IBM had the company come to the garage for a design thinking workshop to help it visualize what it was trying to solve and what experience it wanted its end users to have. After the design workshop, the startup abandoned the idea it initially had, because it realized that what it was trying to build wasn't really addressing the problem it was trying to solve.

"That's the differentiator for us," Murray said. "We have the combination of designers who can think through challenges. Not just visual designers, but experience designers, and we bundle that with architects and the developer assets that we have. So for me personally, it was just a perfect fit. "

IBM is indeed serious about design. In April, Big Blue established a new Distinguished Designer program and placed it on the level of the company's 2-decade-old Distinguished Engineer program. IBM recognizes design as a technical craft that is as critical as engineering to the long-term success of the company and a key driver of value for its customers, said Fahad Osmani, talent director for IBM Design.

IBM Design is three years into its mission of driving a culture of design within the company. The company has built what it claims is the world's largest design team, with 1,250 designers and 29 design studios around the world. Designers work on multidisciplinary teams on IBM products; digital engagement platforms for customers via the company's digital agency, IBM Interactive Experience (IBM iX); and branding and marketing initiatives. 2016-07-02 19:37 Darryl K

SmartBear Collaborator 10 Enables Collaboration Across Dev Teams

SmartBear Collaborator 10.0 improves code collaboration with integrations for Microsoft Visual Studio, Microsoft Word and IBM Rational Team Concert. SmartBear Software, a provider of software quality tools for developers, announced the release of a major revision of its developer collaboration tool, Collaborator 10.0.

SmartBear's Collaborator enables development, testing and management teams to work better together to produce high quality code, applications and services. The tool enables users to review user stories and requirements, review code, communicate within and across teams, and deliver quality software all from one interface.

The new release introduces Community, Team and Enterprise editions to better serve software development teams of all sizes. In addition, SmartBear Collaborator provides integration with Microsoft Visual Studio, IBM Rational Team Concert and the ability to review Microsoft Word and Adobe PDF files.

"This major release of Collaborator 10.0 delivers new functionality including full integration with Visual Studio, which makes review creation and participation very easy," said Justin Collier, product owner for Collaborator at SmartBear, in a statement. "This leads to higher review participation across development teams and ultimately more reliable and higher quality products. "

Collier noted that the new Team version of Collaborator extends the product to organizations that do not require all of the functionality of Collaborator's more comprehensive Enterprise edition. The Enterprise edition is aimed at organizations that require scalable and customizable code and documentation team review, he said.

Collaborator also integrates with a host of other version control systems and integrated development environments (IDEs), including Git, Subversion, Microsoft Team Foundation Server, Perforce, GitHub and Eclipse, and bug tracking systems such as Jira and Bugzilla.

SmartBear is demonstrating SmartBear Collaborator 10.0 at the Visual Studio Live conference this week in Cambridge, Mass.

Meanwhile, SmartBear also released the results of a recent survey on the benefits of doing code reviews. In the survey of more than 600 developers, testers and IT operations professionals, 90 percent said the biggest benefit of code reviews is improved software quality. Seventy-two percent said their biggest benefit was sharing knowledge across teams, and 59 percent said enhanced maintainability of code was big for them.

Last month, SmartBear announced it had acquired CrossBrowserTesting, an automated cloud testing platform. The acquisition enabled CrossBrowserTesting to further accelerate and scale its Web and mobile cloud testing solution using SmartBear's global resources. Financial terms of the deal were not disclosed.

At the time of the acquisition, SmartBear said CrossBrowserTesting had more than 200,000 users, with more than 5 million tests run to date. CrossBrowserTesting provides a cloud testing environment with more than 1,500 mobile and desktop browsers in more than 65 operating systems, including iOS, Android and Windows.

"CrossBrowserTesting offers customers an easy way to use a cloud service for testing applications written for browsers and real mobile devices," said Doug McNary, CEO of SmartBear, in a statement after the acquisition. "It has continued to win the trust of customers by building a reliable, affordable and easy-to-use automated testing cloud platform. "

McNary said SmartBear intends to maintain CrossBrowserTesting as a standalone service, while providing the resources and investments necessary to help CrossBrowserTesting build up its business operating as a standalone entity inside SmartBear. 2016-07-02 19:37 Darryl K

A Platform for All Developers

Microsoft continues to enhance its .NET platform to support any developer writing any application on any platform. To that end, the software giant recently held dotnetConf, a virtual conference focused on .NET and where the general-purpose platform is going. .NET has several key features that are attractive to many developers, including automatic memory management and modern programming languages, that make it easier to build high-quality apps more efficiently. During dotnetConf, Microsoft announced that .NET Core 1.0 will be released to manufacturing (RTM) on June 27. .NET Core is a cross-platform implementation of .NET that runs on Windows, with ports in progress for Linux, OS X and FreeBSD. Also during the dotnetConf event, Xamarin announced a new stable release of the Xamarin Platform, which co-founder and CTO Miguel de Icaza said features the biggest and best release of Xamarin Studio yet. It has a type system that is now powered by Roslyn, Microsoft's open-source .NET compiler platform. This eWEEK slide show takes a look at some of the things Microsoft presented and where .NET is headed. 2016-07-02 19:37 Darryl K

IBM Enhances Support for the Swift Programming Language

At WWDC, IBM extended its already-considerable support for the Swift programming language, particularly for using Swift for server-side development.

Apple's Swift programming language continues to gain popularity among developers and IBM, as a key Apple partner, is putting its considerable might behind the technology.

This week at Apple's Worldwide Developer Conference (WWDC), IBM announced new tooling and support for Swift, along with updates on the uptick in momentum Swift has seen at IBM and its developer community. IBM has been creating mobile applications for its MobileFirst for iOS platform using Swift, but the company also is making strides in extending Swift for server-side development.

"From IBM's perspective , Swift on the server is already a global phenomenon," John Ponzo, an IBM fellow and vice president and CTO for IBM MobileFirst, wrote in a blog post. "This month, the number of code runs in the popular IBM Swift Sandbox topped 1.5 million. "

If you are not familiar with the Sandbox, it's a cloud environment IBM made public last December with the Swift.org launch, Ponzo said. At the time, IBM announced it would be participating in the new project to help extend Swift to the server and the company used its sandbox to test its code and shared access with others.

"This enabled developers, regardless of OS, who were interested in server- side Swift to give it a try without needing to stand up their own server," Ponzo said.

At last year's WWDC, Apple announced plans to open-source Swift and delivered it to the community last December. This week, the Swift.org community launched the first preview of Swift 3.0.

Calling Swift "a game changer for enterprises," Phil Buckellew, vice president of Enterprise Mobile for the IBM Software Group, said IBM is the first cloud provider to enable the development of applications in native Swift.

"IBM has experienced the benefits of Swift on the cloud first-hand, and we are one of the largest digital agencies using Swift today with more than 100 enterprise apps developed in the language," Buckellew said in a blog post .

Adding to its potent support for Swift, IBM offered up two new capabilities. One is IBM Cloud Tools for Swift.

IBM Cloud Tools for Swift, a free app also known as ICT, provides Mac users with a simple interface for deploying, managing and monitoring end-to-end Swift applications, Brian White Eagle, an offering manager in the Mobile Innovation Lab, said in a blog post.

"The application integrates with tools designed by IBM Swift engineers to easily get started writing Swift on the server," White Eagle said in his post, which is a step-by-step guide for getting started with ICT.

IBM Cloud Tools for Swift simplifies the management and deployment of server-side assets, he said. It is a Mac application that enables developers to group client-side and server-side code written in Swift, deploy the server-side code to IBM's Bluemix cloud platform and then manage projects using ICT.

Buckellew explained that for some Swift developers the key to productivity is working in the Xcode environment on a Mac. ICT simplifies the management and deployment of server-side assets in an environment complementary to Xcode.

"The developer experience is important to us, and we think developing Swift apps on the cloud should be simple and fast," he noted. IBM also announced Swift on LinuxONE , IBM's Linux-based mainframe servers. Developers are now able to use Swift on LinuxONE , Buckellew said.

"The safety, speed and expressiveness of Swift are now available with a level of performance and scale unmatched by any previous platform," he noted. "Having Swift on LinuxONE allows developers to do fit-for-purpose placement of workloads that need access to data in a high-performing, secure, reliable and scalable environment. "

Also, the IBM Swift Sandbox is now enabled with a beta driver of Swift on LinuxONE.

IBM introduced its Kitura Web Framework as an open-source technology in February at its InterConnect 2016 conference. Kitura enables the development of back-end portions of applications for Swift. Written in Swift, Kitura enables both mobile front-end and back-end portions of an application to be written in the same language, simplifying modern application development.

Buckellew cited the example of City Furniture, an IBM customer that used Swift for both client-side and server-side development. The furniture retailer created a mobile solution in just six weeks that enabled the company to transform clearance merchandise from a cost-recovery to a profitable product segment, he said.

"City Furniture recreated 90 percent of the functionality of their previous API with IBM's Swift server-side development packages using Kitura in a fraction of the time," Buckellew said. Meanwhile, for its part, Apple this week announced Swift Playgrounds , a new app for the iPad that is designed to make learning to code in Swift easy and fun for beginners. Apple delivered a preview release of Swift Playgrounds at WWDC as part of the iOS 10 developer preview and it will be available with the iOS 10 public beta in July. The final version of Swift Playgrounds will be available in the App Store for free this fall. 2016-07-02 19:37 Darryl K

Google Project Bloks: Tangible Programming For Kids

Google Research is working on a new initiative to introduce kids to computing in an entirely hands-on, physical way. A prototype has been produced to show how the tangible programming approach combines the way children innately play and learn with computational thinking.

As explained in this introductory video, Project Bloks is a research project with the aim of creating an open hardware platform to help developers, designers, and researchers build the next generation of tangible programming experiences for kids.

The project is a collaboration between Google Creative Lab, design consulting firm IDEO and Paulo Blikstein, Assistant Professor of Education at Stanford University.

In the video Blikstein refers to the long history of tangible programming stretching back to Seymour Papert in the 1970s. The Google Research blog post announcing Project Bloks goes further back and says that it: is preceded and shaped by a long history of educational theory and research in the area of hands-on learning. From Friedrich Froebel, Maria Montessori and Jean Piaget’s pioneering work in the area of learning by experience, exploration and manipulation, to the research started in the 1970s by Seymour Papert and Radia Perlman with LOGO and TORTIS.

Bloks is intended to make coding a fun activity for young children by putting it in the context of collaborative play and introducing interactivity with the real world, for example switching light bulbs on and off. Unlike Scratch Blocks or Blockly, where the drag-and-drop blocks are snippets of software, in this case they are physical control modules, called Pucks, that provide signals to go, stop, turn on and off, and so on.

The main control interface, the Brain Board, is built on a Raspberry Pi Zero module. It is the communication interface with the other components as well as providing power, Wi-Fi and Bluetooth connectivity. The Base Boards are connectible blocks onto which Pucks can be placed. They are modular and can be connected in sequence and in different orientations to create different programming flows and experiences. They also provide both haptic and LED feedback to the user when a control is activated and can send audio feedback to the Brain Board.

When a Puck is placed onto a Base Board it is then connected directly, or via another base board, to the Brain Board and sends that specific command back to the software.

Pucks are what make the Project Bloks system so versatile. They help bring the infinite flexibility of software programming commands to tangible programming experiences. Pucks can be programmed with different instructions, such as 'turn on or off', 'move left' or 'jump'. They can also take the shape of many different interactive forms—like switches, dials or buttons. With no active electronic components, they're also incredibly cheap and easy to make. At a minimum, all you'd need to make a puck is a piece of paper and some conductive ink.

Development on the project began in 2013, and it's being unveiled now so that Google can start gauging developer interest and finding partners who want to use the platform to build toys and educational products with it.

To show how designers, developers, and researchers might make use of the system, the Project Bloks team has created a reference device, called the Coding Kit. This lets kids learn basic concepts of programming by allowing them to put code bricks together to create a set of instructions that can be sent to control connected toys and devices - including the drawing robot shown in this video:

As this video shows, the motivation for Project Bloks is educational. The team's position paper, Project Bloks: designing a development platform for tangible programming for children concludes:

Research and design for children is our passion. Designing the Project Bloks system was, above all, an exercise to demonstrate how much children can accomplish with the right tools, how much they can learn when they are not told what to do, and how much reward exploration can bring them.

The vision of Seymour Papert 50 years ago was a powerful one: children will program the computer. It won’t be the other way around.

Project Bloks: designing a development platform for tangible programming for children (pdf) Paulo Blikstein (Stanford University), Arnan Sipitakiat (Chiang Mai University, Thailand), Jayme Goldstein (Google), João Wilbert (Google), Maggie Johnson (Google), Steve Vranakis (Google), Zebedee Pedersen (Google), Will Carey (IDEO).

2016-07-02 19:37 Written by

Debian Edu 8 Operating System Lands as a Complete Linux Solution for Your School

Based on the latest Debian GNU/Linux 8.5 "Jessie" release, Debian Edu 8 is now available for any educational institution, such as schools or universities, that wants to ditch the bloated and expensive Microsoft Windows for a fresh, Linux kernel-based OS that offers the freedom to fully customize the installation, as well as the unbeatable stability of Debian.

Debian Edu 8 is for you if you are a network administrator and you want to install the best possible Debian-based operating system on every computer on a school network, without having to upgrade the licenses of Microsoft Windows or buy new and expensive hardware components just because one OS doesn't work on them. Not to mention the fact that Debian Edu comes with numerous educational apps. "Do you have to administrate a computer lab or a whole school network? Would you like to install servers, workstations and laptops which will then work together? Do you want the stability of Debian with network services already preconfigured? Do you wish to have a web-based tool to manage systems and several hundred or even more user accounts? Have you asked yourself if and how older computers could be used? " reads today's announcement.

Release highlights of Debian Edu 8 include the availability of MATE 1.8 as an optional desktop environment for those who want a user interface that's modern, fully customizable and light at the same time, support for automatically installing firmware for various hardware components when the OS is installed via network boot, and the addition of a Dutch translation of the manual, along with a complete Norwegian Bokmål translation.

Debian Edu / Skolelinux 8 is available for download right now via our website, where you'll find a multi-arch ISO image that can be used for network booting, as well as an extended, much bigger ISO image that includes more software. Please note that if you already have the Beta version of Debian Edu 8 installed on your computer(s), you can upgrade now to the final release. 2016-07-02 14:39 Marius Nestor

The Clearest Images of the Samsung Galaxy Note 7 Leak Online

The images were revealed by leakster Evan Blass and show the front and back of the Galaxy Note 7, together with color variants. The leaked photos show that the Note 7 will come in three color variants: Black Onyx, Silver Titanium and Blue Coral.

Last week, Evan Blass revealed that the name of the upcoming Galaxy Note will indeed carry the 7 moniker, as rumors had suggested. It seems that Samsung wishes to bring the Note series in line with the Galaxy S series.

The leaked images also reveal that the Samsung Galaxy Note 7 will have the Samsung logo at the top of the front panel, while the design does seem to resemble that of the Galaxy S series. In addition, the location of the S Pen hasn’t changed but it remains to be seen if Samsung will make any adjustments in that department. 2016-07-02 13:45 Alexandra Vaidos

Pitivi 0.96 Video Editor Promises Fast and Accurate Editing for Any Video Format

Pitivi 0.96 arrives with the usual bug fixes and code cleanup work, but it also introduces a new feature, something the Pitivi developers like to call "proxy editing," which promises fast and accurate editing with any video file format. "To provide the best experience, we decided to give you the ability to seamlessly work with media formats suitable for video editing," explained the devs. "Now you can edit and render a video project with any video files with great accuracy, thanks to proxy files."

According to the Pitivi developers, many popular video editors make use of downscaled proxy files to offer improved reliability and better video editing performance on computers that can't handle high-quality videos in real time. Unfortunately, Pitivi is not there yet.

The proxy editing functionality added in Pitivi 0.96 is just the first step in that direction, but rest assured that it will be there, as the developers promise to bring full support for downscaled proxy files sometime after the release of Pitivi 1.0. After all, Pitivi 0.96 is only a development release.

Along with the implementation of proxy editing, the Pitivi 0.96 release brings support for automatic generation of filmstrip thumbnails and audio waveforms during the processing of proxies, which currently are only high-quality proxies designed to be used as an alternative to the original.

The Pitivi developers also inform the community that Pitivi 0.96 currently supports the Matroska (MKV), QuickTime (MOV), Ogg, and WebM containers, the H.264, RAW, VP8, and Theora video codecs, as well as the FLAC, MP3, Opus, Vorbis, and RAW audio codecs without the use of proxies. For any other format, you need to use proxy files.

To see how the proxy editing works, we recommend reading the release announcement in full. In the meantime, you can download Pitivi 0.96 right now via our website, but keep in mind that this is only the source archive that you'll need to compile. If you're not OK with that, you can install the latest Pitivi version using a Flatpak package. 2016-07-02 12:52 Marius Nestor

Zenwalk 8.0 Is Based on Slackware 14.2, Gets New Desktop Layout for Xfce 4.12.1

Based on the just-released Slackware 14.2 operating system, Zenwalk 8.0 is finally here, powered by Linux kernel 4.4.14 LTS, the same one that powers the monumental Slackware Linux, thus offering users support for the latest hardware devices. Zenwalk's default desktop environment is Xfce 4.12.1, and it now ships with a new layout that's more user-friendly than ever.

Zenwalk 8.0 is loaded with some of the best and most up-to-date open-source software applications. For example, you'll be able to write any office document with LibreOffice 5.1.3, surf the Web with Chromium 51.0, and enjoy your movies with the powerful MPlayer 1.3 and the FFmpeg 3.0.1 multimedia backend.

"Zenwalk 8.0 is a "less than 1GB ISO" pure Slackware system with added post-install configurations, optimizations and tunings already done out of the box, with a ready to use polished desktop environment, with added graphical system tools, added office and multimedia applications, and striped to keep just "one application per task," said Jean-Philippe Guillemin.

Starting with Zenwalk 8.0, the GNU/Linux distribution is available only for 64-bit computers, as the developer thinks 32-bit PCs are hard to find these days, and maintaining both ports takes a lot of time. "I believe that the old 32 bits architecture is for small specialized systems only, not for desktop," explains the developer, while encouraging the community to collaborate and port the OS to the 32-bit platform.

Under the hood, Zenwalk 8.0 comes with tweaked I/O and CPU scheduling policies that allow desktop applications to perform much better. There's also the integration of PolicyKit's privilege elevation features, which allow users to configure various things that would normally require administrative privileges (root access), such as changing the system clock, setting the system locale and configuring the login manager.

Last but not least, Netpkg, Zenwalk's default package manager, and the system installer have both received various improvements in Zenwalk 8.0 to make installing the OS and your entire desktop experience more comfortable. The Zenwalk 8.0 ISO images are available for download right now via our website, but, again, please note that you can only install the GNU/Linux distribution on a 64-bit capable computer. 2016-07-02 11:53 Marius Nestor

Facebook's new multilingual composer will let you post status updates in multiple languages

Facebook has announced a new tool that aims to break down language barriers. Previously available for Pages on the social network, the multilingual composer is set to make its way to individuals' accounts as well.

Previously anyone who wanted to post bi-, tri- or multilingual status updates would have to either type out each language as a separate post, or post a lengthy status featuring multiple translations. The latest change means that you can write in, say, French, and only friends and followers with their language set to French will see it.
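Conceptually, the selection works like a per-viewer lookup. Here is a toy sketch (my own construction, not Facebook's code) of how a post with several language versions might be resolved:

```python
# Toy model of per-viewer language resolution -- not Facebook's
# implementation; the post structure here is invented.
post = {
    "default": "fr",
    "versions": {
        "fr": "Bonjour tout le monde !",
        "en": "Hello everyone!",
        "es": "¡Hola a todos!",
    },
}

def render_for(viewer_language, post):
    """Pick the translation a given viewer should see."""
    versions = post["versions"]
    return versions.get(viewer_language, versions[post["default"]])

print(render_for("en", post))  # Hello everyone!
print(render_for("de", post))  # no German version: falls back to French
```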

When the multilingual composer hits your account you'll see a new 'Write post in another language' option at the bottom of the status composition box. You can then pen the same status in another language, use a drop-down menu to indicate the audience it is aimed at, and publish. To make things even easier, Facebook is also offering automatic translations, but points out that these can be tweaked or ignored:

While the aim is to allow for multi-language posts, there is potential for the multilingual composer to be put to other uses. It could, for instance, be used to post entirely different messages to people speaking different languages.

While the idea sounds fairly simple, Facebook says that there was a great deal of work required to get it up and running properly. It also says that the translations users input will be used to improve machine translation.

For now, the multilingual composer is rolling out to a test group. If you are part of this group, head to the Language section of your account settings and enable the Multilingual Posts option.

Photo credit: Maxx-Studio / Shutterstock 2016-07-02 11:40 By Mark

Zend PHP framework upgrade focuses on performance, middleware

Zend Framework, a collection of PHP packages for building web applications, has been upgraded to version 3. The new version provides dramatic increases in performance and a microframework for middleware development, Zend said.

Version 3 has been measured to be as much as four times faster with PHP 5 -- and it's even better than that with PHP 7, the latest version of the server-side scripting language, said Rogue Wave's Matthew Weier O'Phinney, project lead for Zend Framework. (There was no PHP 6.) In fact, support for PHP 7 is a key feature of Zend Framework 3.

Version 3, which is the first major release of the framework in four years, is currently available for download. The open source framework features packages that have been installed more than 59 million times, according to Zend, which was acquired in October by Rogue Wave Software.

Another highlight of the upgrade is the inclusion of the Expressive middleware microframework. Developers can use the Expressive runtime to build interfaces for routing and templating.

"Yes, you read that correctly: Zend Framework now ships with a microframework as a parallel offering to its MVC full-stack framework," O'Phinney said. "For users new to Zend Framework who are looking for a place to dive in, we recommend Expressive, as we feel PSR-7 (PHP Standards Recommendation) middleware represents the future of PHP application development. "

For MVC (Model View Controller) development, Zend is including a new version of the Zend skeleton application, which leverages the Zend MVC layer and module systems. Version 3 also offers enhancements to documentation and de-coupling. De-coupling enables reuse in a greater number of contexts, O'Phinney said. "In some cases, this has meant the creation of new packages that either split concerns or provide integration between multiple components."

Documentation, meanwhile, is now included with each component repository. Contributors can be blocked for lack of documentation and deployment of documentation is now automated.

"For newcomers to the framework, we have been working on our package architecture and attempting to make each package installable with a minimal amount of dependencies," O'Phinney said. All components now are developed independently, with their own release schedules.

The upgrade also features zend-diactoros, an HTTP messaging implementation, and zend-stratigility, a middleware foundation for building middleware pipelines based around the Sencha Connect middleware layer for Node.js. Additionally, Zend has included forward compatibility features to help with migration from Zend Framework 2 to version 3. Migration guides are featured as well.
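For readers new to the middleware style, the pipeline idea behind zend-stratigility looks roughly like the following sketch, written in Python purely for illustration (Zend's real API is PHP, built on PSR-7 request and response objects):

```python
# The middleware-pipeline pattern, sketched in Python for illustration.
# Each layer can act before and after delegating to the next layer.
def logger(request, next_handler):
    print("->", request["path"])       # runs before inner layers
    response = next_handler(request)   # delegate down the pipeline
    print("<-", response["status"])    # runs after inner layers
    return response

def router(request, next_handler):
    if request["path"] == "/hello":
        return {"status": 200, "body": "Hello"}
    return next_handler(request)       # not ours: pass it along

def not_found(request, next_handler):
    return {"status": 404, "body": "Not Found"}

def pipeline(*middleware):
    """Compose middleware functions into a single request handler."""
    def run(request, layers):
        head, rest = layers[0], layers[1:]
        return head(request, lambda req: run(req, rest))
    return lambda request: run(request, middleware)

app = pipeline(logger, router, not_found)
print(app({"path": "/hello"}))  # {'status': 200, 'body': 'Hello'}
```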

Zend is stopping development of Zend Framework 1 now that Zend Framework 3 has arrived, O'Phinney said. Version 1 will reach end-of-life status on Sept. 28; only security fixes will be provided between now and then. After Sept. 28, custom bug and security fixes will be offered for enterprise users of Zend Server.

2016-07-02 03:41 www.computerworld

FloSIS: A super-fast network flow capture system for efficient flow retrieval -- ScienceDaily

Network packet capture performs essential functions in modern network management such as attack analysis, network troubleshooting, and performance debugging. As the network edge bandwidth currently exceeds 10 Gbps, the demand for scalable packet capture and retrieval is rapidly increasing. However, existing software-based packet capture systems neither provide high performance nor support flow-level indexing for fast query response. This would either prevent important packets from being stored or make it too slow to retrieve relevant flows.

A research team led by Professor KyoungSoo Park and Professor Yung Yi of the School of Electrical Engineering at Korea Advanced Institute of Science and Technology (KAIST) has recently presented FloSIS, a highly scalable software-based network traffic capture system that supports efficient flow-level indexing for fast query response.

FloSIS is characterized by three key advantages. First, it achieves high-performance packet capture and disk writing by exercising full parallelism in computing resources such as network cards, CPU cores, memory, and hard disks. It adopts the PacketShader I/O Engine (PSIO) for scalable packet capture and performs parallel disk writes for high-throughput flow dumping. Towards high zero-drop performance, it strives to minimize the fluctuation of packet processing latency.

Second, FloSIS generates two-stage flow-level indexes in real time to reduce the query response time. The indexing utilizes Bloom filters and sorted arrays to quickly reduce the search space of a query. Also, it is designed to consume only a small amount of memory while allowing flexible queries with wildcards, ranges of connection tuples, and flow arrival times.
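As a rough illustration of that two-stage idea (a toy sketch of mine, not FloSIS's actual index layout), a Bloom filter can reject most non-matching queries cheaply before a sorted array is searched:

```python
# Toy two-stage flow index -- a sketch, not FloSIS's on-disk format.
import bisect
import hashlib

class BloomFilter:
    """Stage 1: answers "definitely absent" with no false negatives."""
    def __init__(self, bits=1 << 16, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.bitmap = bytearray(bits // 8)

    def _positions(self, key):
        for i in range(self.hashes):
            digest = hashlib.blake2b(key.encode(), salt=bytes([i])).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, key):
        for p in self._positions(key):
            self.bitmap[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        return all(self.bitmap[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

# Stage 2: flow records kept sorted by source IP for binary search.
flows = sorted([("10.0.0.1", 1200), ("10.0.0.2", 80), ("10.0.0.9", 443)])
bloom = BloomFilter()
for src, _ in flows:
    bloom.add(src)

def lookup(src):
    if not bloom.might_contain(src):        # stage 1: cheap rejection
        return []
    i = bisect.bisect_left(flows, (src,))   # stage 2: narrow the span
    return [f for f in flows[i:] if f[0] == src]

print(lookup("10.0.0.2"))  # [('10.0.0.2', 80)]
print(lookup("10.9.9.9"))  # [] -- never touches the sorted array
```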

Third, FloSIS supports flow-level content deduplication in real time for storage savings. Even with deduplication, the system still records the packet-level arrival time and headers to provide the exact timing and size information. For an HTTP connection, FloSIS parses the HTTP response header and body to maximize the hit rate of deduplication for HTTP objects.
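The bookkeeping that makes this possible can be sketched in a few lines (field names and structure are invented for illustration): payloads are stored once, keyed by content hash, while every packet's header and arrival time are always logged:

```python
# Simplified flow-content deduplication -- illustrative only.
import hashlib
import time

payload_store = {}   # content hash -> payload bytes, stored once
packet_log = []      # (timestamp, header, content hash) per packet

def record_packet(header, payload):
    digest = hashlib.sha256(payload).hexdigest()
    is_duplicate = digest in payload_store
    if not is_duplicate:
        payload_store[digest] = payload   # only the first copy costs disk
    packet_log.append((time.time(), header, digest))
    return is_duplicate

record_packet({"src": "10.0.0.1", "dport": 80}, b"HTTP/1.1 200 OK ...")
record_packet({"src": "10.0.0.2", "dport": 80}, b"HTTP/1.1 200 OK ...")
print(f"{len(payload_store)} payload(s) stored for {len(packet_log)} packets")
```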

These design choices bring enormous performance benefits. On a server machine with dual octa-core CPUs, four 10Gbps network interfaces, and 24 SATA disks, FloSIS achieves up to 30 Gbps for packet capture and disk writing without a single packet drop. Its indexes take up only 0.25% of the stored content while avoiding slow linear disk search and redundant disk access. On a machine with 24 hard disks of 3 TB, this translates into 180 GB for 72 TB total disk space, which could be managed entirely in memory or stored into solid state disks for fast random access. Finally, FloSIS deduplicates 34.5% of the storage space for 67 GB of a real traffic trace only with 256 MB of extra memory consumption for a deduplication table. In terms of performance, it achieves about 15 Gbps zero-drop throughput with real-time flow deduplication.

This work was presented at the 2015 USENIX Annual Technical Conference (ATC) on July 10, 2015, in Santa Clara, California. 2016-07-02 19:37 feeds.sciencedaily

'On-the-fly' 3-D print system prints what you design, as you design it -- ScienceDaily

But what if you decide to make changes? You may have to go back, change the design and print the whole thing again, perhaps more than once. So Cornell researchers have come up with an interactive prototyping system that prints what you are designing as you design it; the designer can pause anywhere in the process to test, measure and, if necessary, make changes that will be added to the physical model still in the printer. "We are going from human-computer interaction to human-machine interaction," said graduate student Huaishu Peng, who described the On-the-Fly-Print system in a paper presented at the 2016 ACM Conference for Human Computer Interaction. Co-authors are François Guimbretière, associate professor of information science; Steve Marschner, professor of computer science; and doctoral student Rundong Wu.

Their system uses an improved version of an innovative "WirePrint" printer developed in a collaboration between Guimbretière's lab and the Hasso Plattner Institute in Potsdam, Germany.

In conventional 3-D printing, a nozzle scans across a stage depositing drops of plastic, rising slightly after each pass to build an object in a series of layers. With the WirePrint technique the nozzle extrudes a rope of quick-hardening plastic to create a wire frame that represents the surface of the solid object described in a computer-aided design (CAD) file. WirePrint aimed to speed prototyping by creating a model of the shape of an object instead of printing the entire solid. The On-the-Fly-Print system builds on that idea by allowing the designer to make refinements while printing is in progress.

The new version of the printer has "five degrees of freedom." The nozzle can only work vertically, but the printer's stage can be rotated to present any face of the model facing up; so an airplane fuselage, for example, can be turned on its side to add a wing. There is also a cutter to remove parts of the model, say to give the airplane a cockpit.

The nozzle has been extended so it can reach through the wire mesh to make changes inside. A removable base aligned by magnets allows the operator to take the model out of the printer to measure or test to see if it fits where it's supposed to go, then replace it in the precise original location to resume printing.

The software -- a plug-in to a popular CAD program -- designs the wire frame and sends instructions to the printer, allowing for interruptions. The designer can concentrate on the digital model and let the software control the printer. Printing can continue while the designer works on the CAD file, but will resume when that work is done, incorporating the changes into the print.

As a demonstration the researchers created a model for a toy airplane to fit into a Lego airport set. This required adding wings, cutting out a cockpit for a Lego pilot and frequently removing the model to see if the wingspan is right to fit on the runway. The entire project was completed in just 10 minutes.

By creating a "low-fidelity sketch" of what the finished product will look like and allowing the designer to redraw it as it develops, the researchers said, "We believe that this approach has the potential to improve the overall quality of the design process. "

A video can be found here: https://www.youtube.com/watch?v=X68cfl3igKE 2016-07-02 19:37 feeds.sciencedaily

Living in the '90s? So are underwater wireless networks: Engineers are speeding them up to improve tsunami detection, walkie-talkies for scuba divers, and search-and-rescue work -- ScienceDaily

The flashback is due to the speed of today's underwater communication networks, which is comparable to the sluggish dial-up modems from America Online's heyday. The shortcoming hampers search-and-rescue operations, tsunami detection and other work.

But that is changing due in part to University at Buffalo engineers who are developing hardware and software tools to help underwater telecommunication catch up to its over-the-air counterpart.

Their work, including ongoing collaborations with Northeastern University, is described in a study -- Software-Defined Underwater Acoustic Networks: Toward a High-Rate Real-Time Reconfigurable Modem -- published in November in IEEE Communications Magazine.

"The remarkable innovation and growth we've witnessed in land-based wireless communications has not yet occurred in underwater sensing networks, but we're starting to change that," says Dimitris Pados, PhD, Clifford C. Furnas Professor of Electrical Engineering in the School of Engineering and Applied Sciences at UB, a co-author of the study.

The amount of data that can be reliably transmitted underwater is much lower compared to land-based wireless networks. This is because land-based networks rely on radio waves, which work well in the air, but not so much underwater.

As a result, sound waves (such as the noises dolphins and whales make) are the best alternative for underwater communication. The trouble is that sound waves encounter such obstacles as path loss, delay and Doppler which limit their ability to transmit. Underwater communication is also hindered by the architecture of these systems, which lack standardization, are often proprietary and not energy-efficient.

Pados and a team of researchers at UB are developing hardware and software -- everything from modems that work underwater to open-architecture protocols -- to address these issues. Of particular interest is merging a relatively new communication platform, software-defined radio, with underwater acoustic modems.

Traditional radios, such as an AM/FM transmitter, operate in a limited bandwidth (in this case, AM and FM). The only way to pick up additional signals, such as sound waves, is to take the radio apart and rewire it. Software-defined radio makes this step unnecessary. Instead, the radio is capable, via computer, of shifting between different frequencies of the electromagnetic spectrum. It is, in other words, a "smart" radio.
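A toy modulator makes the point concrete (an illustration only, nothing like a production acoustic modem): because the carrier frequency is just a runtime parameter, "retuning" the radio is a variable change rather than a rewiring job:

```python
# Toy "software-defined" modulator -- illustrative only; the carrier
# frequency is an ordinary argument rather than fixed hardware.
import numpy as np

RATE = 48_000  # samples per second

def modulate(bits, carrier_hz, baud=200):
    """On-off key a train of bits onto a carrier chosen in software."""
    samples_per_bit = RATE // baud
    t = np.arange(samples_per_bit) / RATE
    tone = np.sin(2 * np.pi * carrier_hz * t)
    return np.concatenate([tone * bit for bit in bits])

signal_low = modulate([1, 0, 1, 1], carrier_hz=8_000)    # one band
signal_high = modulate([1, 0, 1, 1], carrier_hz=18_000)  # retuned instantly
```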

Applying software-defined radio to acoustic modems could vastly improve underwater data transmission rates. For example, in experiments last fall in Lake Erie, just south of Buffalo, New York, graduate students from UB showed that software-defined acoustic modems could boost data transmission rates to 10 times what today's commercial underwater modems are capable of.

The potential applications of such technology include tsunami detection, communication links for scuba divers, and search-and-rescue work. 2016-07-02 19:37 feeds.sciencedaily

23 Mapping software tracks threats to endangered species: Software helps conservationists predict species movement -- ScienceDaily

The Duke team used the software and images to assess recent forest loss restricting the movement of Peru's critically endangered San Martin titi monkey ( Callicebus oenanthe ) and identify the 10 percent of remaining forest in the species' range that presents the best opportunity for conservation.

"Using these tools, we were able to work with a local conservation organization to rapidly pinpoint areas where reforestation and conservation have the best chance of success," said Danica Schaffer-Smith, a doctoral student at Duke's Nicholas School of the Environment, who led the study. "Comprehensive on-the-ground assessments would have taken much more time and been cost-prohibitive given the inaccessibility of much of the terrain and the fragmented distribution and rare nature of this species. "

The San Martin titi monkey inhabits an area about the size of Connecticut in the lowland forests of north central Peru. It was recently added to the International Union for Conservation of Nature's list of the 25 most endangered primates in the world.

Increased farming, logging, mining and urbanization have fragmented forests across much of the monkey's once-remote native range and contributed to an estimated 80 percent decrease in its population over the last 25 years.

Titi monkeys travel an average of 663 meters a day, primarily moving from branch to branch to search for food, socialize or escape predators. Without well-connected tree canopies, they're less able to survive local threats and disturbances, or to recolonize suitable new habitats. The diminutive monkeys, which typically weigh just two to three pounds at maturity, mate for life and produce at most one offspring a year. Mated pairs are sometimes seen intertwining their long tails when sitting next to each other.

Armed with Aster and Landsat satellite images showing the pace and extent of recent forest loss, and GeoHAT, a downloadable geospatial habitat assessment toolkit developed at Duke, Schaffer-Smith worked with Antonio Bóveda-Penalba, program coordinator at the Peruvian NGO Proyecto Mono Tocón, to prioritize where conservation efforts should be focused.

"The images and software, combined with Proyecto Mono Tocón's detailed knowledge of the titi monkey's behaviors and habitats, allowed us to assess which patches and corridors of the remaining forest were the most critical to protect," said Jennifer Swenson, associate professor of the practice of geospatial analysis at Duke, who was part of the research team.

The team's analysis revealed that at least 34 percent of lowland forests in the monkey's northern range, Peru's Alto Mayo Valley, have been lost. It also showed that nearly 95 percent of remaining habitat fragments are likely too small and poorly connected to support viable populations, and that less than 8 percent of all remaining suitable habitats lie within existing conservation areas.

Areas the model showed had the highest connectivity comprise just 10 percent of the remaining forest in the northern range, along with small patches elsewhere. These forests present the best opportunities for giving the highly mobile titi monkey the protected paths for movement it needs to survive.

Based on this analysis, the team identified a 10-kilometer corridor between Peru's Morro de Calzada and Almendra conservation areas as a high priority for protection.
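The connectivity reasoning behind such a corridor can be approximated with a simple graph model: treat forest patches as nodes and join patches that lie within the monkeys' daily movement range. A minimal sketch with networkx and made-up patch coordinates (illustrative only, not GeoHAT):

    # Toy habitat-connectivity model: patches are nodes; edges join
    # patches closer than a dispersal threshold (~663 m travelled per day).
    import math
    import networkx as nx

    patches = [("A", 0, 0), ("B", 500, 100), ("C", 2000, 0), ("D", 2400, 300)]
    DISPERSAL_M = 663

    G = nx.Graph()
    G.add_nodes_from(p[0] for p in patches)
    for (id1, x1, y1) in patches:
        for (id2, x2, y2) in patches:
            if id1 < id2 and math.hypot(x2 - x1, y2 - y1) <= DISPERSAL_M:
                G.add_edge(id1, id2)

    # Connected components approximate populations that can reach each
    # other; isolated components flag patches in need of corridors.
    print(list(nx.connected_components(G)))  # [{'A', 'B'}, {'C', 'D'}]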

"For many rare species threatened by active habitat loss, the clock is literally ticking," Schaffer-Smith said. "Software tools like GeoHAT -- or similar software such as CircuitScape -- can spell the difference between acting in time to save them or waiting till it's too late. "

Schaffer-Smith, Swenson and Bóveda-Penalba published their peer-reviewed research March 16 in the journal Environmental Conservation.

GeoHAT is a suite of ArcGIS geoprocessing tools designed to evaluate overall habitat quality and connectivity under changing land-use scenarios. It was developed by John Fay, an instructor in the Geospatial Analysis Program at Duke's Nicholas School, and can be used to assess habitats for a wide range of land-based species. (Learn more: http://sites.duke.edu/johnfay/projects/geohat/) 2016-07-02 19:37 feeds.sciencedaily

24 Diagnosing ear infection using smartphone -- ScienceDaily

"Because of lack of health personnel in many developing countries, ear infections are often misdiagnosed or not diagnosed at all. This may lead to hearing impairments, and even to life-threatening complications," says Claude Laurent, researcher at the Department of Clinical Sciences at Umeå University and co-author of the article. "Using this method, health personnel can diagnose middle ear infections with the same accuracy as general practitioners and paediatricians. Since the system is cloud-based, meaning that the images can be uploaded and automatically analysed, it provides rapid access to accurate and low-cost diagnoses in developing countries. "

The researchers at Umeå University have collaborated with the University of Pretoria in South Africa in their effort to develop an image-processing technique to classify otitis media. The technique was recently described in the journal EBioMedicine -- a new Lancet publication.

The software system consists of a cloud-based analysis of images of the eardrum taken using an otoscope, which is an instrument normally used in the medical examination of ears. Images of eardrums, taken with a digital otoscope connected to a smartphone, were compared to high-resolution images in an archive and automatically categorised according to predefined visual features associated with five diagnostic groups.
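At its core this is archive-based classification: reduce each image to a feature vector and assign the label suggested by the most similar reference images. A rough sketch with scikit-learn, in which extract_features() is a hypothetical stand-in for the paper's predefined visual features (not the published system):

    # Sketch of archive-based eardrum image classification; NOT the
    # published system, just the general pattern it follows.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def extract_features(image):
        # Hypothetical: return a fixed-length descriptor of one image.
        return np.asarray(image, dtype=float).ravel()[:256]

    def train_classifier(archive_images, archive_labels):
        # archive_*: labelled reference images from the five diagnostic groups
        X = np.stack([extract_features(im) for im in archive_images])
        clf = KNeighborsClassifier(n_neighbors=5)
        clf.fit(X, archive_labels)
        return clf

    def diagnose(clf, smartphone_image):
        return clf.predict([extract_features(smartphone_image)])[0]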

Tests showed that the automatically generated diagnoses based on images taken with a commercial video-otoscope had an accuracy of 80.6 per cent, while an accuracy of 78.7 per cent was achieved for images captured on-site with a low-cost custom-made video-otoscope. This high accuracy can be compared with the 64-80 per cent accuracy of general practitioners and paediatricians using traditional otoscopes for diagnosis.

"This method has great potential to ensure accurate diagnoses of ear infections in countries where such opportunities are not available at present. Since the method is both easy and cheap to use, it enables rapid and reliable diagnoses of a very common childhood illness," says Claude Laurent. 2016-07-02 19:37 feeds.sciencedaily

25 Google Glass meets organs-on-chips -- ScienceDaily

Google Glass, one of the newest forms of wearable technology, offers researchers a hands-free and flexible monitoring system. To make Google Glass work for their purposes, Zhang et al. custom-developed hardware and software that take advantage of the voice command ("ok glass") and other features in order to not only monitor but also remotely control their liver- and heart-on-a-chip systems. Using valves remotely activated by the Glass, the team introduced pharmaceutical compounds onto liver organoids and collected the results. Their results appear this week in Scientific Reports.

"We believe such a platform has widespread applications in biomedicine, and may be further expanded to health care settings where remote monitoring and control could make things safer and more efficient," said senior author Ali Khademhosseini, PhD, Director of the Biomaterials Innovation Research Center at BWH.

"This may be of particular importance in cases where experimental conditions threaten human life -- such as work involving highly pathogenic bacteria or viruses or radioactive compounds," said leading author, Shrike Zhang, PhD, also of BWH's Biomedical Division. 2016-07-02 19:37 feeds.sciencedaily

26 Apple Open-sources its New Compression Algorithm LZFSE

Apple has open-sourced its new lossless compression algorithm, LZFSE, introduced last year with iOS 9 and OS X 10.11. According to Apple, LZFSE provides the same compression ratio as zlib level 5 while being 2x-3x faster and more energy-efficient.

LZFSE is based on Lempel-Ziv and uses Finite State Entropy coding, which builds on Jarek Duda's work on Asymmetric Numeral Systems (ANS) for entropy coding. In short, ANS aims to "end the trade-off between speed and rate" and can be used both for precise coding and for very fast encoding, with support for data encryption. LZFSE is one of a growing number of compression libraries that use ANS in place of the more traditional Huffman and arithmetic coding.

Admittedly, LZFSE does not aim to be the best or fastest algorithm out there. In fact, Apple states that LZ4 is faster than LZFSE, while LZMA provides a higher compression ratio, albeit at the cost of being an order of magnitude slower than other options available in Apple SDKs. LZFSE is Apple's suggested option when compression and speed are more or less equally important and you want to reduce energy consumption.

The LZFSE reference implementation is available on GitHub. Building it on macOS is as easy as executing:
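Assuming the Xcode project that ships with the repository, that command is presumably just:

    xcodebuild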

If you want to build LZFSE for a current iOS device, you can execute:
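Again assuming the repository's standard Xcode project, a build for a 64-bit iOS device would presumably look like:

    xcodebuild -arch arm64 -sdk iphoneos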

Besides its API documentation, a useful resource for getting started with LZFSE is a sample project that Apple made available last year to show how to use LZFSE for both block and stream compression.

LZFSE follows Google's Brotli, which was open-sourced last year. In comparison with LZFSE, Brotli seems to be tuned for a different use case, such as compressing static Web assets and Android APKs, where compression ratio is of prime importance. 2016-07-02 20:38 Sergio De

27 Oracle Ordered To Pay HPE $3 Billion In Itanium Suit

Oracle has been ordered to pay Hewlett Packard Enterprise $3 billion in damages in a long-running breach-of-contract lawsuit that claimed Oracle stopped software support for the Itanium server platform in March 2011, hurting HP and its customers.

A jury in Silicon Valley delivered the victory to HPE Thursday and said Oracle should pay the full amount of damages HPE sought in the lawsuit, notes the Wall Street Journal.

Back in 2011, prior to Hewlett-Packard splitting itself into two separate companies, HP filed a lawsuit against Oracle after the enterprise software maker announced it would discontinue support for the high-end HP Itanium servers. At the time, Oracle claimed it did not have a binding agreement to support Itanium and, according to the Journal, its decision was also based on a perception that Intel itself had moved to discontinue the Itanium chip that it had created.

HP, however, contended the decision was based on a move by Oracle to push its customers over to the servers it gained through its Sun Microsystems acquisition, which ran Oracle software.

A judge sided with HP in a 2012 ruling, finding that the contract was valid. In that ruling, Santa Clara County Superior Court Judge James Kleinberg said the 2010 contract explicitly stated mutual product support would be provided by both companies as it had been in the past. The judge ordered Oracle to resume support for HP's HP-UX servers running on Intel Itanium chips.

The Itanium servers are slated to continue on HPE's roadmap through 2025, according to comments from Jeff Kyle, HPE's product management director for enterprise servers, in a Computerworld report earlier this year. HPE, for example, says it plans to have its new Superdome and Integrity servers running on Intel's next-generation Itanium chip, code-named Kittson.

[Read Google Beats Oracle In Long-Running Java Copyright Case.]

Oracle says it plans to appeal that earlier verdict as well as this latest $3 billion jury order.

"Oracle never believed it had a contract to continue to port our software to Itanium indefinitely and we do not believe so today; nevertheless, Oracle has been providing all its latest software for the Itanium systems since the original ruling while HP and Intel stopped developing systems years ago," Dorian Daley, Oracle general counsel, said in a statement provided to InformationWeek. "Further, it is very clear that any contractual obligations were reciprocal and HP breached its own obligations. Now that both trials have concluded, we intend to appeal both [Thursday's] ruling and the prior ruling from Judge Kleinberg. "

HPE was not immediately available for comment. 2016-07-02 11:06 Dawn Kawamoto

28 For everyone

That was his reply when I asked what he perceived the imminent threats to the web to be. The use of old exploits to get at low-hanging XP systems notwithstanding, future threats to the web will require a little ingenuity to figure out — not least because the stakes are so high, Berners-Lee said.

“Because of the nature of a medium that’s used by pretty much everybody and pretty much everything, to be able to control it is just ridiculously powerful.”

And not in simple terms, either. Censorship or fast lanes, for example, are minor threats compared to that of pervasive surveillance.

“There are countries in the Middle East where they love you to go to the opposition website. It’s just that when you do, you and your friends are marked and you’ll disappear in the middle of the night,” said Berners-Lee. “The ability to understand which people are veering in which particular direction is a tremendously powerful tool.”

Positions of leverage, in the guise of (and perhaps even in the spirit of) philanthropy, also constitute danger.

“Maybe it’s a social network that decides it’s going to go to India — and it’s going to be the entire web for everyone in India — but end up leaving the whole country beholden to one particular commercial concern for their news, and the selection of what they do every day.”

The real threat is not the possibility of one specific kind of authoritarian control; subtler, more insidious forms of control can create outcomes as bad as or worse than the obvious ones.

“As a journalist,” he said, pointing in my direction (perhaps he thought he saw a journalist behind me), “spotting these things is part of your job, but also having the imagination to realize what new threats there could be.”

I asked Berners-Lee whether he felt that the titans of the tech industry exert an undue influence on the way the web functions — whether they were, simply by the scale of their operations, a problem for the open web.

“You’re talking about the age-old story of capitalism and monopoly,” he began. “If you have a system that rewards people for gaining market share, when they get a very large portion of the market share, then to a certain extent everybody suffers because innovation drops off.”

So far, so ordinary, at least for the last century or so. “But people were very worried about Netscape,” Berners-Lee pointed out. “And then suddenly they stopped worrying about Netscape, and they started worrying about Microsoft — because it controls the operating system as well as the browser. Then they decided the browser doesn’t matter. It was actually about the search engine people used. And then they realized that, actually, the search engine doesn’t matter because people only use it to go to one social network, and people are spending all their time there.”

The pattern is clear: “It’s reasonable to worry about monopolies when they happen, because they’re an impediment to innovation and fun and creativity. But also notice that the monopoly we’re concerned about can switch very quickly.” It’s hard to imagine thinking of the Google or Facebook monopolies (or however you’d like to call them) as quaint five or 10 years from now, but by that time we may be worrying about VR agents and IoT viruses literally tracking our every move inside our homes. The internet and the web are evolving fast, and locked into co-evolution with them are the bad actors who have infested, dominated and ultimately improved them. 2016-07-02 09:05 Devin Coldewey

29 Umi Touch Review: All specs, no polish

You probably haven’t heard of the company Umi. They’re a relatively unknown Chinese manufacturer with a mission to make affordable devices with compelling feature sets. And they’re not alone in pursuing that formula: there are tons of companies throughout Asia trying their hardest to grab a share of the entry-level market.

OnePlus, another Chinese startup founded by former Oppo employees, succeeded in competing with other electronics giants by selling well-rounded and affordable Android phones; however, they tend to be the exception rather than the rule. Historically, I've taken issue with smaller OEMs and the devices they produce, as they rarely live up to their advertised claims, making quality a big hit or a huge miss seemingly at random.

However, I am willing to give any company a chance to impress, which is why I have the Umi Touch in the office to review. Priced at $160, the Touch carries a respectable list of specs including a 5.5-inch 1080p display, 13-megapixel Sony IMX328 camera, a huge 4,000 mAh battery, a metal design, and even a fingerprint sensor. It’s running the latest version of Android as well, without much bloatware or customization.

So can the Umi Touch break away from the norm and actually present a good package from a lesser known Chinese manufacturer? I’ve spent more than a month with the Touch to find out.

The design of the Umi Touch is nothing too fancy. The phone is a pretty typical rounded-rectangle with glass on the front, and metal on the back that curves around each long edge. This metal is flanked by plastic along the top and bottom of the rear, finished to look similar to the metal plate.

This method of disguising plastic doesn’t often work from a visual standpoint – the difference in materials is obvious – and is something not seen on premium handsets. Of course at this price point, the Umi Touch isn’t a premium device, so I can forgive the company for opting to design it in this way.

I certainly prefer the metal back to the plastic build budget devices like the Moto G employ, and in general this 5.5-inch device is comfortable to hold thanks to decent curvature along each edge. However, there’s no mistaking this metal design for the best on the market; the HTC 10 and Nexus 6P, for example, are still several steps ahead in terms of visual appeal.

There are a couple of design issues with the Umi Touch that expose it as a cheap handset. The seams between the metal and plastic on the rear are very noticeable and not particularly even, which is something a high-end phone manufacturer wouldn’t tolerate. The front panel also lacks symmetry in design: the front camera is at a different height from the front flash, and the home button below the display is very slightly askew in the unit I received to review.

Most damning is the difference between the press renders of the Umi Touch’s front panel and the real model. The renders appear to show a large display with very little bezel at the edges, but in reality the bezels are considerably larger and quite noticeable. I don’t like this sort of deception at all, as the renders portray a design that’s significantly more attractive than the actual device. Buyers expecting a sleek, bezel-free design could be seriously disappointed when their Touch arrives in the mail.

The Umi Touch (left) is almost as large as the Google Nexus 6P (right)

The bezel size makes this 5.5-inch phone larger than average: it’s bigger than the Galaxy S7 Edge by a decent margin, and almost the same size as my 5.7-inch Nexus 6P. At 8.5mm and nearly 200 grams, it’s a thicker and heavier device too. The weight in particular is very noticeable, as it helps to make the Touch feel dense for a phone of this class, probably due to the 4,000 mAh battery inside.

The glass panel on the front is both good and bad. I like the way the glass curves away to the polished metal rim, creating a “2.5D” edgeless feel while swiping across it. However, the coating Umi has used isn’t of the same quality as on most other smartphones, which makes it a bigger fingerprint magnet and reduces the swooshability. Swiping across the display has more resistance than on the 2015 Moto G, for example.

Below the display is a fingerprint sensor, something seldom seen at this price point. The sensor is touch-activated, like most fingerprint sensors found in smartphones, and it works far better than I was expecting. It’s similar in speed to a Nexus 6P and only slightly less accurate, plus it doubles as a capacitive touch home button. Kudos to Umi for getting this key feature of their smartphone working well.

Umi has designed the other capacitive navigation buttons, found to the left and right of the fingerprint sensor, in a similar fashion to OnePlus. There are no logos on the buttons, which can be a little confusing before you realize the right button is back, and the left button is menu. I’d have far preferred to see the back button on the left, and an app switcher button on the right, as opposed to the legacy and generally unnecessary menu button. A software option to change this, similar to OnePlus, would be much appreciated.

The power button and volume rocker are found on the right side in a comfortable position, and both exhibit a decent clicky response. On the bottom is a standard micro-USB port, while the 3.5mm audio jack is on the top. The left edge features a single tray with two slots: one for a micro-SIM, and another for either a second micro-SIM or a microSD card depending on your needs.

There is a single speaker on the Umi Touch, located at the bottom of the rear panel. There’s no front-facing audio here, so you might need to cup your hands around the speaker to get a decent audio experience. Quality is very average as you’d imagine, and it’s not particularly loud either. 2016-07-02 19:37 Tim Schiesser

AT&T answers T-Mobile 30 Tuesdays with 'Thanks' customer appreciation program

T-Mobile Tuesdays, the rewards app from the nation’s third largest wireless provider that “thanks” its customers with various freebies each Tuesday, has spawned its first competing program. AT&T Thanks is a customer appreciation program that AT&T will use to thank its customers for choosing it.

Kicking things off is Ticket Twosdays, a promotion that’ll give AT&T postpaid customers a free movie ticket when they buy one at full price through movietickets.com. The idea is that, each Tuesday, you can take a friend to the movies with you – so long as you have a participating theater (AMC Theaters and Regal Entertainment Group) nearby.

AT&T says tickets will be available to qualifying customers, one per account, once a week for the duration of the AT&T Thanks program (while supplies last each week).

This fall, through a partnership with Live Nation, AT&T will be launching its first presale concert offer. AT&T postpaid wireless customers will get presale access to tickets to select concerts before they are available to the general public.

AT&T adds on its Thanks website that it’ll have other limited-time, surprise offers in store including device and accessory perks, data giveaways and more.

The program is clearly a response to T-Mobile Tuesdays, but that’s not a bad thing, especially if you’re a subscriber in a position to reap the benefits.

If AT&T plays its cards right, it may actually be more successful than T- Mobile’s appreciation app which has experienced various technical difficulties since launch. Worse yet, T- Mobile’s biggest partner, Domino’s, had to back out of its free pizza promotion just two weeks in citing overwhelming demand. 2016-07-02 19:37 Shawn Knight

31 Sony adds three new series to its 4K TV lineup

Sony this week announced a trio of new 4K Ultra HD television series. The XBR-X800D, the XBR-X750D and the XBR-X700D span a variety of sizes and consumer needs but perhaps more importantly, they are all cheaper than the flagships Sony launched earlier this year.

The higher-end XBR-X800D series sets support High Dynamic Range (HDR), a feature that Sony says will be coming to the XBR-X750D and XBR-X700D series later this year via firmware update. As such, I probably wouldn’t make this feature the sole deciding factor when evaluating the new models.

All three series also run Google’s Android TV operating system. In addition to being able to access a wealth of streaming apps, Android TV also allows users to control their home automation and IoT devices including lights, thermostats and window shades.

The cheapest of the bunch, the XBR-49X700D with a 49-inch display, carries an MSRP of $999.99. Bumping up to a 55-inch display increases the cost to $1,499.99. The larger XBR-65X750D with a 65-inch panel will command $2,299.99 while the 43-inch XBR-43X800D and the 49-inch XBR-49X800D are priced at $1,299.99 and $1,499.99, respectively.

All of the new sets are available for pre-order as of writing and go on sale next month at Amazon, Best Buy and select authorized Sony dealers across the country. 2016-07-02 19:37 Shawn Knight

32 Amazon aims to inspire employees with the three Biospheres it's building at the new company HQ

Does the view outside your office windows leave much to be desired? Do you gaze out onto a landfill site while at work? Then maybe you should try to get a job at the new offices Amazon is building in downtown Seattle, which will look out onto three 100-foot-tall glass spheres.

The domes, which Amazon calls Biospheres, are set to open in 2018. In addition to being architectural wonders, they will act as conservation locations by housing more than 300 endangered plant species from around the world.

The Biospheres aren’t just there to give Amazon employees something nice to look at; the online retail giant wants its employees to gain inspiration by walking and working inside the domes. Staff will be able to traverse the numerous suspension bridges within the structures, and can have brainstorming sessions in the bird's-nest-style meeting spaces built into the mature trees.

John Schoettler, Amazon's global real estate director, told Bloomberg the goal was to create a "link to the natural world" that helps staff become more creative, productive, and happy.

When the entire project is completed, the domes and the three 500-foot towers will provide Amazon with 10 million square feet of office space on a campus covering more than ten square blocks. It will allow the company to double its number of Seattle staff to 50,000 in the next decade.

Jeff Bezos will no doubt hope that the campus helps improve Amazon’s tarnished image when it comes to employee work environments. The CEO was forced to deny claims made in a New York Times report last year that described the company as a demanding and degrading employer. And many of its warehouses in Europe have faced criticism over their working conditions. 2016-07-02 19:37 Rob Thubron

33 This guy designed car tires that roll in any direction

Autonomous driving aids have made parallel parking much less of a hassle… that is, if you have a newer car equipped with the proper technology. If you’re rocking an older ride, you still need to master the technique yourself or follow in the footsteps of YouTube user William Liddiard and outfit your car with omnidirectional wheels.

In the video’s description, Liddiard said he had to use the materials he had on hand (aka, lots of improvisation) and work on it sparingly when he could find the time. The system supplies 24,000 pounds of torque directly to the wheels and although it is a proof-of-concept prototype, Liddiard said the wheels are designed to be used in all weather and road conditions.

Back in March, Goodyear unveiled a set of spherical, levitating concept tires that essentially do the same thing albeit on a much more polished and advanced scale.

Found is a TechSpot feature where we share clever, funny or otherwise interesting stuff from around the web. 2016-07-02 19:37 Shawn Knight

34 Tracking brain atrophy in MS could become routine, thanks to new software -- ScienceDaily

That may be changing. Starting next month, University at Buffalo researchers will begin testing, in the U.S., Europe, Australia and Latin America, a new software tool they developed that could make assessing brain atrophy part of the clinical routine for MS patients. The research is funded by Novartis as part of its commitment to advancing the care of people with MS through effective treatments and tools for assessing disease activity.

According to the UB researchers, being able to routinely measure how much brain atrophy has occurred would help physicians better predict how a patient's disease will progress. It could also provide physicians with more information about how well MS treatments are working in individual patients. These and other benefits were outlined in a recent review study the researchers published in Expert Review of Neurotherapeutics.

"Measuring brain atrophy on an annual basis will allow clinicians to identify which of their patients is at highest risk for physical and cognitive decline," said Robert Zivadinov, MD, PhD, professor of neurology and director of the Buffalo Neuroimaging Analysis Center in the Jacobs School of Medicine and Biomedical Sciences at UB. Over the past 10 years, he and his colleagues at UB, among the world's most prolific groups studying brain atrophy and MS, developed the world's largest database of magnetic resonance images of individuals with MS, consisting of 20,000 brain scans with data from about 4,000 MS patients. The new tool, Neurological Software Tool for Reliable Atrophy Measurement in MS, or NeuroSTREAM, simplifies the calculation of brain atrophy based on data from routine magnetic resonance images and compares it with other scans of MS patients in the database.

More than lesions

Without measuring brain atrophy, clinicians cannot obtain a complete picture of how a patient's disease is progressing, Zivadinov said.

"MS patients experience, on average, about three to four times more annual brain volume loss than a healthy person," he said. "But a clinician can't tell a patient, 'You have lost this amount of brain volume since your last visit.'"

Instead, clinicians rely primarily on the presence of brain lesions to determine how MS is progressing. "Physicians and radiologists can easily count the number of new lesions on an MRI scan," said Zivadinov, "but lesions are only part of the story related to development of disability in MS patients."

And even though MS drugs can stop lesions from forming, in many cases brain atrophy and the cognitive and physical decline it causes will continue, the researchers say.

"While the MS field has to continue working on solving challenges related to brain atrophy measurement on individual patient level, its assessment has to be incorporated into treatment monitoring, because in addition to assessment of lesions, it provides an important additional value in determining or explaining the effect of disease-modifying drugs," Zivadinov and co-authors wrote in a June 23 editorial that was part of a series of commentaries in Multiple Sclerosis Journal addressing the pros and cons of using brain atrophy to guide therapy monitoring in MS.

Soon, the UB researchers will begin gathering data to create a database of brain volume changes in more than 1,000 patients from 30 MS centers in the U.S. and around the world. The objective is to determine if NeuroSTREAM can accurately quantify brain volume changes in MS patients.

The software runs on a user-friendly, cloud-based platform that provides compliance with privacy health regulations such as HIPAA. It is easily available from workstations, laptops, tablets, iPads and smartphones. The ultimate goal is to develop a user-friendly website to which clinicians can upload anonymous scans of patients and receive real-time feedback on what the scans reveal.

NeuroSTREAM measures brain atrophy through the lateral ventricular volume (LVV), the volume of one of the brain structures that contain cerebrospinal fluid. When atrophy occurs, the LVV expands.
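In arithmetic terms the measurement is straightforward once a scan has been segmented; the hard part, which NeuroSTREAM automates for routine clinical scans, is the segmentation itself. A minimal sketch of the volume bookkeeping (illustrative only, with a made-up label value, not NeuroSTREAM's algorithm):

    # Illustrative LVV computation, not NeuroSTREAM's algorithm.
    # `segmentation` is a 3-D integer array of tissue labels;
    # voxel_volume_ml comes from the scan's voxel spacing.
    import numpy as np

    VENTRICLE = 4  # hypothetical label for lateral-ventricle voxels

    def lvv_ml(segmentation, voxel_volume_ml):
        return np.count_nonzero(segmentation == VENTRICLE) * voxel_volume_ml

    def percent_lvv_change(seg_baseline, seg_followup, voxel_volume_ml):
        v0 = lvv_ml(seg_baseline, voxel_volume_ml)
        v1 = lvv_ml(seg_followup, voxel_volume_ml)
        return 100.0 * (v1 - v0) / v0  # a positive change tracks atrophy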

Canary in the coal mine

"The ventricles are a surrogate measure of brain atrophy," said Michael G. Dwyer III, PhD, assistant professor in the Department of Neurology and the Department of Bioinformatics in the Jacobs School of Medicine and Biomedical Sciences at UB. "They're the canary in the coal mine. "

Dwyer, a computer scientist and director of technical imaging at the Buffalo Neuroimaging Analysis Center, is principal investigator on the NeuroSTREAM software development project. At the American Academy of Neurology meeting in April, he reported preliminary results showing that NeuroSTREAM provided a feasible, accurate, reliable and clinically relevant method of measuring brain atrophy in MS patients, using LVV.

"Usually, you need high-resolution research-quality brain scans to do this," Dwyer explained, "but our software is designed to work with low resolution scans, the type produced by the MRI machines normally found in clinical practice. "

To successfully measure brain atrophy in a way that's meaningful for treatment, Zivadinov explained, what's needed is a normative database through which individual patients can be compared to the population of MS patients. "NeuroSTREAM provides context, because it compares a patient's brain not just to the general population but to other MS patients," said Dwyer. 2016-07-02 19:37 feeds.sciencedaily

35 Investigating world's oldest human footprints with software designed to decode crime scenes -- ScienceDaily

The software was developed as part of a Natural Environment Research Council (NERC) Innovation Project awarded to Professor Matthew Bennett and Dr Marcin Budka in 2015 for forensic footprint analysis. They have been developing techniques to enable modern footwear evidence to be captured in three dimensions and analysed digitally to improve crime scene practice.

Footprints reveal much about the individuals who made them: their body mass, height and walking speed. "Footprints contain information about the way our ancestors moved," explains Professor Bennett. "The tracks at Laetoli are the oldest in the world and show a line of footprints from our early ancestors, preserved in volcanic ash. They provide a fascinating insight into how early humans walked. The techniques we have been developing for use at modern crime scenes can also reveal something new about these ancient track sites."

The Laetoli tracks were discovered by Mary Leakey in 1976 and are thought to be around 3.6 million years old. There are two parallel trackways on the site, where two ancient hominins walked across the surface. One of these trackways was obscured when a third person followed the same path. The merged trackway has largely been ignored by scientists over the last 40 years and the fierce debate about the walking style of the track-makers has predominately focused on the undisturbed trackway.

By using the software developed through the NERC Innovation Project, Professor Bennett and his colleagues have been able to decouple the tracks of this merged trail and reveal for the first time the shape of the tracks left by this mysterious third track-maker. There is also an intriguing hint of a fourth track-maker at the site.

"We're really pleased that we can use our techniques to capture new data from these extremely old footprints," says Dr Marcin Budka who developed the software used in the study.

"It means that we have effectively doubled the information that the palaeo-anthropological community has available for study of these hominin track-makers," continues Dr Reynolds one of the co-authors of the study.

"As well as making new discoveries about our early ancestors, we can apply this science to help modern society combat crime. By digitising tracks at a crime scene we can preserve, share and study this evidence more easily," says Sarita Morse who helped conceive the original analysis.

For more information, please see the following video: https://www.youtube.com/watch?v=Rl8odSqoDZc 2016-07-02 19:37 feeds.sciencedaily

36 New technique wipes out unwanted data -- ScienceDaily

To do this, software programs in these systems calculate predictive relationships from massive amounts of data. The systems identify these predictive relationships using advanced algorithms -- a set of rules for solving math problems -- and "training data." This data is then used to construct the models and features that enable a system to determine the latest best-seller you wish to read or to predict the likelihood of rain next week.

This intricate process means that a piece of raw data often goes through a series of computations in a system. The computations and information derived by the system from that data together form a complex propagation network called the data's "lineage." The term was coined by Yinzhi Cao, an assistant professor of computer science and engineering, and his colleague, Junfeng Yang of Columbia University, who are pioneering a novel approach to make learning systems forget.

Considering how important this concept is to increasing security and protecting privacy, Cao and Yang believe that easy adoption of forgetting systems will be increasingly in demand. The two researchers have developed a way to do it faster and more effectively than can be done using current methods.

Their concept, called "machine unlearning," is so promising that Cao and Yang have been awarded a four-year, $1.2 million National Science Foundation grant to develop the approach.

"Effective forgetting systems must be able to let users specify the data to forget with different levels of granularity," said Cao, a principal investigator on the project. "These systems must remove the data and undo its effects so that all future operations run as if the data never existed. "

Increasing security and privacy protection

There are a number of reasons why an individual user or service provider might want a system to forget data and its complete lineage. Privacy is one.

After Facebook changed its privacy policy, many users deleted their accounts and the associated data. The iCloud photo hacking incident in 2014 -- in which hundreds of celebrities' private photos were accessed via Apple's cloud services suite -- led to online articles teaching users how to completely delete iOS photos including the backups. New research has revealed that machine learning models for personalized medicine dosing leak patients' genetic markers. Only a small set of statistics on genetics and diseases are enough for hackers to identify specific individuals, despite cloaking mechanisms.

Naturally, users unhappy with these newfound risks want their data, and its influence on the models and statistics, to be completely forgotten.

Security is another reason. Consider anomaly-based intrusion detection systems used to detect malicious software. In order to positively identify an attack, the system must be taught to recognize normal system activity. Therefore the security of these systems hinges on the model of normal behaviors extracted from the training data. By polluting the training data, attackers pollute the model and compromise security. Once the polluted data is identified, the system must completely forget the data and its lineage in order to regain security.

Widely used learning systems such as Google Search are, for the most part, only able to forget a user's raw data -- and not the data's lineage -- upon request. This is problematic for users who wish to ensure that any trace of unwanted data is removed completely, and it is also a challenge for service providers who have strong incentives to fulfill data removal requests and retain customer trust. Service providers will increasingly need to be able to remove data and its lineage completely to comply with laws governing user data privacy, such as the "right to be forgotten" ruling issued in 2014 by the European Union's top court. In October 2014, Google removed more than 170,000 links to comply with the ruling, which affirmed users' right to control what appears when their names are searched. In July 2015, Google said it had received more than a quarter-million such requests.

Breaking down dependencies

Building on work that was presented at a 2015 IEEE Symposium and then published, Cao and Yang's "machine unlearning" method is based on the fact that most learning systems can be converted into a form that can be updated incrementally without costly retraining from scratch.

Their approach introduces a layer of a small number of summations between the learning algorithm and the training data, eliminating their direct dependency on each other: the learning algorithm depends only on the summations, not on individual data items. Using this method, unlearning a piece of data and its lineage no longer requires rebuilding the models and features that predict relationships between pieces of data. Simply recomputing a small number of summations removes the data and its lineage completely -- and much faster than retraining the system from scratch.
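A toy version of the idea, assuming a learner whose training reduces to sufficient statistics (here, least-squares regression): the model keeps only the summations of x·xᵀ and x·y, so forgetting a sample means subtracting its two terms and re-solving, with no access to the rest of the training data.

    # Toy summation-form learner illustrating the concept; not Cao and
    # Yang's implementation. State is two summations, never raw data.
    import numpy as np

    class UnlearnableRegressor:
        def __init__(self, dim):
            self.A = np.zeros((dim, dim))  # sum of outer(x, x)
            self.b = np.zeros(dim)         # sum of x * y

        def learn(self, x, y):    # add one sample's contribution
            self.A += np.outer(x, x)
            self.b += x * y

        def unlearn(self, x, y):  # forget it: subtract the same terms
            self.A -= np.outer(x, x)
            self.b -= x * y

        def weights(self):
            # Re-solve from the summations alone (assumes enough samples
            # have been learned that A is invertible).
            return np.linalg.solve(self.A, self.b)

After unlearn(x, y) the summations, and therefore the fitted weights, are exactly what they would have been had the sample never been seen, which is the guarantee forgetting systems are after.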

Cao believes he and Yang are the first to establish the connection between unlearning and the summation form.

And, it works. Cao and Yang tested their unlearning approach on four diverse, real-world systems: LensKit, an open-source recommendation system; Zozzle, a closed-source JavaScript malware detector; an open-source OSN spam filter; and PJScan, an open-source PDF malware detector.

The success of these initial evaluations has set the stage for the next phases of the project, which include adapting the technique to other systems and creating verifiable machine unlearning to statistically test whether unlearning has indeed repaired a system or completely wiped out unwanted data.

In their paper's introduction, Cao and Yang say that "machine unlearning" could play a key role in enhancing security and privacy and in our economic future:

"We foresee easy adoption of forgetting systems because they benefit both users and service providers. With the flexibility to request that systems forget data, users have more control over their data, so they are more willing to share data with the systems. More data also benefit the service providers, because they have more profit opportunities and fewer legal risks.

"We envision forgetting systems playing a crucial role in emerging data markets where users trade data for money, services, or other data because the mechanism of forgetting enables a user to cleanly cancel a data transaction or rent out the use rights of her data without giving up the ownership. " 2016-07-02 19:37 feeds.sciencedaily

37 Data scientists launch free tools to analyze online trends, memes: Web-based software provides journalists, researchers and public direct access to sophisticated meme-tracking algorithms -- ScienceDaily

The power to explore online social media movements -- from the pop cultural to the political -- with the same algorithmic sophistication as top experts in the field is now available to journalists, researchers and members of the public from a free, user-friendly online software suite released by scientists at Indiana University. The Web-based tools, called the Observatory on Social Media, or "OSoMe" (pronounced "awesome"), provide anyone with an Internet connection the power to analyze online trends, memes and other online bursts of viral activity.

An academic pre-print paper on the tools is available in the open-access journal PeerJ.

"This software and data mark a major goal in our work on Internet memes and trends over the past six years," said Filippo Menczer, director of the Center for Complex Networks and Systems Research and a professor in the IU School of Informatics and Computing. The project is supported by nearly $1 million from the National Science Foundation.

"We are beginning to learn how information spreads in social networks, what causes a meme to go viral and what factors affect the long-term survival of misinformation online," Menczer added. "The observatory provides an easy way to access these insights from a large, multi-year dataset. "

Two examples show what the new tools can do.

By plugging #thedress into the system, for example, OSoMe will generate an interactive graph showing connections between both the hashtag and the Twitter users who participated in the debate over a dress whose color -- white and gold or blue and black -- was strangely ambiguous. The results show more people tagged #whiteandgold compared to #blueandblack.

For the Ice Bucket Challenge, another widespread viral phenomenon -- in which people doused themselves in cold water to raise awareness about ALS -- the software generates an interactive graph showing how many people tweeted #icebucketchallenge at specific Twitter users, including celebrities.

One example illustrates a co-occurrence network, in which a single hashtag comprises a "node" with lines showing connections to other related hashtags. The larger the node, the more popular the hashtag. The other example illustrates a diffusion network, in which Twitter users show up as points on a graph, and retweets or mentions show up as connecting lines. The larger a cluster of people tweeting a meme -- or the more lines showing retweets and mentions -- the more viral the topic.

OSoMe's social media tools are supported by a growing collection of 70 billion public tweets. The long-term infrastructure to store and maintain the data is provided by the IU Network Science Institute and the High Performance Computing group at IU. The system does not provide direct access to the content of these tweets.
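A hashtag co-occurrence network of the kind just described is simple to build from a stream of tweets. The sketch below uses networkx and toy data, and is illustrative only, not OSoMe's implementation:

    # Build a tiny hashtag co-occurrence network: nodes sized by hashtag
    # popularity, edges weighted by how often two hashtags appear together.
    from itertools import combinations
    import networkx as nx

    tweets = [  # toy data
        ["#thedress", "#whiteandgold"],
        ["#thedress", "#blueandblack"],
        ["#thedress", "#whiteandgold"],
    ]

    G = nx.Graph()
    for tags in tweets:
        for t in set(tags):
            G.add_node(t, count=G.nodes[t]["count"] + 1 if t in G else 1)
        for a, b in combinations(sorted(set(tags)), 2):
            w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=w)

    print(G.nodes["#whiteandgold"]["count"])  # 2 -- beats #blueandblack's 1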

The group that manages the infrastructure to store this data is led by Geoffrey Fox, Distinguished Professor in the School of Informatics and Computing. The group whose software analyzes the data is led by Judy Qiu, an associate professor in the school.

"The collective production, consumption and diffusion of information on social media reveals a significant portion of human social life -- and is increasingly regarded as a way to 'sense' social trends," Qiu said. "For the first time, the ability to explore 'big social data' is open not just to individuals with programming skills but everyone as easy-to-use visual tools. "

In addition to pop culture trends, Menczer said, OSoMe provides insight into many other subjects, including social movements and politics, as the online spread of information plays an increasingly important role in modern communication.

The IU researchers who created OSoMe also launched another tool, BotOrNot, in 2014. BotOrNot predicts the likelihood that a Twitter account is operated by a human or a "social bot." Bots are online bits of code used to create the impression that a real person is tweeting about a given topic, such as a product or a person. The OSoMe project also provides an application program interface, or API, to help other researchers expand upon the tools, or create "mash-ups" that combine its powers with other software or data sources. 2016-07-02 19:37 feeds.sciencedaily

38 Internet of things: Closing security gaps in internet-connected households -- ScienceDaily

In the future, many everyday items will be connected to the Internet and, consequently, become targets of attackers. As all devices run different types of software, supplying protection mechanisms that work for all of them poses a significant challenge. This is the objective pursued by the Bochum-based project "Leveraging Binary Analysis to Secure the Internet of Things," or Bastion for short, funded by the European Research Council.

A shared language for all processors

As the software running on a device more often than not remains the manufacturer's corporate secret, researchers at the Chair for System Security at Ruhr-Universität Bochum do not analyse the original source code, but rather the binary code of zeros and ones that they can read directly from a device.

However, different devices are equipped with processors with different complexities: while an Intel processor in a computer understands more than 500 commands, a microcontroller in an electronic key is able to process merely 20 commands. An additional problem is that one and the same instruction, for example "add two numbers," is represented as different sequences of zeros and ones in the binary language of two processor types. This renders an automated analysis of many different devices difficult.

In order to perform processor-independent security analyses, Thorsten Holz's team translates the different binary languages into a so-called intermediate language. The researchers have already successfully implemented this approach for three processor architectures: Intel, ARM and MIPS.
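In miniature, such a translation is a mapping from each processor's instruction encodings onto one shared set of operations, so that every later analysis needs to be written only once, against the intermediate language. A deliberately simplified sketch (the opcode byte values below are made up for illustration):

    # Toy "lifter" from architecture-specific opcodes to a common
    # intermediate representation (IR). Real binary lifting also has to
    # recover operands, control flow and calling conventions.
    IR_ADD, IR_LOAD, IR_STORE = "add", "load", "store"

    LIFTERS = {
        "x86":  {0x01: IR_ADD, 0x8B: IR_LOAD, 0x89: IR_STORE},
        "arm":  {0x0B: IR_ADD, 0x59: IR_LOAD, 0x58: IR_STORE},
        "mips": {0x20: IR_ADD, 0x23: IR_LOAD, 0x2B: IR_STORE},
    }

    def lift(arch, opcodes):
        return [LIFTERS[arch][op] for op in opcodes]

    # "Add two numbers" is a different byte sequence on each processor,
    # but lifts to the same IR operation:
    assert lift("x86", [0x01]) == lift("mips", [0x20]) == [IR_ADD]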

Closing security gaps automatically

The researchers then look for security-critical programming errors on the intermediate-language level. They intend to automatically close the gaps thus detected. This does not yet work for every kind of software. However, the team has already demonstrated that the method is sound in principle: in 2015, the IT experts identified a security gap in Internet Explorer and succeeded in closing it automatically.

The method is expected to be completely processor-independent by the time the project is wrapped up in 2020. Integrating protection mechanisms is supposed to work for many different devices, too.

Helping faster than the manufacturers

"Sometimes, it can take a while until security gaps in a device are noticed and fixed by the manufacturers," says Thorsten Holz. This is where the methods developed by his group can help. They protect users from attacks even if security gaps had not yet been officially closed. 2016-07-02 19:37 feeds.sciencedaily

39 New open source software for high resolution microscopy -- ScienceDaily

Conventional light microscopy can attain only a defined lower resolution limit that is restricted by light diffraction to roughly 1/4 of a micrometre. High resolution fluorescence microscopy makes it possible to obtain images with a resolution markedly below these physical limits. The physicists Stefan Hell, Eric Betzig, and William Moerner were awarded the Nobel Prize in 2014 for developing this important key technology for biomedical research.

Currently, one of the ways in which researchers in this domain are trying to attain a better resolution is by using structured illumination. At present, this is one of the most widespread procedures for representing dynamic processes in living cells. This method achieves a resolution of 100 nanometres with a high frame rate while simultaneously not damaging the specimens during measurement.

Such high resolution fluorescence microscopy is also being applied and further developed in the Biomolecular Photonics Group at Bielefeld's Faculty of Physics. For example, it is being used to study the function of the liver or the ways in which the HI virus spreads.
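That quarter-micrometre figure follows from Abbe's diffraction limit, d = λ / (2·NA): for green light with wavelength λ ≈ 500 nm and an objective with numerical aperture NA ≈ 1.0, the smallest resolvable distance is d ≈ 500 / 2 ≈ 250 nm. Structured illumination roughly doubles the resolution, which is how the 100-nanometre figure quoted above becomes possible.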

However, scientists cannot use the raw images gained with this method straight away. 'The data obtained with the microscopy method require a very laborious mathematical image reconstruction. Only then do the raw data recorded with the microscope result in a high-resolution image,' explains Professor Dr. Thomas Huser, head of the Biomolecular Photonics Group. Because this stage requires a complicated mathematical procedure that has been accessible to only a few researchers up to now, there was previously no open source software solution that was easily available to all researchers. Huser sees this as a major obstacle to the use and further development of the technology. The software developed in Bielefeld is now filling this gap.

Dr. Marcel Müller from the Biomolecular Photonics Group has managed to produce such universally implementable software. 'Researchers throughout the world are working on building new, faster, and more sensitive microscopes for structured illumination, particularly for the two-dimensional representation of living cells. For the necessary post-processing, they no longer need to develop their own complicated solutions but can use our software directly, and, thanks to its open source availability, they can adjust it to fit their problems,' Müller explains. The software is freely available to the global scientific community as an open source solution, and as soon as its availability was announced, numerous researchers, particularly in Europe and Asia, requested and installed it. 'We have already received a lot of positive feedback,' says Marcel Müller. 'That also reflects how necessary this new development has been.' 2016-07-02 19:37 feeds.sciencedaily

40 Tool chain for real-time programming -- ScienceDaily

More and more safety-critical embedded electronic solutions are based on rapid, energy-efficient multi-core processors. "Two of the most important requirements of future applications are increased performance in real time and further reduction of costs without adversely affecting functional safety," says Professor Jürgen Becker of the Institute for Information Processing Technology (ITIV) at KIT, who coordinates ARGO. "For this, multi-core processors have to make available the required performance spectrum at minimum energy consumption in an automated and efficiently programmed manner."

Multi-core systems are characterized by the accommodation of several processor cores on one chip. The cores work in parallel and, hence, reach a higher speed and performance. Programming such heterogeneous multi-core processors is very complex. Moreover, the programs have to be tailored precisely to the target hardware and fulfill the additional real-time requirements. The ARGO EU research project, named after the swift vessel of Greek mythology, is aimed at significantly facilitating programming by automatic parallelization of model-based applications and code generation. So far, a programmer had to adapt his code, i.e. the instructions for the computer, to the hardware architecture, which is associated with a high expenditure and prevents the code from being transferred to other architectures.

"Under ARGO, a new standardizable tool chain for programmers is being developed. Even without precise knowledge of the complex parallel processor hardware, the programmers can control the process of automatic parallelization in accordance with the requirements. This results in a significant improvement of performance and a reduction of costs," Becker says.

In the future, the ARGO tool chain can be used to manage the complexity of parallelization and adaptation to the target hardware in a largely automated manner with a small expenditure. Under the project, real-time-critical applications in the areas of real-time flight dynamics simulation and real-time image processing are studied and evaluated by way of example. 2016-07-02 19:37 feeds.sciencedaily

41 RedEye could let your phone see 24-7: Energy-stingy tech could give wearable computers continuous vision -- ScienceDaily

RedEye, new technology from Rice's Efficient Computing Group that was unveiled today at the International Symposium on Computer Architecture (ISCA 2016) conference in Seoul, South Korea, could provide computers with continuous vision -- a first step toward allowing the devices to see what their owners see and keep track of what they need to remember.

"The concept is to allow our computers to assist us by showing them what we see throughout the day," said group leader Lin Zhong, professor of electrical and computer engineering at Rice and the co-author of a new study about RedEye. "It would be like having a personal assistant who can remember someone you met, where you met them, what they told you and other specific information like prices, dates and times."

Zhong said RedEye is an example of the kind of technology the computing industry is developing for use with wearable, hands-free, always-on devices that are designed to support people in their daily lives. The trend, which is sometimes referred to as "pervasive computing" or "ambient intelligence," centers on technology that can recognize and even anticipate what someone needs and provide it right away.

"The pervasive-computing movement foresees devices that are personal assistants, which help us in big and small ways at almost every moment of our lives," Zhong said. "But a key enabler of this technology is equipping our devices to see what we see and hear what we hear. Smell, taste and touch may come later, but vision and sound will be the initial sensory inputs. " Zhong said the bottleneck for continuous vision is energy consumption because today's best smartphone cameras, though relatively inexpensive, are battery killers, especially when they are processing real-time video.

Zhong and former Rice graduate student Robert LiKamWa began studying the problem in the summer of 2012 when they worked at Microsoft Research's Mobility and Networking Research Group in Redmond, Wash., in collaboration with group director and Microsoft Distinguished Scientist Victor Bahl. LiKamWa said the team measured the energy profiles of commercially available, off-the-shelf image sensors and determined that existing technology would need to be about 100 times more energy-efficient for continuous vision to become commercially viable. This was the motivation behind LiKamWa's doctoral thesis, which pursues software and hardware support for efficient computer vision.

In an award-winning paper a year later, LiKamWa, Zhong, Bahl and colleagues showed they could improve the power consumption of off-the-shelf image sensors tenfold simply through software optimization.

"RedEye grew from that because we still needed another tenfold improvement in energy efficiency, and we knew we would need to redesign both the hardware and software to achieve that," LiKamWa said.

He said the energy bottleneck was the conversion of images from analog to digital format.

"Real-world signals are analog, and converting them to digital signals is expensive in terms of energy," he said. "There's a physical limit to how much energy savings you can achieve for that conversion. We decided a better option might be to analyze the signals while they were still analog. "

The main drawback of processing analog signals -- and the reason digital conversion is the standard first step for most image-processing systems today -- is that analog signals are inherently noisy, LiKamWa said. To make RedEye attractive to device makers, the team needed to demonstrate that it could reliably interpret analog signals.

"We needed to show that we could tell a cat from a dog, for instance, or a table from a chair," he said.

Rice graduate student Yunhui Hou and undergraduates Mia Polansky and Yuan Gao were also members of the team, which decided to attack the problem using a combination of the latest techniques from machine learning, system architecture and circuit design. In the case of machine learning, RedEye uses a technique called a "convolutional neural network," an algorithmic structure inspired by the organization of the animal visual cortex. LiKamWa said Hou brought new ideas related to system architecture and circuit design, based on previous experience working with specialized processors called analog-to-digital converters at Hong Kong University of Science and Technology.

"We bounced ideas off one another regarding architecture and circuit design, and we began to understand the possibilities for doing early processing in order to gather key information in the analog domain," LiKamWa said.

"Conventional systems extract an entire image through the analog-to-digital converter and conduct image processing on the digital file," he said. "If you can shift that processing into the analog domain, then you will have a much smaller data bandwidth that you need to ship through that ADC bottleneck. "

LiKamWa said convolutional neural networks are the state-of-the-art way to perform object recognition, and the combination of these techniques with analog-domain processing presents some unique privacy advantages for RedEye.

"The upshot is that we can recognize objects -- like cats, dogs, keys, phones, computers, faces, etc. -- without actually looking at the image itself," he said. "We're just looking at the analog output from the vision sensor. We have an understanding of what's there without having an actual image. This increases energy efficiency because we can choose to digitize only the images that are worth expending energy to create. It also may help with privacy implications because we can define a set of rules where the system will automatically discard the raw image after it has finished processing. That image would never be recoverable. So, if there are times, places or specific objects a user doesn't want to record -- and doesn't want the system to remember -- we should design mechanisms to ensure that photos of those things are never created in the first place. "

Zhong said research on RedEye is ongoing. He said the team is working on a circuit layout for the RedEye architecture that can be used to test for layout issues, component mismatch, signal crosstalk and other hardware issues. Work is also ongoing to improve performance in low-light environments and other settings with low signal-to-noise ratios, he said. 2016-07-02 19:37 feeds.sciencedaily

New singalong software brings sweet melody to any cacophonous cry -- ScienceDaily

"Many people like singing but they lack the skills to do so," says Minghui Dong, the project leader at A*STAR's Institute for Infocomm Research (I2R). "We want to use our technology to help the average person sing well. "

Speech consists of three key elements: content, prosody and timbre. Content is conveyed using words; prosody, or melody in the case of singing, is expressed through rhythm and pitch; and timbre is the distinctive quality that makes a banjo sound different from a trumpet and one singer's voice different from another's. I2R Speech2Singing works by polishing melody while retaining the original content and timbre of a sound.

Existing technologies that focus on correcting melody try to align off-tune sounds to the closest note on the musical scale or to the exact note in the original score. The former works well for professional singers who may be only slightly out of tune but cannot fix those who are singing drastically off-key or simply reading out loud. The latter is better at correcting discordant tunes but ignores many other aspects of melody such as vibrato and vowel stretching. I2R Speech2Singing uses recordings by professional singers as templates to correct the melody of a singing voice or to convert a speaking voice into a singing one. The software detects the timing of each phonetic sound using speech recognition technology and then stretches or compresses the duration of the signal using voice conversion technology to match the rhythm to that of a professional singer. A speech synthesizer then combines the time-corrected voice with pitch data and background music to produce a beautiful solo.
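A minimal sketch of the rhythm-correction step described above, assuming phoneme boundaries (in samples) have already been detected for both the user's voice and a professional template; the function name and the simple interpolation-based resampling are illustrative, not I2R's implementation.

    import numpy as np

    # Stretch/compress each detected segment of the user's voice so its
    # duration matches the corresponding segment of a template recording.
    def match_rhythm(signal, user_bounds, template_bounds):
        pieces = []
        for (s0, s1), (t0, t1) in zip(zip(user_bounds, user_bounds[1:]),
                                      zip(template_bounds, template_bounds[1:])):
            seg = signal[s0:s1]
            idx = np.linspace(0, len(seg) - 1, num=t1 - t0)
            pieces.append(np.interp(idx, np.arange(len(seg)), seg))
        return np.concatenate(pieces)

    voice = np.sin(np.linspace(0, 200, 16000))               # toy "speech"
    out = match_rhythm(voice, [0, 8000, 16000], [0, 12000, 20000])
    print(len(out))                                           # 20000: template timing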

"When we compared the output with other currently available applications, we realized that our software generated a much better voice quality," says Dr Dong.

Singaporeans were first introduced to the software in 2013 through "Sing for Singapore," part of the official mobile app of National Day Parade 2013. And in 2014, I2R Speech2Singing won the award for best Show & Tell contribution at INTERSPEECH, a major global venue for research on the science and technology of speech communication.

Dr Dong and his team are now developing a solution to quickly add songs into the software so that large-scale song databases can be easily built. 2016-07-02 19:37 feeds.sciencedaily

Detecting hidden malicious ads: Dynamic detection system could protect smartphones from malicious content -- ScienceDaily

"Even reputable apps can lead users to websites hosting malicious content," said Yan Chen, professor of computer science at the Northwestern University McCormick School of Engineering. "No matter what app you use, you are not immune to malicious ads. "

Most people are accustomed to the ads they encounter when interacting with apps on mobile devices. Some pop up between stages in games while others sit quietly in the sidebars. Mostly harmless, ads are a source of income for developers who often offer their apps for free. But as more and more people own smartphones, the number of malicious ads hidden in apps is growing -- tripling in just the past year.

In order to curb attacks from hidden malicious ads, Chen and his team are working to better understand where these ads originate and how they operate. This research has resulted in a dynamic system for Android that detects malicious ads as well as locates and identifies the parties that intentionally or unintentionally allowed them to reach the end user.

Last year, Chen's team used its system to test about one million apps in two months. It found that while the percentage of malicious ads is actually quite small (0.1 percent), the absolute number is still large considering that 2 billion people own smartphones worldwide. Ads that ask the user to download a program are the most dangerous, containing malicious software about 50 percent of the time.

Ad networks could potentially use Chen's system to prevent malicious ads from sneaking into the ad exchange. Ad networks buy space in the app through developers, and then advertisers bid for that space to display their ads. Ad networks use sophisticated algorithms for targeting and inventory management, but there are no tools available to check the safety of each ad.

"It's very hard for the ad networks," Chen said. "They get millions of ads from different sources. Even if they had the resources to check each ad, those ads could change. " The team will present their research, findings, and detection system on Feb. 22, 2016 at the 2016 Network and Distributed System Security Symposium in San Diego, California.

Chen's work culminated from the exploration of the little-studied interface between mobile apps and the Web. Many in-app advertisements take advantage of this interface: when users click on the advertisement within the app, they are led to an outside web page that hosts malicious content. Whether it is an offer to download fake anti-virus software or fake media players, or a claim of free gifts, the content can take many forms to trick the user into downloading software that gathers sensitive information, sends unauthorized and often charged messages, or displays unwanted ads. When Chen's detection software runs, it electronically clicks the ads within apps and follows a chain of links to the final landing page. It then downloads that page's code and completes an analysis to determine whether or not it's malicious. It also uses machine-learning techniques to track the evolving behaviors of malware as it attempts to elude detection.
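The crawling step lends itself to a short sketch (a toy, not Chen's system; the URL, phrase list and heuristics below are assumptions): follow an ad's redirect chain to its landing page and scan the downloaded code for crude signals.

    import requests

    SUSPICIOUS = ("free gift", "antivirus alert", "update your media player")

    def inspect_ad(url):
        resp = requests.get(url, timeout=10, allow_redirects=True)
        chain = [r.url for r in resp.history] + [resp.url]  # the redirect chain
        hits = [s for s in SUSPICIOUS if s in resp.text.lower()]
        return chain, hits

    chain, hits = inspect_ad("http://example.com/ad")  # hypothetical ad URL
    print(len(chain), "hop(s); flagged phrases:", hits)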

Currently, Chen's team is testing ten times more ads with the intention of building a more efficient system. He said their goal is to diagnose and detect malicious ads even faster. As people put more and more private information into their phones, attackers are motivated to pump more malicious ads into the market. Chen wants to give ad networks and users the tools to be ready. "Attackers follow the money," Chen said. "More people are putting their credit card and banking information into their phones for mobile payment options. The smartphone has become a treasure for attackers, so they are investing heavily in compromising them. That means we will see more and more malicious ads and malware." 2016-07-02 19:37 feeds.sciencedaily

Trawling the net to target internet trolls -- ScienceDaily

The software, known as FireAnt (Filter, Identify, Report, and Export Analysis Tool), can speedily download, devour, and discard large collections of online data leaving relevant and important information for further investigation, all at the touch of a button.

Members of the University's Centre for Corpus Approaches to Social Science (CASS) led by Dr Claire Hardaker have produced this cutting-edge tool so that they can pinpoint offenders on busy social networks such as Twitter.

FireAnt was built as part of an international collaboration with corpus linguist and software expert Laurence Anthony, a professor at Waseda University, Japan, and honorary research fellow at CASS.

While initially designed to download and handle data from Twitter, FireAnt can analyse texts from almost any online source, including sites such as Facebook and Google+.

"We have developed a software tool designed to enhance the signal and suppress the noise in large datasets," explains Dr Hardaker.

"It will allow the ordinary user to download Twitter data for their own analyses. Once this is collected, FireAnt then becomes an intelligent filter that discards unwanted messages and leaves behind data that can provide all-important answers. The software, which we offer as a free resource for those interested in undertaking linguistic analysis of online data, uses practical filters such as user- name, location, time, and content.

"The filtered information can then be presented as raw data, a time-series graph, a geographical map, or even a visualization of the network interactions. Users don't need to know any programming to use the tool -- everything can be done at the push of a button. "

FireAnt is designed to reduce potentially millions of messages down to a sample that contains only what the user wants to see, such as every tweet containing the word 'British', sent in the middle of the night, from users whose bio contains the word 'patriotic'.
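As a toy illustration of that kind of filtering (the field names and the night-time threshold are assumptions, not FireAnt's interface), the same example query could be expressed over already-downloaded tweets like this:

    # Toy FireAnt-style filter over already-downloaded tweets.
    tweets = [
        {"text": "Proud to be British", "bio": "patriotic dad", "hour": 2},
        {"text": "Lovely weather today", "bio": "gardener", "hour": 14},
    ]

    sample = [t for t in tweets
              if "british" in t["text"].lower()
              and "patriotic" in t["bio"].lower()
              and t["hour"] < 5]            # "middle of the night"
    print(sample)                            # only the first tweet survives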

Dr Hardaker, a lecturer in forensic corpus linguistics, began an Economic and Social Research Council-funded project researching abusive behaviour on Twitter in December 2013. The project quickly demonstrated that, while tackling anti-social online behaviour is of key importance, sites like Twitter produce data at such high volumes that simply trying to identify relevant messages amongst all the irrelevant ones is a huge challenge in itself. Less than a year into the project, Dr Hardaker and her team were invited to Twitter's London headquarters to present project findings to the Crown Prosecution Service and Twitter itself. The research subsequently influenced Twitter to update its policy on abusive online behaviour.

The interest from the Crown Prosecution Service and the police encouraged Dr Hardaker to work with fellow corpus linguist, Professor Laurence Anthony to turn the research into a tool that could both collect online data, and then filter out the 'noise' from millions of messages, thereby enhancing the useful signals that can lead to the identification of accounts, texts, and behaviours of interest.

Dr Hardaker explained that the Government is trying to understand how social networks are involved in issues ranging from child-grooming and human-trafficking to fraud and radicalization. A key aspect of Dr Hardaker's work is a focus on the process of escalation from online messages that may start out as simply unpleasant or annoying, but that intensify to extreme, illegal behaviours that could even turn into physical, offline violence. In this respect, FireAnt can offer the opportunity to pinpoint high-risk individuals and networks that may go on to be a threat, whether to themselves or others.

Dr Claire Hardaker specialises in research into online aggression, manipulation and deception. She is currently working on projects that involve analysing live online social networks for the escalation of abusive behaviour, and the use of the Internet in transnational crime such as human trafficking and modern slavery.

FireAnt is free to download from: http://www.laurenceanthony.net/software/fireant 2016-07-02 19:37 feeds.sciencedaily

Finding the next new tech material: The computational hunt for the weird and unusual -- ScienceDaily

"It's the weird or unusual structure and behaviors of a material that makes it useful for a technological application," said Ames Laboratory Chief Research Officer Duane Johnson. "So the questions become: How do we find those unusual structures and behaviors? How do we understand exactly how they happen? Better yet, how do we control them so we can use them? "

The answer lies in fully understanding what scientists call solid-to-solid phase transformations, changes of the structure of one solid phase into another under stress, heat, magnetic field, or other fields. School kids learn, for example, that water (liquid phase) transforms when heated to steam (gas phase). But a solid, like a metallic alloy, can take on various structures exhibiting order or disorder depending on changes in temperature and pressure, remain a solid throughout, and display key changes in properties like shape memory, magnetism, or energy conversion.

"Those solid-to-solid transformations are behind a lot of the special features we like and want in materials," explained Johnson, who heads up the project, called Mapping and Manipulating Materials Phase Transformation Pathways. "They are behind things that are already familiar to us, like the expandable stents used in heart surgery and bendable eyeglass frames; but they are also for uses we're still exploring, like energy-harvesting technologies and magnetic cooling. "

The computer codes are advancements and adaptations of new and existing software, with development led by Johnson. One such code, called MECCA (Multiple-scattering Electronic-structure Code for Complex Alloys), is uniquely designed to tackle the complex problem of analyzing and predicting the atomic structural changes and behaviors of solids as they undergo phase transformations, and to reveal why they do what they do so that the transformations can be controlled.

The program will assist and inform other ongoing materials research projects at Ames Laboratory, including ones with experimentalists on the hunt for new magnetic and high-entropy alloys, thermoelectrics, rare-earth magnets, and iron-arsenide superconductors.

"This theoretical method will become a key tool to guide the experimentalists to the compositions most likely to have unique capabilities, and to learn how to manipulate and control them for new applications," Johnson said. 2016-07-02 19:37 feeds.sciencedaily

Machine learning as good as humans' in cancer surveillance, study shows -- ScienceDaily

Every state in the United States requires cancer cases to be reported to statewide cancer registries for disease tracking, identification of at-risk populations, and recognition of unusual trends or clusters. Typically, however, busy health care providers submit cancer reports to equally busy public health departments months into the course of a patient's treatment rather than at the time of initial diagnosis.

This information can be difficult for health officials to interpret, which can further delay health department action when action is needed. The Regenstrief Institute and IU researchers have demonstrated that machine learning can greatly facilitate the process by automatically and quickly extracting crucial meaning from plaintext (also known as free-text) pathology reports and using it for decision-making.

"Towards Better Public Health Reporting Using Existing Off the Shelf Approaches: A Comparison of Alternative Cancer Detection Approaches Using Plaintext Medical Data and Non-dictionary Based Feature Selection" is published in the April 2016 issue of the Journal of Biomedical Informatics .

"We think that its no longer necessary for humans to spend time reviewing text reports to determine if cancer is present or not," said study senior author Shaun Grannis, M. D., M. S., interim director of the Regenstrief Center of Biomedical Informatics. "We have come to the point in time that technology can handle this. A human's time is better spent helping other humans by providing them with better clinical care. "

"A lot of the work that we will be doing in informatics in the next few years will be focused on how we can benefit from machine learning and artificial intelligence. Everything -- physician practices, health care systems, health information exchanges, insurers, as well as public health departments -- are awash in oceans of data. How can we hope to make sense of this deluge of data? Humans can't do it -- but computers can. "

Dr. Grannis, a Regenstrief Institute investigator and an associate professor of family medicine at the IU School of Medicine, is the architect of the Regenstrief syndromic surveillance detector for communicable diseases and led the technical implementation of Indiana's Public Health Emergency Surveillance System -- one of the nation's largest. Studies over the past decade have shown that this system detects outbreaks of communicable diseases seven to nine days earlier and finds four times as many cases as human reporting while providing more complete data.

"What's also interesting is that our efforts show significant potential for use in underserved nations, where a majority of clinical data is collected in the form of unstructured free text," said study first author Suranga N. Kasthurirathne, a doctoral student at School of Informatics and Computing at IUPUI. "Also, in addition to cancer detection, our approach can be adopted for a wide range of other conditions as well. "

The researchers sampled 7,000 free-text pathology reports from over 30 hospitals that participate in the Indiana Health Information Exchange and used open source tools, classification algorithms, and varying feature selection approaches to predict whether a report was positive or negative for cancer. The results indicated that a fully automated review yielded results similar to or better than those of trained human reviewers, saving both time and money.
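A minimal sketch of that kind of free-text classification pipeline, assuming scikit-learn and toy data (the study's actual features, algorithms and corpus are not reproduced here):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Bag-of-words features from free-text reports, then a binary classifier.
    reports = ["sheets of atypical cells consistent with malignancy",
               "benign tissue, no evidence of carcinoma",
               "sheet of malignant cells identified",
               "normal mucosa, negative for tumor"]
    labels = [1, 0, 1, 0]          # 1 = positive for cancer (toy labels)

    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(reports, labels)
    print(model.predict(["sheets of cells present"]))   # expected: [1]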

"Machine learning can now support ideas and concepts that we have been aware of for decades, such as a basic understanding of medical terms," said Dr. Grannis. "We found that artificial intelligence was as least as accurate as humans in identifying cancer cases from free-text clinical data. For example the computer 'learned' that the word 'sheet' or 'sheets' signified cancer as 'sheet' or 'sheets of cells' are used in pathology reports to indicate malignancy.

"This is not an advance in ideas, it's a major infrastructure advance -- we have the technology, we have the data, we have the software from which we saw accurate, rapid review of vast amounts of data without human oversight or supervision. " 2016-07-02 19:37 feeds.sciencedaily

Automatic debugging of software -- ScienceDaily

Computer programs often contain defects, or bugs, that need to be found and repaired. This manual "debugging" usually requires valuable time and resources. To help developers debug more efficiently, automated debugging solutions have been proposed. One approach goes through information available in bug reports. Another goes through information collected by running a set of test cases. Until now, explains David Lo from Singapore Management University's (SMU) School of Information Systems, there has been a "missing link" that prevents these information gathering threads from being combined.

Dr Lo, together with colleagues from SMU, has developed an automated debugging approach called Adaptive Multimodal Bug Localisation (AML). AML gleans debugging hints from both bug reports and test cases, and then performs a statistical analysis to pinpoint program elements that are likely to contain bugs.

"While most past studies only demonstrate the applicability of similar solutions for small programs and 'artificial bugs' [bugs that are intentionally inserted into a program for testing purposes], our approach can automate the debugging process for many real bugs that impact large programs," Dr Lo explains. AML has been successfully evaluated on programs with more than 300,000 lines of code. By automatically identifying buggy code, developers can save time and redirect their debugging effort to designing new software features for clients. Dr Lo and his colleagues are now planning to contact several industry partners to take AML one step closer toward integration as a software development tool.

Dr Lo's future plans involve developing an Internet-scale software analytics solution. This would involve analysing massive amounts of data that passively exist in countless repositories on the Internet in order to transform manual, painstaking and error-prone software engineering tasks into automated activities that can be performed efficiently and reliably. This is done, says Dr Lo, by harvesting the wisdom of the masses -- accumulated through years of effort by thousands of software developers -- hidden in these passive, distributed and diversified data sources. 2016-07-02 19:37 feeds.sciencedaily

Self-learning arm controlled by thought -- ScienceDaily

According to the developers -- Mikhail Grigoriev, Nikita Turushev and Evgeniy Tarakanets, fellows at the Laboratory of Medical Instrument-Making of the Institute of Non-Destructive Testing -- prosthetic limbs have been manufactured for a few decades, but making them functional enough to fully replace a lost body part is still impossible.

"To date, there are quite available traction prostheses. Their motions are carried out by means of traction belts which are superimposed from the repaired arm across the back as loop around of the healthy shoulder. That is the prosthesis performs by motions of a healthy arm. The drawbacks of this type are in need of unnatural body motions to control it," said Nikita Turushev.

The algorithm being developed by the polytechnic researchers will save people from having to wear traction belts. Sensors on the prosthesis will pick up myoelectric signals: the human brain sends signals that make muscles perform the necessary actions. The system will analyze the commands arriving at the healthy part of the arm and "guess" what motion the prosthesis should perform.

"Initially, software will be universal, but we will adapt it to each specific artificial arm. Further, a machine learning algorithm will copy its host wearing the prosthesis: to fix myoelectric signals and choose required motions," says Mikhail Grigoriev.

Now the young scientists are "teaching" the algorithm different signals and their meanings. Initially, they will examine at least 150 people with healthy limbs. Having "memorized" the signals and the meanings that follow from them, the software will reproduce them at the stage of medical trials.

The polytechnic researchers won a grant from the Russian Foundation for Basic Research for the development in 2015. Within two years they are to present a prototype of the prosthesis and the software supporting its operation. 2016-07-02 19:37 feeds.sciencedaily

Zip software can detect the quantum-classical boundary: Compression of experimental data reveals the presence of quantum correlations -- ScienceDaily

"We found a new way to see a difference between the quantum universe and a classical one, using nothing more complex than a compression program," says Dagomir Kaszlikowski, a Principal Investigator at the Centre for Quantum Technologies (CQT) at the National University of Singapore.

Kaszlikowski worked with other researchers from CQT and collaborators at the Jagiellonian University and Adam Mickiewicz University in Poland to show that compression software, applied to experimental data, can reveal when a system crosses the boundary of our classical picture of the Universe into the quantum realm. The work is published in the March issue of New Journal of Physics.

In particular, the technique detects evidence of quantum entanglement between two particles. Entangled particles coordinate their behaviour in ways that cannot be explained by signals sent between them or properties decided in advance. This phenomenon has shown up in many experiments already, but the new approach does without an assumption that is usually made in the measurements.

"It may sound trivial to weaken an assumption, but this one is at the core of how we think about quantum physics," says co-author Christian Kurtsiefer at CQT. The relaxed assumption is that particles measured in an experiment are independent and identically distributed -- or i.i.d.

Experiments are typically performed on pairs of entangled particles, such as pairs of photons. Measure one of the light particles and you get results that seem random. The photon may have a 50:50 chance of having a polarization that points up or down, for example. The entanglement shows up when you measure the other photon of the pair: you'll get a matching result.

A mathematical relation known as Bell's theorem shows that quantum physics allows matching results with greater probability than is possible with classical physics. This is what previous experiments have tested. But the theorem is derived for just one pair of particles, whereas scientists must work out the probabilities statistically, by measuring many pairs. The situations are equivalent only as long as each particle-pair is identical and independent of every other one -- the i.i.d. assumption.

With the new technique, the measurements are carried out the same way but the results are analyzed differently. Instead of converting the results into probabilities, the raw data (in the forms of lists of 1s and 0s) is used directly as input into compression software.

Compression algorithms work by identifying patterns in the data and encoding them in a more efficient way. When applied to data from the experiment, they effectively detect the correlations resulting from quantum entanglement.

In the theoretical part of the work, Kaszlikowski and his collaborators worked out a relation akin to Bell's theorem that's based on the 'normalized compression difference' between subsets of the data. If the universe is classical, this quantity must stay less than zero. Quantum physics, they predicted, would allow it to reach 0.24. The theorists teamed up with Kurtsiefer's experimental group to test the idea.

First the team collected data from measurements on thousands of entangled photons. Then they used an open-source compression algorithm known as the Lempel-Ziv-Markov chain algorithm (used in the popular 7-zip archiver) to calculate the normalized compression differences. They found a value exceeding zero -- 0.0494 ± 0.0076 -- proving their system had crossed the classical-quantum boundary. The value is less than the maximum predicted because the compression does not reach the theoretical limit and the quantum states cannot be generated and detected perfectly.
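A minimal sketch of the compression step, assuming Python's lzma module (the same LZMA family used by 7-zip). The quantity computed below is the classic normalized compression distance, shown only to convey the idea -- the paper's "normalized compression difference" is defined over subsets of the measurement record, not reproduced here.

    import lzma
    import os

    def c(data):                      # compressed size as a complexity proxy
        return len(lzma.compress(data))

    def ncd(x, y):                    # classic normalized compression distance
        return (c(x + y) - min(c(x), c(y))) / max(c(x), c(y))

    a = bytes(i % 2 for i in range(4000))   # two perfectly correlated strings
    b = bytes(i % 2 for i in range(4000))
    r = os.urandom(4000)                    # incompressible noise
    print(ncd(a, b) < ncd(a, r))            # True: correlations compress away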

It's not yet clear whether the new technique will find practical applications, but the researchers see their 'algorithmic' approach to the problem fitting into a bigger picture of how to think about physics. They derived their relation by considering correlations between particles produced by an algorithm fed to two computing machines. "There is a trend to look at physical systems and processes as programs run on a computer made of the constituents of our universe," write the authors. This work presents an "explicit, experimentally testable example." 2016-07-02 19:37 feeds.sciencedaily

No need for supercomputers: Russian scientists suggest a PC to solve complex problems tens of times faster than with massive supercomputers -- ScienceDaily

Senior researchers Vladimir Pomerantcev and Olga Rubtsova, working under the guidance of Professor Vladimir Kukulin (SINP MSU), were able to use an ordinary desktop PC with a GPU to solve complicated integral equations of quantum mechanics previously solvable only with powerful, expensive supercomputers. According to Vladimir Kukulin, the personal computer does the job much faster: in 15 minutes it does work that normally requires two to three days of supercomputer time.

The equations in question were formulated in the '60s by the Russian mathematician Ludwig Faddeev. They describe the scattering of a few quantum particles, i.e., they represent a quantum mechanical analog of the Newtonian theory of three-body systems. As a result, the whole field of quantum mechanics called "physics of few-body systems" appeared soon afterwards. This area is of great interest to scientists engaged in quantum mechanics, nuclear and atomic physics, and the theory of scattering. For several decades after Faddeev's pioneering work, one of their main goals was to learn to solve these complicated equations. However, due to the incredible complexity of the calculations, the case of fully realistic interactions between a system's particles remained out of researchers' reach for a long time, until supercomputers appeared.

The situation changed dramatically after the SINP group decided to use one of the new Nvidia GPUs designed for use in game consoles on their personal computer. According to one of the authors, Vladimir Kukulin, Head of the Laboratory of Nuclear Theory, the processor was not the most expensive, one of those you can buy for $300-500. The main problem in solving the scattering equations of multiple quantum particles was the calculation of the integral kernel -- a huge two-dimensional table consisting of tens or hundreds of thousands of rows and columns, with each element of this huge matrix being the result of extremely complex calculations. But this table turned out to look like a monitor screen with tens of billions of pixels, and with a good GPU it was quite possible to calculate all of them. Using software developed at Nvidia and having written their own programs, the researchers split their calculations across many thousands of streams and were able to solve the problem brilliantly.
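A toy sketch of why the problem maps so well to a GPU (the grid size and the formula below are illustrative stand-ins; the real kernel elements are complex double integrals): every element of the table is independent of every other, like the pixels of an image, so each can in principle be handed to its own thread.

    import numpy as np

    n = 2000                                  # toy grid; real tables are far larger
    p = np.linspace(0.1, 10.0, n)             # toy momentum grid

    # Stand-in for "one expensive integral per element"; on a GPU, each
    # element would be computed by its own thread.
    K = np.exp(-np.abs(p[:, None] - p[None, :])) / (p[:, None] * p[None, :])
    print(K.shape)                            # (2000, 2000) independent entries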

"We reached the speed we couldn't even dream of," Vladimir Kukulin said. "The program computes 260 million of complex double integrals on a desktop computer within three seconds only. No comparison with supercomputers! My colleague from the University of Bochum in Germany (recently deceased, mournfully), whose lab did the same, carried out the calculations by one of the largest supercomputers in Germany with the famous blue gene architecture that is actually very expensive. And what his group is seeking for two or three days, we do in 15 minutes without spending a dime. "

The most amazing thing is that graphics processors of the required quality, and a huge amount of software for them, have existed for ten years already, but no one used them for such calculations, preferring supercomputers. In any case, the physicists surprised their Western counterparts considerably. "This work, in our opinion, opens up completely new ways to analyze nuclear and resonance chemical reactions," says Vladimir Kukulin. "It can also be very useful for solving a large number of computing tasks in plasma physics, electrodynamics, geophysics, medicine and many other areas of science. We want to organize a kind of training course, where researchers from various scientific areas at regional universities that do not have access to supercomputers could learn to do on their PCs the same thing that we do." 2016-07-02 19:37 feeds.sciencedaily

Nations ranked on their vulnerability to cyberattacks: United States ranked 11th safest of 44 nations studied, highlighting critical vulnerabilities -- ScienceDaily

Data-mining experts from the University of Maryland and Virginia Tech recently co-authored a book that ranked the vulnerability of 44 nations to cyberattacks. Lead author V. S. Subrahmanian discussed this research on Wednesday, March 9 at a panel discussion hosted by the Foundation for Defense of Democracies in Washington, D.C.

The United States ranked 11th safest, while several Scandinavian countries (Denmark, Norway and Finland) ranked the safest. China, India, Russia, Saudi Arabia and South Korea ranked among the most vulnerable.

"Our goal was to characterize how vulnerable different countries were, identify their current cybersecurity policies and determine how those policies might need to change in response to this new information," said Subrahmanian, a UMD professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS).

The book's authors conducted a two-year study that analyzed more than 20 billion automatically generated reports, collected from 4 million machines per year worldwide. The researchers based their rankings, in part, on the number of machines attacked in a given country and the number of times each machine was attacked.
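In the spirit of those ranking inputs, a toy per-country aggregation (the report format, field names and figures below are invented for illustration) might look like this:

    from collections import defaultdict

    # Toy per-country tallies of attacked machines and attack events.
    reports = [{"country": "US", "machine": "m1", "attacks": 3},
               {"country": "US", "machine": "m2", "attacks": 1},
               {"country": "DK", "machine": "m3", "attacks": 1}]

    stats = defaultdict(lambda: {"machines": set(), "events": 0})
    for r in reports:
        stats[r["country"]]["machines"].add(r["machine"])
        stats[r["country"]]["events"] += r["attacks"]

    for country, s in stats.items():
        print(country, len(s["machines"]), "machines,", s["events"], "events")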

Machines using Symantec anti-virus software automatically generated these reports, but only when a machine's user opted in to provide the data.

Trojans, followed by viruses and worms, posed the principal threats to machines in the United States. However, misleading software (i.e., fake anti-virus programs and disk cleanup utilities) is far more prevalent in the U.S. compared with other nations that have a similar gross domestic product. These results suggest that U.S. efforts to reduce cyberthreats should focus on education to recognize and avoid misleading software.

In a foreword to the book, Isaac Ben-Israel, chair of the Israeli Space Agency and former head of that nation's National Cyber Bureau, wrote: "People--even experts--often have gross misconceptions about the relative vulnerability [to cyber attack] of certain countries. The authors of this book succeed in empirically refuting many of those wrong beliefs."

The book's findings include economic and educational data gathered by UMD's Center for Digital International Government, for which Subrahmanian serves as director. The researchers integrated all of the data to help shape specific policy recommendations for each of the countries studied, including strategic investments in education, research and public-private partnerships.

Subrahmanian's co-authors on the book are Michael Ovelgönne, a former UMIACS postdoctoral researcher; Tudor Dumitras, an assistant professor of electrical and computer engineering in the Maryland Cybersecurity Center; and B. Aditya Prakash, an assistant professor of computer science at Virginia Tech.

A related research paper on forecasting the spread of malware in 40 countries--containing much of the same data used for the book--was presented at the 9th ACM International Conference on Web Search and Data Mining in February 2016.

Another paper, accepted for publication in the journal ACM Transactions on Intelligent Systems and Technology, looked at the human aspect of cyberattacks--for example, why some people's online behavior makes them more vulnerable to malware that masquerades as legitimate software.

The book, "The Global Cyber Vulnerability Report," V. S. Subrahmanian, Michael Ovelgonne, Tudor Dumitras and B. Aditya Prakash, was published by Springer in December 2015.

The research paper, "Ensemble Models for Data-Driven Prediction of Malware Infections," C. Kang, N. Park, B. A. Prakash, E. Serra, and V. S. Subrahmanian, appears in Proceedings of the 9th ACM International Conf. on Web Science and Data Mining (WSDM 2016), San Francisco, February 2016.

The research paper, "Understanding the Relationship between Human Behavior and Susceptibility to Cyber- Attacks: A Data-Driven Approach," M. Ovelgönne, T. Dumitras, A. Prakash, V. S. Subrahmanian, and B. Wang, was accepted for publication in ACM Transactions on Intelligent Systems & Technology in February 2016. 2016-07-02 19:37 feeds.sciencedaily Total 51 articles. Generated at 2016-07-03 00:01