Announcement

49 articles, 2016-05-03 06:03

1. News. News for the Open Source Professional. (2016-05-03 06:03, 480Bytes, www.linux.com)

2. Bitcoin creator Satoshi Nakamoto revealed to be Australian entrepreneur Craig Wright. For some time, the person who created the cryptocurrency Bitcoin has been known as Satoshi Nakamoto. We know that was nothing more than a pseudonym, and now Australian entrepreneur Craig Wright has revealed that he is the man behind the mask... (2016-05-02 11:20, 2KB, feeds.betanews.com)
3. Microsoft cloud strength highlights third quarter results. REDMOND, Wash. — April 21, 2016 — Microsoft Corp... (2016-05-03 03:14, 8KB, news.microsoft.com)
4. ACM 2015 Technical Awards. Programming book reviews, programming tutorials, programming news, C#, Ruby, Python, C, C++, PHP, computer book reviews, computer history, programming history, joomla, theory, spreadsheets and more. (2016-05-03 04:16, 4KB, www.i-programmer.info)
5. FileHippo News - powered by FeedBurner. If there’s one universal truth about healthcare, it’s that there’s simply not enough access to it, at least not affordable access. So... (2016-05-03 01:15, 21KB, feeds2.feedburner.com)
6. Microsoft Limits Cortana To Bing Search, Edge Browser. Microsoft is restricting Cortana to work only with Bing and Edge, and eliminating the use of third-party browsers and search engines for its digital assistant. (2016-05-03 03:13, 4KB, www.informationweek.com)
7. Google's one-handed keyboard just solved your big Android phone problem. Design tweaks, customization options, and more. (2016-05-02 20:23, 2KB, www.techradar.com)
8. Insurance brokerage is broken. While many factors are driving the tipping point in the online distribution of insurance, the thread that ties it all together is simple: changing... (2016-05-02 20:16, 16KB, feedproxy.google.com)
9. Tesla’s bioweapon mode is a stroke of genius for developing markets. Tesla today shared details of how effective its particulate filters are. Spoiler alert: They are so good, not only do they clean up the air inside the car... (2016-05-02 20:16, 5KB, feedproxy.google.com)
10. Michael Dell reveals new branding scheme for the Dell-EMC conglomeration. Michael Dell today revealed the new names, and yes we are talking multiple names, for the artist formerly known as the Dell-EMC deal. EMC will be... (2016-05-02 20:16, 2KB, feedproxy.google.com)
11. SoundCloud turns on ads and Go premium subs in the UK and Ireland. Now that SoundCloud has inked licensing deals with all the big music rights holders, the startup is wasting no time rolling out subscription services and... (2016-05-02 20:16, 4KB, feedproxy.google.com)

12. High schooler’s 3D printed ‘mini-brain’ bioreactor accelerates Zika research. What are you planning on doing this summer? Probably not designing a revolutionary new bioreactor in which a thousand "mini-brains" can undergo testing... (2016-05-02 20:16, 2KB, feedproxy.google.com)
13. Tastemates helps you find people who like what you like. If you've ever forged a connection with someone because you had the same favorite movie or TV show or book, a new startup called Tastemates has built a social... (2016-05-02 20:16, 3KB, feedproxy.google.com)
14. Shopping Quizzes is a quiz-based recommendation engine for e-commerce sites. A key tenet of e-commerce is the recommendation engine. If implemented correctly, it can be a major sales driver for online retailers. However, most... (2016-05-02 20:16, 2KB, feedproxy.google.com)
15. Shyp rolls out its new packaging pricing model to all customers. Shyp has finished rolling out a new packaging pricing model to all its customers that includes variable shipment pricing based on the packaging that the... (2016-05-02 20:16, 3KB, feedproxy.google.com)
16. Bessemer Venture Partners' Byron Deeter on success in SaaS. Efficient growth has always been rewarded by the market, but as cash becomes more constricted, particularly at the late stage, this efficiency will become... (2016-05-02 20:16, 2KB, feedproxy.google.com)
17. Videorama makes editing mobile video actually fun. Smartphone users shoot a lot of video, but turning those videos into something special still takes a lot of work. A newly launched app called Videorama aims... (2016-05-02 20:16, 3KB, feedproxy.google.com)
18. Venture investments in new manufacturing technologies could reshape American industry. A wave of venture investment into new manufacturing startups looks set to transform American manufacturing. While the foundations for these companies may have... (2016-05-02 20:16, 6KB, feedproxy.google.com)
19. Chegg acquires Imagine Easy Solutions, the company behind EasyBib, BibMe and Citation Machine. The online textbook service Chegg today announced that it has acquired Imagine Easy Solutions for $42 million. Imagine Easy is the company behind online... (2016-05-02 20:16, 3KB, feedproxy.google.com)
20. Capital One: Think Like A Designer, Work Like A Startup. The creation of Capital One Wallet is an example of how a large financial services IT organization can move like a startup and think like a design firm, transforming business expectations in the process. Their work earned them the No. 1 spot in the 2016 InformationWeek Elite 100. (2016-05-02 20:07, 5KB, www.informationweek.com)
21. The Weather Company Brings Together Forecasting And IoT. The Weather Company estimates that weather is perhaps the single largest external factor affecting business performance, to the tune of nearly $1 trillion lost annually in the US alone. Combining weather data with business data can improve decision-making for a wide range of companies. The... (2016-05-02 20:04, 9KB, www.informationweek.com)

22. Horizon Prioritizes Data And Patient Experience. Horizon Blue Cross Blue Shield Of New Jersey implemented a fee-for-value healthcare delivery model that uses new technologies to gather data and improve member experience. More than 80 business processes were created or modified, transforming systems for enrollment, claims processing, billing, customer service... (2016-05-02 20:01, 10KB, www.informationweek.com)
23. Vue.js lead: Our JavaScript framework is faster than React. Version 2.0 of the Web interface dev tool focuses on DOM improvements. (2016-05-02 20:00, 3KB, www.computerworld.com.au)
24. Penn Medicine: Using Data To Save Patient Lives. Penn Signals is a system that uses existing data from electronic health records to perform real-time predictive analysis of heart failure patients. The goal? Penn Medicine wanted to place patients in proper risk groups and assign them to cardiology resources in order to get them the best... (2016-05-02 19:57, 9KB, www.informationweek.com)
25. FedEx Services Eases The Pain Of Customs Clearance. Disparate internal systems and a complex customs environment were slowing down the import/export process for business customers. So FedEx Services launched the Clearance Customer Profile app to help businesses overcome customs clearance hurdles. The company's efforts earned it the No. 5 spot in the 2016 InformationWeek Elite 100. (2016-05-02 19:54, 7KB, www.informationweek.com)
26. NYC tech giants band together to create industry lobbying group. Some of the nation’s largest technology companies are locking arms to create a New York City-based lobbying group, one that AOL CEO Tim Armstrong and venture capitalist Fred Wilson believe is now necessary to better represent the city’s tech community… (2016-05-02 18:45, 2KB, www.techspot.com)
27. The Elite 100: Celebrating The People Who Make IT Happen. To talk about technology transforming business only tells part of the story, though. At the end of the day, it's the people behind the technology that are truly the agents of change. Join us as we celebrate their work in the 2016 InformationWeek Elite 100. (2016-05-02 18:31, 2KB, www.informationweek.com)
28. Industry Spotlight: Build performance tests into code. While many shops have moved to Continuous Integration and Continuous Delivery, they also need Continuous Testing capabilities to achieve a continuous ecosystem. (2016-05-02 17:45, 8KB, sdtimes.com)
29. Intel 'Kaby Lake' Core i7-7700K CPU details leaked in benchmark results. Intel’s processor roadmap has looked vastly different as of late compared to, say, several years ago. Gone is the normalcy, replaced with odd occurrences like Broadwell’s unusually short run before being replaced by Skylake. (2016-05-02 17:45, 3KB, www.techspot.com)
30. Industry Spotlight: Beautiful mobile apps happen when design and development are equals. Kony Visualizer 7.0 aims to democratize the process of building powerful, diverse user experiences for tablets, smartphones and more. (2016-05-02 17:39, 8KB, sdtimes.com)
31. Chipmaker Marvell appoints Richard Hill chairman. May 2 - Chipmaker Marvell Technology Group Ltd has appointed Richard Hill its chairman, as part of an agreement it reached with activist hedge fund Starboard Value LP last week. Marvell had reached an agreement with Starboard to add to its board three independent directors nominated by the... (2016-05-02 17:23, 1KB, www.cnbc.com)

32. Nvidia settles patent dispute with Samsung ahead of ITC ruling. Nvidia and Samsung have agreed to settle all pending intellectual property litigation between the two in U.S. district courts, the U.S. International Trade Commission and the U.S. Patent Office. The move came just hours before the ITC was slated to… (2016-05-02 17:00, 2KB, www.techspot.com)
33. 'Uncharted 4' multiplayer maps and modes will be free, all other paid content can be earned in-game. Uncharted 4: A Thief’s End is set to drop on May 10 exclusively for the PlayStation 4. Like most modern games, it’ll be supported by DLC post-release, although in a twist, developer Naughty Dog revealed on Monday that… (2016-05-02 16:15, 1KB, www.techspot.com)
34. The next Battlefield game will be unveiled this Friday. The Battlefield versus Call of Duty rivalry has existed for many years. This week, the franchises are competing yet again, as both first-person shooters reveal details about their next installments. (2016-05-02 15:30, 2KB, www.techspot.com)
35. Micro Focus announces completion of Serena Software acquisition. Micro Focus completed the acquisition of Serena Software today, and both companies will work together to build DevOps solutions. (2016-05-02 15:07, 1KB, sdtimes.com)
36. Nvidia's 365.10 drivers are optimized for Battleborn, Overwatch, Paragon and Forza 6: Apex. Nvidia has a new set of Game Ready drivers out today. The GeForce Game Ready 365.10 WHQL drivers are said to be optimized for several new and upcoming games including Forza Motorsport 6: Apex. (2016-05-02 14:45, 2KB, www.techspot.com)
37. C#/XAML for HTML5 beta 8 released. CSHTML5 has been released with more than 25 highly requested features from developers. (2016-05-02 14:30, 3KB, sdtimes.com)
38. U.S. uncovers $20 million H-1B fraud scheme. Prison time is possible in visa-for-sale case, say feds. (2016-05-02 14:28, 2KB, www.infoworld.com)
39. Amazon bolsters voice-based platform Alexa with investment in TrackR. The company's investment in TrackR comes through the $100M "Alexa Fund," which supports technologies that broaden Alexa's abilities. (2016-05-02 13:40, 2KB, www.cnbc.com)
40. New service helps small businesses sync and share files. Enterprises of all sizes have become increasingly reliant on file syncing and sharing services. But for smaller companies, business-focused services can be expensive, leaving them reliant on free consumer services that offer limited space and functions. (2016-05-02 13:04, 2KB, feeds.betanews.com)
41. Get 50% off Scrivener for Windows via Deals. Today via Neowin Deals, you can save half off the price of Scrivener for Windows - the award-winning writing app used by many New York Times best-selling authors - but only for a limited time! (2016-05-02 13:04, 2KB, feedproxy.google.com)
42. How to change your MAC address in Windows 10. Every network adapter has a MAC address, a unique value used to identify devices at the physical network layer. Normally this address stays the same forever, which may allow networks to recognize and track you. (2016-05-02 12:32, 2KB, feeds.betanews.com)

43. Australian Parliament considers implementing electronic voting for MPs. After a similar recommendation failed to progress almost 20 years ago, a Lower House committee of Australian Parliament has dusted off a proposal to allow MPs to cast votes using smart cards. (2016-05-02 10:08, 2KB, feedproxy.google.com)
44. Facebook Messenger to gain privacy-enhancing self-destructing messages. With the ongoing debate about privacy and encryption, the rollout of end-to-end encryption to Facebook-owned WhatsApp came as little surprise. Now Facebook Messenger is set to gain a couple of privacy-enhancing features including self-destructing messages. (2016-05-02 10:03, 2KB, feeds.betanews.com)
45. Azure VMs with real GPUs will deliver a massive power boost. N-Series-powered virtual machines should be available this autumn. (2016-05-02 09:10, 3KB, www.techradar.com)
46. Going beyond the code: Things developers should care about. In order to be successful, a developer’s job needs to involve more than just programming. (2016-05-02 09:00, 14KB, sdtimes.com)
47. Bing iOS app update allows for searching via image. Searching for that particular image on the go has become easier for iOS users. Bing's iOS app update allows users to search the web using existing or new images. (2016-05-02 08:12, 1KB, feedproxy.google.com)
48. FBI Hacking Authority Expanded By Supreme Court. With the US Supreme Court's approval, warrants to search and seize digital data will be able to authorize government hacking anywhere. (2016-05-02 08:06, 4KB, www.informationweek.com)
49. Windows Store Weekly: Facebook delivers on its promise for Windows 10. As we inch closer to the release of the Windows 10 Anniversary Update, we can't help but get excited to see Facebook's most popular apps arrive in the Windows Store, and they're not the only ones. (2016-05-02 07:56, 9KB, feedproxy.google.com)

Articles


1 News Brought to you by The Linux Foundation, a non-profit consortium enabling collaboration and innovation through an open source development model. © 2016 The Linux Foundation 2016-05-03 06:03 www.linux.com

2 Bitcoin creator Satoshi Nakamoto revealed to be Australian entrepreneur Craig Wright For some time, the person who created the cryptocurrency Bitcoin has been known as Satoshi Nakamoto. We know that was nothing more than a pseudonym, and now Australian entrepreneur Craig Wright has revealed that he is the man behind the mask. It brings to an end years of speculation about the inventor's real identity, and Wright has been able to provide technical proof to the BBC to back up his claims. The IT and security consultant's home was raided in recent days as part of an investigation by the Australian Tax Office, and documents leaked from the inquiries pointed towards Wright. He has now confirmed his identity. To verify his claims, in a meeting with the BBC, Wright used cryptographic keys known to be linked to Bitcoins mined by Satoshi Nakamoto to sign messages. His identity has also been confirmed by other people from the Bitcoin development team and community. While not everyone has been convinced by the unmasking, Gavin Andresen of the Bitcoin Foundation writes in a blog post that "I believe Craig Steven Wright is the person who invented Bitcoin". There has been endless debate about the true identity of Satoshi Nakamoto, and Wright sought to bring this speculation to an end by revealing himself to the BBC, GQ and the Economist. It is not the first time Wright has been linked with Bitcoin, as Wired and Gizmodo obtained documents late last year which they claimed linked him to the cryptocurrency. But Wright is not seeking to step into the limelight -- far from it. Photo credit: Julia Tsokur / Shutterstock 2016-05-02 11:20 By Mark
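The proof described above rests on a basic property of digital signatures: only the holder of a private key can produce a signature that the matching public key verifies, so signing a fresh message with keys tied to Satoshi's early coins is evidence of controlling those keys. Bitcoin actually uses ECDSA over the secp256k1 curve; purely as an illustration of the principle, here is a toy textbook-RSA sketch with a tiny made-up key (not Bitcoin's scheme, and far too small to be secure):

```python
import hashlib

# Toy RSA key pair. Real keys use primes hundreds of digits long;
# these tiny values exist only to make the math visible.
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (modular inverse)

def sign(message: bytes) -> int:
    """Only the private-key holder can compute this."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, sig: int) -> bool:
    """Anyone with the public key (n, e) can check the signature."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

msg = b"I am the key holder"
sig = sign(msg)
assert verify(msg, sig)   # a tampered message or forged sig would fail this check
```

A verifier needs only the public key, which is why signing with keys publicly associated with early Bitcoin blocks can serve as an identity claim anyone can check.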

3 Microsoft cloud strength highlights third quarter results REDMOND, Wash. — April 21, 2016 — Microsoft Corp. today announced the following results for the quarter ended March 31, 2016: “Organizations using digital technology to transform and drive new growth increasingly choose Microsoft as a partner,” said Satya Nadella, chief executive officer at Microsoft. “As these organizations turn to us, we’re seeing momentum across Microsoft’s cloud services and with Windows 10.” The following table reconciles our financial results reported in accordance with generally accepted accounting principles (“GAAP”) to non-GAAP financial results. Microsoft has provided this non-GAAP financial information to aid investors in better understanding the company’s performance. All growth comparisons relate to the corresponding period in the last fiscal year. During the quarter, Microsoft returned $6.4 billion to shareholders in the form of share repurchases and dividends. This quarter’s income tax expense included a catch-up adjustment to account for an expected increase in the full year effective tax rate primarily due to the changing mix of revenue across geographies, as well as between cloud services and software licensing. As such, the GAAP and non-GAAP tax rates were 25% and 24%, respectively. “Our continued operational and financial discipline drove solid results this quarter,” said Amy Hood, executive vice president and chief financial officer at Microsoft. “We remain focused on investing in our strategic priorities to drive long-term growth.” Revenue in Productivity and Business Processes grew 1% (up 6% in constant currency) to $6.5 billion, with the following business highlights: Revenue in Intelligent Cloud grew 3% (up 8% in constant currency) to $6.1 billion, with the following business highlights: Revenue in More Personal Computing grew 1% (up 3% in constant currency) to $9.5 billion, with the following business highlights: “Digital transformation is the number one priority on our customers’ agenda. 
Companies from large established businesses to emerging start-ups are turning to our cloud solutions to help them move faster and generate new revenue,” said Kevin Turner, chief operating officer at Microsoft. Business Outlook Microsoft will provide forward-looking guidance in connection with this quarterly earnings announcement on its earnings conference call and webcast. Webcast Details Satya Nadella, chief executive officer, Amy Hood, executive vice president and chief financial officer, Frank Brod, chief accounting officer, John Seethoff, deputy general counsel and corporate secretary, and Chris Suh, general manager of Investor Relations, will host a conference call and webcast at 2:30 p.m. Pacific time (5:30 p.m. Eastern time) today to discuss details of the company’s performance for the quarter and certain forward-looking information. The session may be accessed at http://www.microsoft.com/en-us/investor. The webcast will be available for replay through the close of business on April 21, 2017. Adjusted Financial Results and non-GAAP Measures During the third quarter of fiscal year 2016, GAAP revenue, operating income, net income, and earnings per share include the net impact from revenue deferrals. For the third quarter of fiscal year 2015, GAAP operating income, net income, and earnings per share include charges related to integration and restructuring expenses. These items are defined below. In addition to these financial results reported in accordance with GAAP, Microsoft has provided certain non-GAAP financial information to aid investors in better understanding the company’s performance. Presenting these non-GAAP measures gives additional insight into operational performance and helps clarify trends affecting the company’s business. For comparability of reporting, management considers this information in conjunction with GAAP amounts in evaluating business performance. 
These non-GAAP financial measures should not be considered as a substitute for, or superior to, the measures of financial performance prepared in accordance with GAAP. Non-GAAP Definitions Net Impact from Revenue Deferrals. Microsoft recorded a net $1.5 billion revenue deferral during the three months ended March 31, 2016, primarily related to Windows 10. Integration and Restructuring Charges. Integration and restructuring expenses were $190 million during the three months ended March 31, 2015. Integration and restructuring expenses include employee severance expenses and costs associated with the consolidation of facilities and manufacturing operations related to restructuring activities, and systems consolidation and other business integration expenses associated with the acquisition of Nokia’s Devices and Services business. Constant Currency Microsoft presents constant currency information to provide a non-GAAP framework for assessing how our underlying businesses performed excluding the effect of foreign currency rate fluctuations. To present this information, current and comparative prior period non-GAAP results for entities reporting in currencies other than United States dollars are converted into United States dollars using the average exchange rates from the comparative period rather than the actual exchange rates in effect during the respective periods. The non-GAAP financial measures presented below should not be considered as a substitute for, or superior to, the measures of financial performance prepared in accordance with GAAP. All growth comparisons relate to the corresponding period in the last fiscal year. Financial Performance Constant Currency Reconciliation Segment Revenue Constant Currency Reconciliation About Microsoft Microsoft (Nasdaq “MSFT” @microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world and its mission is to empower every person and every organization on the planet to achieve more. 
Forward-Looking Statements Statements in this release that are “forward-looking statements” are based on current expectations and assumptions that are subject to risks and uncertainties. Actual results could differ materially because of factors such as: For more information about risks and uncertainties associated with Microsoft’s business, please refer to the “Management’s Discussion and Analysis of Financial Condition and Results of Operations” and “Risk Factors” sections of Microsoft’s SEC filings, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q, copies of which may be obtained by contacting Microsoft’s Investor Relations department at (800) 285-7772 or at Microsoft’s Investor Relations website at http://www.microsoft.com/en-us/investor . All information in this release is as of April 21, 2016. The company undertakes no duty to update any forward-looking statement to conform the statement to actual results or changes in the company’s expectations. For more information, press only: Rapid Response Team, Waggener Edstrom Worldwide, (503) 443-7070, [email protected] For more information, financial analysts and investors only: Chris Suh, general manager, Investor Relations, (425) 706-4400 Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://www.microsoft.com/news. Web links, telephone numbers, and titles were correct at time of publication, but may since have changed. Shareholder and financial information, as well as today’s 2:30 p.m. Pacific time conference call with investors and analysts, is available at http://www.microsoft.com/en-us/investor . IMPORTANT NOTICE TO USERS (summary only, click here for full text of notice); All information is unaudited unless otherwise noted or accompanied by an audit opinion and is subject to the more comprehensive information contained in our SEC reports and filings. We do not endorse third-party information. 
All information speaks as of the last fiscal quarter or year for which we have filed a Form 10-K or 10-Q, or for historical information the date or period expressly indicated in or with such information. We undertake no duty to update the information. Forward-looking statements are subject to risks and uncertainties described in our Forms 10-Q and 10-K. 2016-05-03 03:14 By Microsoft
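The constant-currency methodology defined in the release above can be illustrated with a small sketch: both the current and prior periods are translated into US dollars at the prior period's average exchange rate, so exchange-rate movements drop out and only underlying business growth remains. The figures below are hypothetical, not Microsoft's:

```python
def constant_currency_growth(prior_local: float, current_local: float,
                             prior_avg_rate: float) -> float:
    """Growth with BOTH periods translated at the prior period's average
    exchange rate; the rate cancels, leaving local-currency growth."""
    prior_usd = prior_local * prior_avg_rate
    current_usd_cc = current_local * prior_avg_rate
    return current_usd_cc / prior_usd - 1

# Hypothetical segment: revenue grows 8% in local currency while the
# currency weakens from 1.20 to 1.05 USD per local unit.
growth_cc = constant_currency_growth(100.0, 108.0, 1.20)
growth_reported = (108.0 * 1.05) / (100.0 * 1.20) - 1  # as-reported USD growth

print(f"constant currency: {growth_cc:+.1%}, as reported: {growth_reported:+.1%}")
```

This is why a segment can show +6% or +8% in constant currency but only +1% or +3% as reported: the difference is purely the currency translation effect.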

4 ACM 2015 Technical Awards The ACM has announced the latest recipients of its four major technical awards, chosen for their contributions in the fields of systems software, cryptography, artificial intelligence, and network coding systems. The Association for Computing Machinery is the world's foremost professional membership organization for computing. Among its annual awards the Turing Award is the best known, and with a prize of $1 million the most lucrative, but this is just the tip of the iceberg. In the case of the technical awards, recipients are selected by their peers for making significant contributions that enable the computing field to solve real-world challenges. The ACM Software System Award carries a prize of $35,000, with financial support from IBM, and is: awarded to an institution or individual(s) recognized for developing a software system that has had a lasting influence, reflected in contributions to concepts, in commercial acceptance, or both. In 2014 the GCC (GNU Compiler Collection) won the ACM SIGPLAN Programming Languages Software Award, and the 2015 Software System Award recognizes Richard Stallman, the well-known head of the Free Software Foundation, for his role in its development. The announcement notes that Stallman has previously (1990) been recognized with the ACM Grace Murray Hopper Award. This is also worth $35,000, funded by Microsoft, and recognizes a single recent major technical or service contribution made by an individual aged 35 or younger. The 2015 award goes to Brent Waters, an associate professor at the University of Texas at Austin: Waters’ new design enables an administrator to create a policy-specific decryption key that will enable decryption of only the underlying files that satisfy the policy. His functional encryption allows an administrator to create private keys that allow a decryptor to learn only a particular function of the encrypted data, thus limiting their view to what they need to know about the data. 
Eric Horvitz, a technical fellow at Microsoft Research, is the 2015 recipient of the ACM - AAAI Allen Newell Award, an award accompanied by a prize of $10,000 made to an individual: for career contributions that have breadth within computer science, or that bridge computer science and other disciplines. Horvitz, who was made an ACM Fellow in 2014, is best known for his pioneering research in developing principles and models of computational intelligence and action. In related work on human-computer collaboration, he has developed methods that blend human and machine intelligence in problem solving, using models of human goals, competencies, and cognition. He is currently working on the 100-year effort to study the effects of artificial intelligence on every aspect of how people work, live, and play; see The Effects Of AI - Stanford 100 Year Study. The ACM Paris Kanellakis Theory and Practice Award honors specific theoretical accomplishments that have had a significant and demonstrable effect on the practice of computing. Its $10,000 prize is partly endowed by contributions from the Kanellakis family, and the 2015 recipient is Michael Luby. Luby, who also became an ACM Fellow in 2015, is Qualcomm Technologies Vice President of Technology. His theoretical contributions to coding theory include Tornado codes, Fountain codes and LT codes, which have led to major advances in the reliable transmission and recoverability of data across mobile, broadcast and satellite channels. These four recipients will be formally honored at the ACM Awards Banquet on June 11 in San Francisco. At the same time the 2015 Turing Award will be presented to cryptography pioneers Whitfield Diffie and Martin Hellman, as announced in early March. The event will also see the presentation of the 2015 ACM-Infosys Foundation Award in the Computing Sciences to Stefan Savage. Savage, Professor in the Systems and Networking Group at UC San Diego's Jacobs School of Engineering, receives this award. 2016-05-03 04:16 Editor
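The fountain-code idea behind Luby's coding-theory work can be sketched in a few lines: each transmitted packet is the XOR of some subset of the source blocks, and a receiver "peels" packets that cover exactly one still-unknown block until everything is recovered. This is only a toy illustration with hand-picked subsets; real LT codes draw each packet's subset randomly, with degrees sampled from the robust soliton distribution.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks, subsets):
    """Each packet carries the XOR of the source blocks named in one subset."""
    packets = []
    for idxs in subsets:
        payload = bytes(len(blocks[0]))
        for i in idxs:
            payload = xor(payload, blocks[i])
        packets.append((idxs, payload))
    return packets

def decode(packets, n_blocks):
    """Peeling decoder: find a packet with exactly one unknown block,
    XOR out the already-recovered blocks, and recover the unknown one."""
    recovered = {}
    progress = True
    while progress and len(recovered) < n_blocks:
        progress = False
        for idxs, payload in packets:
            unknown = [i for i in idxs if i not in recovered]
            if len(unknown) == 1:
                data = payload
                for i in idxs:
                    if i in recovered:
                        data = xor(data, recovered[i])
                recovered[unknown[0]] = data
                progress = True
    return [recovered.get(i) for i in range(n_blocks)]

# Four equal-sized source blocks, four hand-picked packets.
blocks = [b"spam", b"eggs", b"fooo", b"barr"]
packets = encode(blocks, [{0}, {0, 1}, {1, 2}, {2, 3}])
assert decode(packets, len(blocks)) == blocks
```

Because the code is rateless, a sender can keep generating fresh packets until enough arrive, and a receiver needs only slightly more packets than source blocks, regardless of which ones were lost, which is what makes the scheme attractive for lossy broadcast and satellite channels.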

5 FileHippo News - powered by FeedBurner If there’s one universal truth about healthcare, it’s that there’s simply not enough access to it, at least not affordable access. So it’s no wonder that services like the video chat-based Doctor On Demand have cropped up, as well as hundreds of apps aimed at helping patients with every type of condition, both in their physical health and mental health. But as with every kind of app on the market, there are some great ones and there are some real duds. It’s not really a major issue if your knock-off version of Candy Crush doesn’t load properly or has a few bugs, but if you’re relying on a diabetes monitoring app and it crashes, that could be a serious problem. Apple is taking steps to correct that. While the tech giant isn’t getting into the world of medical monitoring (yet), they’ve created a toolset for app developers who are working to create genuinely useful medical apps. Apple’s CareKit helps developers reach a certain standard of service before unleashing their apps on people who are already resorting to using an app as part of their medical care. This new level of app support has already been put to work, and the App Store has rolled out several new medical apps for monitoring diabetes, pregnancy and infant care, depression, and more. One issue that has made Apple mobile devices perfect for the medical care connectivity market is their level of encryption. Medical information is so highly sensitive as to be protected by law, and the level of encryption allows for what is possibly the most secured method of sending and storing personal medical histories. It’s certainly safer than any paper file stored in a doctor’s office, and given the rash of ransomware attacks hitting hospitals and medical centers, it’s probably as good as or better than any desktop-based network. 
In what is becoming another example of Apple’s tech commitment, the company is currently not monetizing CareKit, and instead has provided this tool set to developers at no cost. The tools themselves had already been developed for other applications involving sensitive health data, so making them available to app creators wasn’t much of a burdensome expense. The post Sick? Apple’s Got An App For That. appeared first on FileHippo News. In one of the most high-profile “make the government look inept” cases in recent history, the FBI managed to break into a suspect’s iPhone. Correction: the FBI managed to scrape together enough money to pay a foreign firm to break into the phone. Now, after spending a reported $1.4 million (plus legal fees leading up to it) to get into that phone, the FBI has stated it will not reveal how it was done, in order to keep Apple from securing whatever vulnerability made it possible. Apart from the threat to personal liberty and the monumental waste of taxpayer dollars (since the phone has reportedly not contained any useful information), the FBI has openly stated that the unlock for this particular model and operating system is not expected to work on other devices, or even other Apple devices. For its part, the tech company has fielded multiple requests to break into iPhones for law enforcement, and continues to refuse to create a single “click here to infiltrate” backdoor method for law enforcement to use whenever they see fit. After rejecting the sentiment that the agency should get into the hacking business, the FBI is calling for greater cooperation between privately-held companies and law enforcement. That’s a very tricky request, considering the US Supreme Court’s ruling that mobile devices like smartphones are essentially the same as your house, at least as far as privacy and search and seizure laws apply. Law enforcement is not allowed to root through a suspect’s phone without a warrant, for example. 
Many users’ phones contain more incriminating information–like letters, photos, GPS tracking, and text messages–than a 21st century homeowner might keep on his property. And no matter how noble the cause or how dire the circumstances, the government cannot violate a suspect’s privacy and force him to incriminate himself. By that same token, a privately-held company has yet to comply with a court order to create a whole new mechanism by which to invade someone’s phone. The post FBI Will Not Reveal How It Got Into iPhone appeared first on FileHippo News. Yes, that’s right, the lawsuits have started, and as a result, there’s probably still quite a bit of life left in the story that dominated news headlines last year. The ‘victims’ are ostensibly suing Ashley Madison for having failed to keep their promise to remove all trace of their private and personal information even after they had paid extra for the “full delete removal” service. As was reported at the time, Ashley Madison didn’t actually follow through with their promise to fully delete information about its customers, and as a result when their databases were stolen and the information decrypted, most, if not all, of their details were still held by the dating company. This information still included details such as credit card information and address details of users. The pending lawsuits also cite the fact that Ashley Madison used made-up female accounts in order to give men the false impression that there were more women on the site than there actually were, according to the judge's ruling. 
According to court papers filed, the plaintiffs in the suit, filed in Missouri of all places, wished to remain anonymous in order “to reduce the risk of potentially catastrophic personal and professional consequences that could befall them and their families.” The irony was not lost on the judge, either: Judge John Ross wrote in his ruling that “the personal and financial information plaintiffs seek to protect has already been released on the internet and made available to the public.” While it is not considered unusual for plaintiffs in US sexual abuse cases to be granted anonymity, Ashley Madison’s parent company successfully argued that since no actual preference for sexual activity had been released, the plaintiffs had no right to claim this privileged status. To date, eight other plaintiffs suing Ashley Madison have done so using their real names. The post Ashley Madison Plaintiffs Have To Use Real Names, Judge Rules appeared first on FileHippo News. In a nice twist to standard tech company announcements, however, the news was unveiled by Apple’s digital assistant, Siri. Whether or not Apple intentionally gave Siri the news to disseminate around the world, the news was given, and within hours, rumors were circulating as to just what would be happening at this year’s Apple Worldwide Developers Conference (WWDC). The conference will be held in San Francisco from June 13 to June 17. Siri, apparently, can’t wait. Of course, traditionally, the WWDC is where Apple normally unveils the latest updates to its hardware and software across the range. Unsurprisingly, this year’s WWDC looks to be more of the same, and why not? It works for Apple and generates high levels of hype for the company on a yearly basis. 
According to the internet of rumor control, by the time June 18 rolls around, we can all expect to know an awful lot more about the latest versions of iOS, tvOS, watchOS, and MacOS (currently codenamed Fuji), and if hardware fans are lucky, also some details on new Mac laptops. One of the bigger rumors circulating certainly seems to have more to do with hope than with any hard and fast facts. One of the supposed new features of iOS 10 is that users will finally be given the option to hide all those unwanted apps that come pre-installed as standard on iOS. It might just be that we can all permanently lock away that pesky Compass app that hardly anyone uses after the customary “look, north is in that direction!” What can hopefully be confirmed as something more than wishful thinking, however, is the Siri-controlled iCloud Voicemail. As well as organising your day, Siri will, with a little bit of luck, also deal with all your unwanted calls. Of course, despite all the internet whispering, no one will really know what Apple will show off to the world until the WWDC actually gets under way. But good luck if you think you can just show up if you happen to be in San Francisco at a loose end that week. Tickets retail at over $1,500, and back in 2012 they were all gone within a few hours of going on sale. The post Apple To Reveal New OS Info On June 13th appeared first on FileHippo News. It’s been a pretty rough patch for Microsoft as of late, at least where consumer happiness is concerned. Following on the heels of several Windows 10 fiascos, the customer service department is surely swamped with complaints about the tech giant’s latest move: dropping the amount of free cloud storage its users get in OneDrive. Admittedly, this isn’t exactly breaking news. The company has been quietly warning users since last year that capacity changes were coming. 
What originally started as 15GB free, with options to pay for a 100GB or 200GB plan, is now dropping to just 5GB in late July, with the option to pay for a 50GB plan. The paid plan comes in at $1.99 per month, which is far more than what a competitor that rhymes with Sapple charges its customers for the same amount of storage. Even more upsetting was the end of unlimited free storage for Office 365 customers, who will now be limited to a set amount despite their paid status. In even more upsetting news for consumers, OneDrive will no longer offer the 15GB camera roll storage. Better back up those pictures while you can. Speaking of backing up your content, there’s also a deadline for you to move your files out of OneDrive if you’ve exceeded the 5GB; according to Microsoft, “You will be notified and will have 90 days’ notice to take action before your account will become read-only. If you are over quota after the 90 days, you will still have access to your files for 9 months. You can view and download them. However, you will not be able to add new content. If after 9 months and you are still over quota, your account will be locked. That means that you will not be able to access the content in your OneDrive until you take action. If after 1 year you fail to take action, your content may be deleted.” One of the biggest questions in this shift is why, and even though Microsoft addresses that very question on its OneDrive blog, the answer is anything but clear: “Our difficult business decision to change the storage limits came with careful analysis and thought. However, these types of decisions are never easy. We want to focus on delivering high-value productivity and collaboration experiences that benefit the majority of our users and these changes were necessary to ensure that we can continue to offer a collaborative, connected, and intelligent service.” Either way, customers have been warned. 
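Microsoft’s quoted timeline is easier to follow laid out as a sequence of account states. The sketch below is purely illustrative, not an official tool: the state names are ours, months are approximated as 30 days, and the assumption that deletion follows roughly a year after the account is locked is one reading of an ambiguous notice.

```python
def onedrive_account_state(days_over_quota: int) -> str:
    """Illustrative reading of Microsoft's OneDrive quota timeline.

    The day-count boundaries are approximations of the quoted notice,
    not figures published by Microsoft.
    """
    NOTICE = 90                    # 90 days' notice before restrictions
    READ_ONLY = NOTICE + 9 * 30    # ~9 months of view/download-only access
    LOCKED = READ_ONLY + 365       # assumed: locked ~1 year before deletion

    if days_over_quota <= NOTICE:
        return "notice"                  # warned, but full access
    if days_over_quota <= READ_ONLY:
        return "read-only"               # can view/download, cannot add
    if days_over_quota <= LOCKED:
        return "locked"                  # no access until action is taken
    return "content may be deleted"
```

On this reading, an account that has been over quota for about 200 days would be in the read-only stage: files can still be viewed and downloaded, but nothing new can be added.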
Move your content to a different cloud storage provider, or risk having it locked. The post Microsoft OneDrive Storage Gets Smaller appeared first on FileHippo News. FileHippo.com was started by two software enthusiasts in 2004. We’re really proud of what we have achieved over the past 12 years. Our aim was to create a site that was trusted by software enthusiasts and that had a reputation for offering software updates, expert reviews, and news of the best freeware and shareware apps on the market for Windows and Mac OS. We’d like to thank all our users for their continuing support! FileHippo continues to hand-pick all software listed on the site so that users can be sure they are downloading the best programs available! In order to keep providing safe and clean downloads, we have teamed up with Avira to give each program a safety guarantee so that our users can download with confidence. Of course, we couldn’t have come this far without our valued users, so we want to say a huge thank you to everyone for their support – here’s to the next billion downloads! We’re working on some exciting new features, so stay tuned! The post FileHippo Smashes The 3,000,000,000 Download Barrier appeared first on FileHippo News. In typical data breaches, hackers work their way in, get what they’re after, and then head for the internet hills, occasionally stopping to brag about their exploits on some dark web forum after they’re done. But in last month’s data breach that resulted in hackers making off with $81 million from the Bangladesh Bank, more of the cyber effort involved covering their tracks than getting what they were after. A report from UK security firm BAE found that SWIFT (Society for Worldwide Interbank Financial Telecommunication), the cooperative of thousands of worldwide banks, was compromised by hackers, who then used the installed malware to cover up the record of transactions and prevent notice. 
The transactions that resulted in moving the millions of dollars–far less than the $951 million they were reportedly after–were first taken from Bangladesh Bank’s account at the Federal Reserve in New York, then routed to accounts in the Philippines. There are some highly interesting details in this particular hacking event, and together they form the plot of a really outstanding cyberthriller novel. First, the security on SWIFT was reportedly subpar, with some allegations claiming that banks relied on $10 switches to connect to the network and that the system didn’t even have a firewall in place. Next, the hackers apparently infiltrated the full network rather than first harvesting usernames or passwords, indicating a sophisticated form of attack. But don’t get too excited about their abilities; as TheNextWeb has reported, the $800 million or so that the hackers didn’t make off with was saved by a spelling error of theirs that triggered an alert on the system. As for the outcome: most of the $81 million remains unaccounted for, and Bangladeshi law enforcement has not identified the malware the hackers used to cover up their illicit activity. BAE will release its report on its findings today. The post Hackers Breach SWIFT Banking Software, Steal $81M appeared first on FileHippo News. We’ve all been there: “I gotta look up that guy’s phone number from my Facebook messages for this report.” Three hours and eighty-two cat videos later–along with a heated argument over the new trailer in the comments section of your brother-in-law’s post, one that led to soapboxing on the unnecessary aggression towards women who cosplay at conventions–you don’t have the guy’s phone number and your report isn’t finished. The above scenario is part of the blessing and the curse of social media. Despite dire warnings that technology is turning us all into hermits, the very addictive nature of social media proves the opposite is true. 
We have a desperate need to look at pictures of our college roommate’s lunch, say something supportive on our neighbor’s weight-loss update, and keep tabs on the Earth Day celebration going on at the local elementary school (even if we don’t actually have children). But a new Chrome plugin has launched, one that’s designed to help you use social media wisely while avoiding the pitfall of getting pulled into an endless loop of wasted time. Called Focusbook, it works in a way that’s reminiscent of ReThink, an award-winning social media filtering app aimed at preventing cyberbullying. While ReThink made users pause and reconsider their words based on keyword triggering, Focusbook lets you set up individualized parameters for why you’re on Facebook in the first place, then gently (and sometimes not-so-gently) reminds you that you’ve got work to do. Unfortunately, as TechCrunch’s Jon Russell points out, “Focusbook won’t cure everyone — there’s still Facebook’s mobile apps and their attention-grabbing notifications — but it might make you aware just how much time the social network eats up.” Of course, you can ignore Focusbook’s warning system just as easily as you can ignore the suspicious looks from the boss, too. Until you develop the ingrained sense of self-control that keeps you from clicking on every headline, plugins and newsfeed blockers are here to help. The post Chrome Plugin, Focusbook, Helps You Avoid Social Media Drain appeared first on FileHippo News. Judging only by the picture that accompanies the official Trello blog post, you could be mistaken for thinking the real news is that the company’s cartoon husky, Taco, has learned to pilot a plane in high orbit. This, however, is far from the truth. No, the real story here is that the people at Trello have effectively crowdsourced the popular browser-based project management tool into a truly international web app. 
Trello is a project management tool that allows people working on projects to do so in a collaborative fashion and to communicate, share, and upload files that anyone with access to a specific work board can use. Frequent users of Trello will, of course, already be more than familiar with Taco. For those who aren’t, Taco is effectively the Trello equivalent of the old Help paper clip, but crucially, one that actually does a pretty good job. In its ongoing effort to increase Trello’s user base and usability across language barriers, over 500 ordinary, everyday users of Trello recently volunteered their time to crowdsource the new languages. Over a four-month period, 47,000 words were translated into 16 new languages. This means that in less than half a year, Trello has gone from supporting four languages beyond English to 20. And that’s not bad going. The result of all that work and effort on behalf of people who gave up their time is that Trello, which at the start of 2016 supported only Portuguese, French, Spanish, German, and, well, obviously, English, can now help people plan and work on their projects in the following languages: Finnish, Norwegian, Swedish, Russian, Polish, Hungarian, Ukrainian, Czech, Dutch, Italian, Turkish, Thai, Japanese, Traditional Chinese, Simplified Chinese, and Vietnamese. But it’s not as if Trello automatically knows whether you are from Norway or Japan. New and current users of Trello will still have to change their language settings themselves to get the most out of the project tool. Fortunately, it’s rather easy to do, and the people at Trello have even produced a simple guide on how to do it. But there are some caveats users in the new languages should be aware of. As the Trello blog itself says: “Since real, live people did 100% of the translations, you might see an error or two. Hey, everyone makes mistakes. 
Please let us know if you see something that doesn’t look quite right by emailing us at [email protected] .” The post Trello Crowdsources Its Way To 20 Languages appeared first on FileHippo News. A federal court judge has sentenced the SpyEye malware creators to a combined 24 and a half years behind bars for their crimes. The pair of cybercriminals were found guilty of developing the software and distributing the resulting malware to anyone willing to pay for it. They were also found guilty of using the malware they designed to steal online funds themselves. The SpyEye software has widely been held responsible for huge losses for online financial service businesses across the globe. The news was announced by the US Department of Justice on Wednesday. 27-year-old Russian Aleksandr Panin was sentenced to spend nine and a half years in jail, while his hacker-in-arms, Algerian national Hamza Bendelladj, also 27, is set to serve 15 years for his part in the crimes. SpyEye was effectively nothing more than a sophisticated Trojan whose sole aim was to gain access to and steal sensitive financial information from unsuspecting users, such as bank account numbers and credit card information, as well as duping people into revealing PINs and logon details. Infected machines could be used to distribute further malware and forward on mountains of spam mail. At its height, SpyEye was reported as having infected more than 50 million computers worldwide. 
The DOJ stated that the damage inflicted by the fallout from SpyEye was estimated at around $1 billion in financial costs to banks globally. US Attorney John Horn said: “It is difficult to overstate the significance of this case, not only in terms of bringing two prolific computer hackers to justice, but also in disrupting and preventing immeasurable financial losses to individuals and the financial industry around the world.” The FBI was aided in its investigation by some rather notable tech companies, including Microsoft and Trend Micro, as well as by several international police forces. The post International SpyEye Bank Hackers Receive Lengthy Jail Sentences appeared first on FileHippo News . 2016-05-03 01:15 feeds2.feedburner

6 Microsoft Limits Cortana To Bing Search, Edge Browser Cortana can no longer be used to view results in third-party browsers or search engines in Windows 10. As part of an OS update, Microsoft has restricted its personal digital assistant to work with only Bing and its Edge browser. The Cortana search box, located in the bottom left corner of the Windows 10 desktop, is a key portal for Windows 10 users to access documents, apps, settings, and Web search results. Now those Web searches will be limited to Bing and Microsoft Edge. Microsoft says the restriction is intended to protect the Cortana user experience, which it created to rely on its own browser and search engine. Previously, workarounds allowed users to view search results in Google, Chrome, Firefox, and other third-party search engines and browsers. “Unfortunately, as Windows 10 has grown in adoption and usage, we have seen some software programs circumvent the design of Windows 10 and redirect you to search providers that were not designed to work with Cortana,” wrote Ryan Gavin, general manager of Search and Cortana, on the Windows Blog. “The result is a compromised experience that is less reliable and predictable,” he explained. “The continuity of these types of task completion scenarios is disrupted if Cortana can’t depend on Bing as the search provider and Microsoft Edge as the browser.” As a result, the team is restricting Cortana to work only with Edge and Bing, effective April 28. Windows 10 users can still opt to change their default browser to Chrome or Firefox, or set their default search engine to Google. Microsoft rolled out several features for Cortana at the same time it announced these restrictions, which prevent the new capabilities from working without Edge or Bing. One of these additions is the ability to troubleshoot tech issues via Cortana. 
You can search “Bluetooth not working” in the Cortana box, for example, and a Bing search result will pop up with a video answer to help. If you need to do some online shopping, you can start by conducting a Bing image search. Scroll through the results, right-click your choice, and select “Ask Cortana” to learn more about the chosen product. In a few situations, using Cortana may save you some money. If you’re looking for a specific store, like Best Buy, type the name into the Cortana box and select the top search result -- in this case, www.bestbuy.com -- and the digital assistant will provide a few coupons. While Microsoft reports it’s implementing these restrictions to improve the user experience, there are likely a few more reasons why Cortana will only work with Edge and Bing. Microsoft made Windows 10 available as a free upgrade to Windows users at the time of its official launch in July 2015. The idea was to put Windows into the hands of as many users as possible. However, Microsoft is still looking for ways to make money off the OS beyond the fees it charges OEMs to put it on their devices. Windows 10 and Bing are tightly woven together, and increasing the user base for Bing will generate more ad revenue for Microsoft. Windows 10 has already boosted search advertising for Microsoft. In its most recent earnings call, the company reported search advertising revenue grew 18% in constant currency during its third fiscal quarter of 2016. Growth was driven by an increase in Windows 10 use. 2016-05-03 03:13 Kelly Sheridan

7 Google's one-handed keyboard just solved your big Android phone problem Google's Android keyboard app is getting a massive update today. In addition to a one-handed typing mode, the Google Keyboard app will allow users to set the height of their keyboard, and it features a ton of tweaks to its interface. The biggest new feature of the update is the addition of a one-handed mode. Users can activate it with a long-press on the comma key, which brings up buttons for settings and one-handed mode. You can move the keyboard to the left or right, depending on which hand you want to use. There's also a toggle to go back to full screen. Microsoft offers its own one-handed typing mode with its Word Flow keyboard, which is available now for the iPhone and may come to Android in the future. Gesture typing is also improved, putting dynamic suggestions at the top of the keyboard instead of floating around with your thumb. This makes it easier to see and accept suggestions. There's also a new gesture that lets you delete entire words with a left-swipe from the delete key. SwiftKey has a similar feature but requires turning off gesture typing in order for it to work. Google's solution allows both gesture typing and word deletion. Other minor updates include the ability to show key borders as a visual aid to help users home in on the key they want to hit. The old Holo keyboard themes are gone, so you can only choose between a light and a dark Material (flat) theme. The Google Keyboard update is rolling out in stages at the moment, so you may not see the update on your phone yet. Soon you'll be juggling your phone and latte like a boss. 2016-05-02 20:23 Lewis Leong

8 Insurance brokerage is broken I’ve worked on a number of projects over the past year as an MBA Associate with the General Catalyst team. One of my principal areas of focus has been a deep dive into the emerging field of insurance technology startups, piggybacking on GC’s early work in the space with investments like Oscar Health, TrueMotion, Gusto, Super, Livongo and Freebird. As we outlined in our last post, working closely with the GC team across both coasts, we developed a framework for thinking about the state of the sector today and the most exciting opportunities going forward: online aggregators and brokers, new direct-to-consumer brands and new models. In even just the last few weeks, the excitement and momentum around the insurance tech sector seem to have continued to accelerate. Coming off huge growth in 2015, early-stage insurance tech deal activity is expected to set new records in 2016, with 24 seed or Series A deals just two months into the year. There’s now an insurance technology industry conference, OnRamp, bringing together relevant stakeholders. We are seeing the most innovation and activity in online aggregators and brokers, and thought it timely to share some thoughts on the space. Given that insurance is fundamentally about the trading of information (consumer data in exchange for a policy), the most logical and near-term place for technology disruption should be distribution. In any industry where we make big purchase decisions that involve the trade of information — buying a home, a car, a plane ticket — the Internet has become the key facilitator. Yet, across product verticals, in-person agents have remained the primary insurance distribution channel, with dominant penetration: commercial (~99 percent), life (99 percent), homeowner’s (94 percent) and auto (73 percent). 
Direct versus agent penetration across verticals
On many levels, it makes sense that agents have remained so entrenched in the market — they can be an incredible source of value creation. Insurance policies are complex financial instruments. Consumers want to purchase products through a trusted channel and be coached through complicated decisions. Insurance carriers want to validate the customers to whom they are selling, and so have limited the degree to which customers can complete transactions online. And as time has passed, agents have become more and more ingrained as the key sales channel, making it easier for carriers to “kick the can” down the road, delaying the transition online rather than disrupting their entire distribution model. One thing that is often overlooked is just how much money these agents make. In 2014, insurance agents made $325 billion in commission revenue ($300 billion in commission spend by insurance carriers plus $25 billion in fee spend by consumers). And because of their critical role as an intermediary in closing policy sales, agents make significant operating profit margins (~12 percent), which increase with the complexity of the product.
Insurance agent relative profit margins by category Source: “Agents of the Future — Evolution of P&C Insurance Distribution.” McKinsey, 2015.
But as with every other industry, we are beginning to see the Internet enable a new world of comparison shopping, online transactions and streamlined customer service for the insurance consumer. A handful of factors are driving this evolution:
Growth of e-commerce and changing consumer demographics. Online comparison shopping is the new norm for everything from diapers to plane tickets to real estate. And with the rise of the millennial buyer, the insurance industry needs to find new ways to sell to audiences that are used to being reached and interacted with very differently.
Improvements to front- and back-end tech. 
Improvements in UI create a better shopping experience, more sophisticated analytics enable carriers to price and fulfill policies online, and smartphones set expectations for mobile-native solutions that remain with the customer everywhere they go.
Aging agent workforce (average age: 59). The agent workforce is on the brink of retirement, and less in touch with consumer demands in a digital world.
Increasing consumer responsibility. With greater employee churn and pressure to cut costs, companies are increasingly shifting the purchase burden to employees. As consumers become more responsible, and individual plans win share over group plans, online aggregators should become more compelling.
Proof points in other markets. Businesses like Alibaba in China and Check24 in Germany ($100 million+ EBITDA, with expansion into insurance, consumer financial and credit products) are evidence that consumer brands can be built to compare and buy these types of utility products.
We saw this transition from in-person agents to online distribution in travel — with Kayak, Orbitz, Expedia and many more — and are seeing the beginning of this evolution in insurance.
Agent decline: travel as an example
As we think about what it will take to succeed as an online insurance broker, we see a few table-stakes requirements for success:
Direct marketing and consumer education: Best-in-class customer acquisition and engagement (content + advice) to build the brand and consumer trust as an intermediary.
Deep customer interactions: The ability to facilitate the full transaction online; sophisticated CRM for personalization, retention and cross-sell; and expansion to mobile native.
Customer service: For lines that are more complex with large insured sets — deep product expertise, dedicated support and claims management (mimicking the best of what offline agents offered).
It’s worth noting that Google recently announced it is eliminating its comparison tool for insurance and financial products. 
As a search product, Google never really veered away from its standard Google Compare UX or invested as a curator of content. Its experience seems to highlight the importance of the consumer engagement and interactions discussed above, as well as the operational complexity of integrating with carriers. Thinking more deeply about the specific categories of players, we see three types of online brokers beginning to emerge, each with its own set of success factors.
Consumer players
The greatest barrier to online distribution has been the carrier’s ability to automate a complex underwriting process online (without paper documentation or in-person interaction to validate customer information). However, new technology adoption and data sets are changing this dynamic. We now have sophisticated telematics and driving data for auto insurance. And in the not-so-distant future, you can imagine a homeowner’s policy with underwriting supported by satellite or drone imagery of the property, as well as risk data by cross street. It makes sense that the simplest products (where tech has already enabled automation of underwriting) have been the first to shift distribution online. We’ve seen a lot of early activity in auto insurance aggregators (Zebra, CoverHound, Goji), where sophisticated predictive modeling enables seamless comparison shopping. While these players are gaining an early foothold, in the long term, we believe the winning hand in online aggregation will be a multi-product player. One way of thinking about this, which came out of internal conversations with GC Managing Director Phil Libin, relates to the difference in purchasing behavior for aspirational versus compulsory products. When you think about opportunities to create a search platform online — the Zillows, Airbnbs, Indeeds — the brands that have succeeded are typically focused on single categories where the item you’re shopping for is something you talk about and are really excited about. 
As consumers, we spend time researching and comparing purchase decisions when they are emotional and say something about our identity. However, given the painful buying process and the commoditized nature of the insurance product, consumers derive little intrinsic joy from shopping for insurance. Instead, the real value proposition will come from creating a single, aggregate destination — to complete the entire buying process and deliver a service experience as quickly and painlessly as possible. The brand value will come from the efficiency of “getting it done” rather than the aspiration of acquiring the product. An online aggregator in insurance that can holistically make the experience better across multiple categories has huge potential. Because some verticals (like auto) are more prepared for the online transition, it makes sense that many players have come out of the gates to nail a single product, with plans to cross-sell over time. We have also seen companies like PolicyGenius tack on other personal lines with low direct penetration (renter’s, pet insurance), where customer acquisition costs and cross-sell opportunities are relatively cheaper. Others, like Cover, are using mobile-native apps to leverage camera and location functionality for better lead capture and less work for the consumer. Key challenges consumer-focused online brokers may face:
Commercial players
There has been a lot of excitement and buzz around commercial insurance brokerage, largely catalyzed by the stratospheric rise of Zenefits. As a company, Zenefits focuses on facilitating benefits and personal lines (health, disability products) and sells into HR. We see another tremendously exciting opportunity in the commercial space, with products like general liability, directors/officers and cybersecurity insurance that are more focused on corporate risk and have a different buyer in the COO, chief compliance officer or CTO function. 
Any business generally needs to purchase upwards of 5-7 different types of insurance to get started. These policies are typically sold over the phone, through PDFed documents, with a similarly clunky claims management process. Compounding the problem, carriers of commercial lines are more regional and fragmented, so corporations often have to herd cats to purchase adequate coverage. A significant part of the opportunity here is to digitize and curate that experience into a streamlined workflow. As with personal lines in the consumer space, a player that can serve as a single throat to choke and a trusted source can be incredibly helpful. We’ve seen a handful of players emerge in this space to provide price comparison and various policy management tools — notably Insureon, emBroker, CoverWallet and Next (which will serve as a Managing General Agent, a model we will discuss in later posts). One particularly attractive feature of commercial insurance is the maintenance revenue. Today, one agent makes the initial sale, capturing the upfront commission. However, after the transaction, the actual policy management remains highly competitive. Other agents actively try to steal this business to become the “agent of record” and earn a fat maintenance margin over the life of the policy. If companies can use technology to build a better way to capture those customers, and the associated maintenance revenue, there is an opportunity to access an annuity revenue stream with a relatively small amount of work. So what we’re looking for in SMB is someone who can build a system to:
We’ve seen a handful of interesting companies in this space, each with creative and slightly different approaches. Some are acquiring SMBs through SEO, recognizing certain milestones that trigger businesses to purchase insurance. Others are giving away software and solutions like claims management as a way to acquire customers and make the relationship sticky. Players with deep hooks in SMB (e.g. 
Zenefits, Intuit, Gusto) through payroll or benefits management as a base for expansion may also be well positioned. Technology to empower existing infrastructure (in-person agents) While strong secular trends suggest the disintermediation of the in-person agent over time, new software solutions may help some maintain staying power. Considering other industries that have made the transition online, many tech-enabled solutions have helped optimize (rather than displace) existing infrastructure (think Compass for real estate brokers). In the more commoditized insurance products, technology may enable brokers to achieve the scale and efficiency necessary to operate profitably. In complex lines that require more sophisticated underwriting and consumer education, we see tech delivering creative ways to differentiate on service. We’ve yet to see specific solutions in this space, but believe there is real potential, and welcome submissions for companies that are considering how technology might empower the in-person broker of the future. One idea we’ve discussed is whether there is a way to build a marketplace for agents, enabling each to promote her/his own business and scalably acquire customers. Historically, independent brokers have struggled with customer acquisition in a fragmented market. With flat commissions and pressure on margins, the need to grow scale and increase to a more regional or even national focus is growing. Building a marketplace model that enables agents to get licensed and reach customers across state lines, with ratings and reviews to differentiate, might prove valuable. Because insurance products are relatively commoditized, the challenge would be in developing a model that has more durable network effects where brokers differentiate outside of price. 
While geography might be the premise for lead sharing in a marketplace model, another potential lead-sharing opportunity in this space could evolve through virtual teams that own the full stack of insurance products. We can imagine a model where teams of specialized agents across different product lines participate in referral sharing, and also leverage shared back-office services for operational efficiency. In this way, agents can achieve scale and amortize customer acquisition costs while still maintaining specialization in more complex lines.

Finally, the emergence of more bespoke, niche insurance products is creating opportunities to arm agents with analytics to more effectively advise clients and support the underwriting process. Given the complexity of cyber insurance today, large carriers like AIG ultimately do the underwriting while companies like Accenture perform assessment and prep. A category of software has emerged for general cyber risk assessment, understanding the shared vulnerabilities of companies and creating a risk score. We can see agents armed with software solutions from cybersecurity companies that deliver highly customized assessment and underwriting. A similar example of this model is a company called Syndeste, which provides analytics tools for brokers to assess and price flood insurance risk.

While many factors are driving the tipping point in the online distribution of insurance, the thread that ties it all together is actually a simple one: changing demographics. The millennial generation has tremendous buying power, and will soon become the industry’s primary customer, whether in consumer or commercial lines. Insurance companies cannot expect to sell the same way to a cohort with fundamentally different purchasing habits and expectations. Players that can find new ways to sell insurance products, to new audiences that are used to buying products differently, are poised to capitalize on a huge opportunity.
Thanks to Spencer Lazar and Prateek Alsi at GC for their help exploring this space and drafting this post. 2016-05-02 20:16 Whitney Arthofer

9 Tesla’s bioweapon mode is a stroke of genius for developing markets

Tesla today shared details of how effective its particulate filters are. Spoiler alert: They are so good, not only do they clean up the air inside the car, they make the world outside the car cleaner, too. It may not be obvious why this matters: The company is aiming squarely at a very specific type of customer, which may be the company’s way into markets like China and India.

Let’s be frank for a second: Cabin air filters are nothing new; they’ve been rolled out in luxury cars since the late 1970s, and across a broader number of vehicles from the mid-1980s onward. HEPA filters in cars are more novel, not least because for this level of filtration to work, the car must be pretty air-tight to begin with, which traditionally hasn’t been a priority in automotive design. It’s hard to see the addition of HEPA filters in the first place, much less a layer of one-upmanship on top of that with a Bioweapon Defense Mode button, as anything other than a spectacular PR stunt. What nobody seems to have done so far is ask why Tesla is making such a big deal out of it.

The Tesla blog post offers some hints of the most obvious kind. Talking about bioweapons is a way to catch the headlines (just look up there! I fell for it, too!), but the real talking point is that by using industrial-grade particulate filters, the Tesla Model S and Model X are spectacularly well-suited for use in environments where pollution is off the charts. Of course, the current Tesla models are a lot of things, but one thing they ain’t is cheap. Put the two together, and you get a Venn diagram of Tesla’s target audience here: People who have access to significant amounts of money and who suffer from tremendous amounts of pollution.

Around 7 percent of the top 1,000 most-polluted cities in the world are in the U.S., which is the first piece of the puzzle: Creating cars that are particularly well-designed for your home market is just common sense, especially if you’re an electric car company that inherently has a horse in the race when it comes to making a statement about pollution. Looking at the rest of the data is far more interesting, however, and offers some clues as to why this matters to the car manufacturer: India, China, Turkey, France and Germany all feature heavily in the top 1,000, and, while not all markets are equally affluent (average GDP per capita varies wildly between these countries), there is no denying that a large number of people can afford, and do buy, luxury cars in all these countries.

If we look just at the countries that suffer from the most severe pollution, the data changes dramatically. In the graph below, I’m looking only at cities that register above the WHO’s recommended 25 µg/m³ in the “most polluted” data. It comes as no surprise that pollution has a choke-hold on China’s economy, with a recent report suggesting that a staggering 6.5 percent of the country’s GDP is being spent on pollution-related costs. India is also struggling tremendously; the country has the dubious honor of claiming 13 of the 20 slots among the most-polluted cities in the world. Only today, a 20-minute documentary entitled Death By Breath was released, exploring just how bad the air quality is in cities like Delhi, Patna and Gwalior.

What’s interesting about both of these markets is that they may just be perfect targets for the sort of thing Tesla is trying to accomplish. I believe that thinking “Hey, the HEPA filters make Tesla great for polluted places” is the wrong way of looking at it: It’s the other way around. Tesla was looking at the markets where it wants to make a huge splash, and added the advanced filtration precisely because those markets are struggling with severe pollution problems.
As I mentioned, HEPA filters in cars are nothing new, but the marketing around them has usually been subtle and understated, not to mention slightly negated by the fact that the giant, gas-chugging SUV you are driving may well have the cleanest passenger compartment in the world while you’re still driving around being part of the problem. By being a purely electric car company, Tesla is able to take the high road and offer something unique to an emerging class of wealthy individuals: People who care both about the air they breathe and about not being part of the problem. In the world of luxury cars, Tesla is priced about average: There are a lot of different cars to choose from in this segment, and having a strong differentiating factor will make a tremendous difference. Being an EV is only a small part of the appeal, doubly so in markets such as Beijing, Hong Kong, Delhi and Doha, where the overlap of rampant pollution and concentrated wealth is at its peak. 2016-05-02 20:16 Haje Jan

10 Michael Dell reveals new branding scheme for the Dell-EMC conglomeration

Michael Dell today revealed the new names, and yes we are talking multiple names, for the artist formerly known as the Dell-EMC deal. EMC will be deprecated in favor of the main brand Dell Technologies, but will live on in the enterprise brand Dell EMC, while the client services business will be called Dell, Inc., according to multiple reports. Confused? I’m sure you’re not alone, but Dell was reportedly very excited about the new brands as he spoke about them on stage at EMC World in Las Vegas today. I suppose when you spend $67 billion, a few extra names make sense: more names for your buck. Other brands like VMware, Virtustream, RSA and Pivotal will also reportedly live on.

If you aren’t familiar with the deal, it has gone through some twists and turns, but last October Dell surprised the world by announcing it was buying EMC for $67 billion in what’s believed to be the largest technology acquisition in history. It involves a mountain of debt, approximately $40 to $50 billion, depending on which reports you believe, and it will likely require selling off pieces of both companies to pay for the deal. We’ve already seen Dell sell its services division, formerly known as Perot Systems, to NTT Data for $3.05 billion and spin off SecureWorks in an IPO last month. We’ve seen EMC announce layoffs while rumors swirl that it’s trying to sell its content management product, Documentum, ahead of finalizing the sale to Dell. We’ve seen VMware suffer wicked stock price swings since the deal was announced, and announce layoffs of its own. We might not have seen the last of these various side deals ahead of the closing. With that much debt, it’s clear Dell will have to sell off some more of these pieces to help pay for the deal. It’s entirely possible we will see it sell off at least part of its stake in VMware while maintaining majority control.
EMC owns approximately 80 percent of VMware, which operates as an independent company. Dell believes that combining with EMC and getting bigger will put it in a position to compete in the enterprise market. It’s not clear whether that’s a valid thesis, or whether Dell can justify the debt it took on to buy EMC. Only time will tell if it got a reasonable return on its investment. What we didn’t know until today is what the new company will be called. It’s worth noting that the deal still isn’t official, but it is expected to finally go through some time later this year, if there are no other glitches or shoes dropping, that is. 2016-05-02 20:16 Ron Miller

11 SoundCloud turns on ads and Go premium subs in the UK and Ireland

Now that SoundCloud has inked licensing deals with all the big music rights holders, the startup is wasting no time rolling out subscription services and advertising to more markets to better monetize its 175 million users and compete against the likes of Spotify, Deezer, Apple Music and the rest. The music-streaming company, sometimes referred to as the YouTube of audio for its wide array of user-generated content, is today expanding its SoundCloud Go premium subscription service to the U.K. and Ireland, a month after first launching it in the U.S.

As with the U.S. version of Go, users will have ad-free access, plus a Spotify-style selection of millions of premium tracks alongside the wider SoundCloud catalog of music and podcasts, some 125 million tracks in all, for a fee of £9.99 or €9.99 per month after a 30-day free trial (the U.S. version costs $9.99 per month). One month from the launch of Go, SoundCloud is not giving away much about how it has been received so far in the U.S., the only market where it has launched so far. “The overall feedback is extremely positive, and we’re hugely excited,” was the most that Eric Wahlforss, SoundCloud’s co-founder and CTO, would say in an interview. Of course, there are likely a lot of people who have signed on for the free trial, so it will be a waiting game to see how many stay.

At the same time, to pick up more audience, Wahlforss said SoundCloud is looking for ways of leveraging its social media credentials. (On SoundCloud, in addition to being able to follow people, you can tag tracks with your own insights and respond to comments from others, creating a kind of conversation unique to the platform.) The company has been working with Twitter to make audio content playable in-stream (most recently, SoundCloud started working in Twitter Moments).
And Wahlforss also brought up Snapchat as “a super interesting company” for being “very real time and authentic, how we want and like SoundCloud to feel as well,” and for being compatible in another way, too: “There are very few companies that have over 100 million uniques with a lot of them millennials.” SoundCloud and Snapchat both do. No comment on whether the two are working together, but this sounded like a very strong hint that if they are not already, SoundCloud wants to be. And for those who opt to listen to SoundCloud for free, brace yourselves for ads of many formats. SoundCloud said these will include audio spots; in-stream “native” ads; promoted profiles; and creator partnerships. I asked SoundCloud to clarify what “native ads” will look like and it seems that “native” in this case will not necessarily be a revival of advertising jingles by musicians. “Brands are choosing either existing tracks from creators or may be able to inspire a track, but it won’t be a direct endorsement,” a spokesperson told me. Regardless, the main idea, it seems, is to not just put in more advertising, but to do it in a way that ties in the artists themselves — 12 million and counting on the SoundCloud platform today — so that if they choose, they can use ads as an extra revenue stream alongside whatever royalty deals they may have in place around their actual music and other audio content. Alongside the ability to track music and choose what tracks are part of SoundCloud’s free and paid tiers, the ads become something that artists can also control. “The introduction of advertising will ensure listeners can continue to experience SoundCloud for free, as well as offer creators the opportunity to be paid for the work that they share,” the company noted. This is also an attempt to respond to a wider issue around streaming services like SoundCloud (and others), which have been criticized and shunned by some artists who believe that they are not getting paid fairly for their work. 
Ads and subscriptions are SoundCloud’s latest efforts to generate revenues, after initially starting out with SoundCloud Pro. Aimed at creators rather than consumers, Pro comes in two tiers of $7 and $15 per month and gives people the ability to upload more than the standard 12 hours of audio, plus analytics and more content controls. 2016-05-02 20:16 Ingrid Lunden

12 High schooler’s 3D printed ‘mini-brain’ bioreactor accelerates Zika research

What are you planning on doing this summer? Probably not designing a revolutionary new bioreactor with which a thousand “mini-brains” can undergo testing. You’re probably not designing a bioreactor at all! But New York high schooler Christopher Hadiono did just that, and his powerful and efficient 3D-printed machine is now beginning to make waves. Hadiono put the machine together during a summer internship in the lab of Johns Hopkins neurology professor Hongjun Song. The SpinΩ, as it’s called, is cheap and versatile, as Song and others demonstrate in a recent paper.

Mini-brains themselves aren’t a new idea: they’re basically tiny collections of stem-cell-derived neurons that can be experimented on as if they were developing brains. They’re not perfect, but they’re useful, and the more you have, the better. Most of Hadiono’s bioreactor can be created in an ordinary 3D printer, though of course it must be augmented with the precision parts needed to perform experiments. Not only is it cheaper to make ($400 versus about $2,000 for a commercial platform), but it’s more compact, and only a tiny amount of nutritive fluid needs to be used for each one. The result is that for a fraction of the cost, you can fit 10 times the number of mini-brains inside a standard incubator.

“I was shocked,” Song told Spectrum News, which reports on autism-related developments. “We did not think that even a biotechnology graduate student could make this into a reality.” Song didn’t wait long to put the device into action: He and others recently published a paper in the journal Cell that not only details the engineering of the SpinΩ itself (including printing files), but also an experiment that appears to strengthen the link between Zika infection and microcephaly. Other labs are also getting in on the SpinΩ fun and building their own, Song confirmed to TechCrunch in an email.
There has also been interest from equipment makers in licensing or otherwise employing the system. Don’t worry — Hadiono is still involved, and his name is on the patent application. 2016-05-02 20:16 Devin Coldewey

13 Tastemates helps you find people who like what you like If you’ve ever forged a connection with someone because you had the same favorite movie or TV show or book, a new startup called Tastemates has built a social network around that experience. Or as CEO Jon Vlassopulos put it, “We’re trying to codify serendipity” — those surprising moments of, “Oh my God, I can’t believe anyone else likes that as much as I do!” What does that look like as a product? Well, users are asked to swipe to indicate their taste in entertainment — specifically, indicating whether they liked a given movie, TV show or music. Then, as the app gets a better sense of what you like, it will recommend other things you might like. And it will bring up the activity of users with similar tastes, so you can say whether or not you agree with their opinions and even connect directly. As Vlassopulos showed me the app, particularly the ways you can filter the other users you see based on things like age and gender, my mind immediately jumped to dating: Is this meant to be a more natural way to connect with potential dates, based on interests rather than just swiping through photos? After all, when you connect with someone on Tastemates, the app even recommends ways to start a conversation based on shared interests. “I wouldn’t say that,” Vlassopulos told me. Not that he’s ruling out dating as a use case, but he said it’s more about finding your tribe, whether that’s for a romantic connection or otherwise. And in his view, being a general social app makes it better for dating, too: “Would you rather go to a party that’s a ‘singles party’ or a regular party and there are singles there?” Vlassopulos also put a big emphasis on using data that’s explicitly shared in the app — from an advertising perspective, Tastemates doesn’t need to guess what your interests are, because that’s exactly what you’re revealing. That approach extends to the updates you see. 
Instead of just using “a black box of algorithms” (as Vlassopulos put it), there’s a slider that lets you determine how similar people have to be before they show up in your newsfeed. In some ways, the swiping experience reminded me of a new app called MightyTV, but Tastemates seems to place a stronger emphasis on the social experience, particularly on connecting with people you don’t already know. Of course, you can already share your interests on the big social networks, but Vlassopulos argued that this has been a much less important part of their experience over time. Consider, for example, how prominent your favorite movies and bands were in the Facebook profile of 10 years ago, and how that’s now mostly hidden and ignored. Tastemates is currently available for Android. Vlassopulos said the team is also working on an iPhone app, but he’s particularly excited about Android because of its international reach and because “Google has been very good to work with.” 2016-05-02 20:16 Anthony Ha

14 Shopping Quizzes is a quiz-based recommendation engine for e-commerce sites

A key tenet of e-commerce is the recommendation engine. If implemented correctly, it can be a major sales driver for online retailers. However, most sites normally just implement an engine that guesses what shoppers want, without any input from the shoppers themselves. Shopping Quizzes has created a recommendation engine that actually asks shoppers what they’re looking for, instead of relying on guesses obtained from their browsing history.

Here’s how it works: An e-commerce proprietor will enlist Shopping Quizzes to analyze their product categories and write custom quizzes tailored for a category of product. For example, a site may ask for a quiz designed to help users choose a black dress. The startup will then craft a quiz that asks questions like “cocktail or day dress,” “sleeveless or sleeves” and “fitted or loose.” Then, based on a shopper’s answers to all these questions, the engine recommends three top choices that fit all the shopper’s parameters.

On the surface, this sounds basic. But the company explained that because each answer multiplies down the pool of remaining options, a four-question quiz can eliminate more than 90 percent of the wrong products, leaving a much higher chance of the shopper finding one they actually want to buy. But does this actually increase sales for the site? According to Shopping Quizzes’ numbers from its beta, yes. One A/B test found that shoppers prompted to use a quiz generated 22 percent more sales than the shoppers who didn’t see a quiz.

Any online shopper knows quizzes aren’t new. However, Shopping Quizzes claims to differentiate itself by only asking questions directly relevant to a shopper’s end goal. So while other quizzes may decide that a shopper is preppy or casual and offer a range of products based on that conclusion, each shopping quiz will only recommend three specific products that the shopper already indicated they are interested in purchasing.
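The elimination claim is easy to sanity-check with arithmetic: if each quiz answer keeps roughly half of the remaining catalog, four questions leave about (1/2)^4 ≈ 6 percent of products, eliminating more than 90 percent. A minimal sketch in Python (the 50/50 split and the catalog size are illustrative assumptions, not Shopping Quizzes’ actual data):

```python
# Illustrative sketch: how a few binary quiz answers narrow a catalog.
# ASSUMPTIONS: each answer keeps ~half the remaining products; the
# 10,000-item catalog is hypothetical.

def remaining_fraction(num_questions: int, split: float = 0.5) -> float:
    """Fraction of the catalog left after num_questions answers,
    if each answer keeps `split` of the remaining products."""
    return split ** num_questions

catalog_size = 10_000
for n in range(1, 5):
    left = remaining_fraction(n)
    print(f"{n} question(s) -> {left:.1%} of catalog remains "
          f"({int(catalog_size * left)} products), "
          f"{1 - left:.0%} eliminated")
```

With these assumptions, four questions leave 6.25% of the catalog, i.e. just under 94% of wrong products eliminated, consistent with the “more than 90 percent” figure in the article.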
But this help doesn’t come cheap. The company charges anywhere from a lofty $999 for one quiz a month to an even more daunting $2,999, which gets a site four quizzes per month. At these prices, Shopping Quizzes definitely isn’t the right solution for your handcrafted Etsy store. But if you run a full-scale e-commerce operation and think your large, disorganized inventory is turning off hundreds or thousands of potential customers, this may be the answer for you. 2016-05-02 20:16 Fitz Tepper

15 Shyp rolls out its new packaging pricing model to all customers

Shyp has finished rolling out a new pricing model to all its customers, with variable shipment pricing based on the packaging the company provides. Shyp is still playing around with its business model and, to be sure, over time these tweaks will add up. The unit economics of on-demand services aren’t as simple as a $5 fee (plus postage); they require a more granular approach rather than a one-size-fits-all model. This isn’t the only change Shyp has made to its business model over time; it recently instituted an additional $5 handling fee for its online returns. Of course, all this is constantly subject to change as Shyp works to figure out the best way to make money without alienating its user base.

The new pricing began rolling out to batches of customers around January, when the company teased in a Fast Company article that it would introduce more modular pricing for its packaging. Today the company finished the rollout, notified the remaining customers via email, and laid out the new pricing in a blog post. Basically, for larger or more fragile packages, the packaging cost will be higher. This makes a lot of sense: it’s harder and more expensive to ship a bike than it is to ship a laptop or a small set of silverware, and Shyp couriers have to be more careful with fragile packages, which is more time- and resource-intensive. So it’s not surprising that the company would make changes to its shipping model as it looks to become profitable in the cities in which it operates.
Customers who package their own shipments still won’t pay packaging fees, and will instead just pay the $5 for the at-home pickup and insurance. A Shyp representative said that since rolling out the new packaging fees, the company has seen a 10 percent conversion increase among users who have had the pricing update. That means that for users who are charged for packaging, and shown an explanation of what the costs cover, in-app sessions convert to an actual pickup 10 percent more often. The company credits this to customers feeling more confident about the packaging, a representative said.

Shyp has also signed a number of large-scale deals that will help improve its shipment volume. Shyp recently formally integrated with eBay, allowing sellers to ship their packages through its pickup service. Shyp’s shipments will still be free for eBay sellers until June 30, and nearly half of the people who used the integration during the pilot had never sold on eBay, the company said earlier. Existing sellers shipped 60 percent more products during the pilot, the company said.

All this is in search of ways to improve the company’s business model, which has proven to need a somewhat different approach to reach profitability than its original pitch. The company recently pulled out of Miami, though it told Fast Company in an earlier article that it was growing around 20 percent month-over-month. Shyp last raised $50 million in a financing round that valued the company at $250 million, and if it’s going to justify that valuation, it’s going to need to continue tweaking its business model as it searches for a way to become a profitable, long-lasting company. 2016-05-02 20:16 Matthew Lynley

16 Bessemer Venture Partners’ Byron Deeter on success in SaaS

Following an introduction from SaaStr’s Jason Lemkin, last week I had the chance to catch up with Byron Deeter of Bessemer Venture Partners. Deeter puts successful SaaS models into two main buckets: companies that adopt the ‘SaaS for X’ model, taking something done by other companies, even other software companies, and doing it better; and companies that do things that are only possible with the cloud and with mobile enterprise. Deeter notes that Bessemer has backed both types of company, and has had great returns from both.

The defining feature of successful SaaS companies, in his mind, is not the model they adopt but whether they are able to achieve ‘efficient growth.’ For Deeter, efficient growth means, from Series B onwards, at least a dollar of ARR growth for every dollar of net burn. As companies mature, this also means drawing more from the existing customer base and relying less on new customers. This is the reason for the growth in customer success managers and teams, as companies look not just to reduce churn but to consistently up-sell and deliver better results to their existing customer base. In Bessemer’s analysis this efficient growth has always been rewarded by the market, but as cash becomes more constrained, particularly at the late stage, this efficiency will become more and more important, Deeter said. At the early stage he predicts that VCs, including Bessemer, will not drastically change what they look for or how they deploy cash, but later-stage companies will be punished for inefficiency. 2016-05-02 20:16 Harry Stebbings
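Deeter’s efficiency bar, at least a dollar of new ARR for every dollar of net burn, reduces to a simple ratio. A minimal sketch in Python (the company names and dollar figures below are invented purely for illustration):

```python
# Hypothetical illustration of the "efficient growth" threshold:
# a company clears the bar if new ARR / net burn >= 1.0.

def growth_efficiency(new_arr: float, net_burn: float) -> float:
    """Dollars of ARR added per dollar of net burn over the same period."""
    return new_arr / net_burn

# Invented example figures, in millions of dollars (new ARR, net burn):
companies = {
    "A": (12.0, 10.0),  # $12M new ARR on $10M burn -> efficient
    "B": (6.0, 9.0),    # $6M new ARR on $9M burn   -> inefficient
}
for name, (arr, burn) in companies.items():
    ratio = growth_efficiency(arr, burn)
    verdict = "efficient" if ratio >= 1.0 else "inefficient"
    print(f"Company {name}: ${ratio:.2f} of ARR per $1 burned ({verdict})")
```

The ratio also captures Deeter’s maturity point: the same threshold gets easier to clear as more ARR growth comes from up-selling existing customers, since that growth typically costs less burn than acquiring new ones.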

17 Videorama makes editing mobile video actually fun

Smartphone users shoot a lot of video, but turning those videos into something special still takes a lot of work. A newly launched app called Videorama aims to solve that problem by offering a powerful video editor that’s also super simple to use. In other words, you don’t need to be a video pro in order to do things like trim videos or add text overlays, music, or special effects. Aimed at beginners to video editing, as well as social media marketers hoping to capitalize on Instagram’s new support for 60-second video, Videorama is like the video equivalent of graphic editors like Canva or PicMonkey. While those two services let you create your own graphics for sharing on social media and elsewhere, Videorama is about taking your video footage, then spicing it up before putting it in front of a wider audience.

The app is fast and straightforward to use. You can combine your photos and video together, if you choose, trim videos, and preview your changes in real time. A larger feature set is available via in-app purchase, however. You can pick which features you want to buy individually; for example, while full HD support is free, many overlays cost $0.99, as do special effects and music packs. But the app provides a robust starter kit, so you don’t feel like there’s nothing to do in the app without forking over money. That said, there are tons of these premium packs available, which let you do things like add explosions, electric bursts, magic spells, weather effects like rain or snow, and much more. You can also make your movie look like old black-and-white footage, or make it look like it’s burning, among other things. And you can buy music packs or import your own tunes. A movie fonts pack is also available for $0.99. Removing the watermark costs $2.99, which is the most expensive in-app purchase.
Videorama comes from Apperto, a two-person indie developer team based in Istanbul that previously created the Typorama app for making typographic art. That app was featured by Apple and saw over a million downloads in a year. This new app is their big follow-up, they told TechCrunch. “We’re cinema geeks and we always dreamt of an app where you can add Hollywood-style explosions, magic spells, weather FX etc to your home-made movies,” explains Apperto founder Sarp Erdag. “We combined the idea with the style of a desktop quality video editing suite and the result came out to be Videorama.” The app quietly launched around a month ago, but it has seen 130,000 downloads over the last five days thanks to being featured on the iTunes App Store. The team has not taken investment, but has received acquisition and partnership offers from big brands following its release, we understand.

Videorama seems most competitive with apps like Cameo, the video editor Vimeo bought back in 2014 and upgraded last summer. However, where Cameo is largely focused on editing video that’s then uploaded to its own network, Videorama isn’t associated with a particular brand. Its videos are instead easily shared to Facebook, Instagram, email, messaging, or elsewhere, via iOS’s share sheet extension. The app itself is a free download on the iTunes App Store, with optional in-app purchases. 2016-05-02 20:16 Sarah Perez

18 Venture investments in new manufacturing technologies could reshape American industry

A wave of venture investment into new manufacturing startups looks set to transform American manufacturing. While the foundations for these companies may have been laid in cities like Boston, New York and San Francisco, the startups driving this next industrial revolution hail from more unlikely hubs of technology innovation in the smaller urban centers of the Sun Belt and the Southeast. These include cities like Lexington, Ky., in states whose economies were ravaged by the 2008 financial crisis and see redemption in the entrepreneurial energy of startup businesses.

New industrial processes, such as on-demand machining and additive three-dimensional printing, may have a tremendous effect on the U.S. economy. Roughly 33 percent of the economy is fueled by manufacturing, and it’s one of the arenas that has been most resistant to incursions from the technology world. Now, all of that is changing for several well-documented reasons. The cost of hardware and infrastructure to support the application of technology in manufacturing has come down dramatically, even as organizations look to improve efficiency by collecting more data on their processes and determining where there are costs to be saved.

Supplying that information and those services are businesses like MakeTime, a Lexington-based startup that recently raised $8 million in new financing led by Colorado investment firm Foundry Group. Founded by chief executive Drura Parrish, an architect turned manufacturing entrepreneur, MakeTime is an online capacity-utilization marketplace for machining. The company provides a way for computerized machining companies to offer their manufacturing services for customized parts during times when those machines would typically sit idle.
By distributing those orders across a number of different manufacturers during their down-times, MakeTime potentially provides a way for companies to do larger production runs at lower prices. Parrish cites his company’s emphasis on using data to provide better pricing information and deeper insights into the true costs of manufacturing as one of the keys to MakeTime’s success. Both additive manufacturing (3D printing) and subtractive manufacturing (machining) are set to play a large part in the future American economy. The machine-tool market is currently $70 billion, and while 3D-printed products were only worth roughly $5.2 billion last year, that number is expected to reach $550 billion by 2025, according to a story in The Economist this week. Driving that value will be early-stage startup companies like CloudDDM and MakeTime, and more mature 3D printing services companies like the New York-based Shapeways and its earlier-stage compatriot in the Big Apple, Voodoo Manufacturing. If the Lexington-based MakeTime is a company making traditional manufacturing cheaper, more efficient, and better able to meet the needs of just-in-time (or on-demand) industrial processes, then just 72 miles away in Louisville, CloudDDM is trying to make its own mark on the manufacturing business with 3D printing. Backed with $2.5 million in financing from UPS, CloudDDM uses its own 3D printers and CNC machines to make parts for customers in plants located minutes away from UPS’s logistics hub in Louisville. “UPS sees the writing on the wall,” says CloudDDM executive Mitch Free. He explained that the company’s supply chain business is aware that as manufacturing becomes more on-demand, companies will either look to manufacture parts on premises and as-needed, or potentially expect their logistics provider to significantly cut down the time to get a part. 
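MakeTime's core mechanic, splitting one production run across shops that have idle machine time, can be sketched as a simple greedy allocation. The shop names, capacities, and hourly rates below are invented for illustration; this is a conceptual sketch, not MakeTime's actual matching logic.

```python
# Hypothetical sketch: distribute a machining order across shops with idle
# capacity, cheapest idle hours first. Not MakeTime's real algorithm.

def allocate_order(total_hours, shops):
    """Greedily assign machining hours to the cheapest idle capacity.

    shops: list of dicts with 'name', 'idle_hours', 'rate' ($/hour).
    Returns (allocations, total_cost); raises if capacity falls short.
    """
    remaining = total_hours
    allocations = []
    total_cost = 0.0
    for shop in sorted(shops, key=lambda s: s["rate"]):
        if remaining <= 0:
            break
        hours = min(remaining, shop["idle_hours"])
        if hours > 0:
            allocations.append((shop["name"], hours))
            total_cost += hours * shop["rate"]
            remaining -= hours
    if remaining > 0:
        raise ValueError(f"short {remaining} hours of capacity")
    return allocations, total_cost

shops = [
    {"name": "Shop A", "idle_hours": 40, "rate": 75.0},
    {"name": "Shop B", "idle_hours": 100, "rate": 60.0},
    {"name": "Shop C", "idle_hours": 30, "rate": 90.0},
]
plan, cost = allocate_order(120, shops)
```

Sorting by rate captures the "larger runs at lower prices" claim in miniature: the order fills the cheapest idle machines first and only spills over to pricier shops when needed.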
Just as UPS is experimenting with 3D printing as a way to improve its position in the logistics market, GE Appliances (now owned by the Chinese durable goods manufacturer Haier) has made its own 3D lab in Louisville for prototyping new products with FirstBuild. Initially a collaboration between General Electric and Local Motors (a Phoenix-based manufacturer of 3D-printed cars), FirstBuild takes the principles of crowdsourcing and crowdfunding that the company toyed with as a partner and investor in the once-mighty startup Quirky, and uses them to develop prototypes of new household appliances. A critical component of that process is the use of 3D printers. When GE was getting ready to release its Opal nugget ice machine, several of the initial components were made with 3D printers, so the company could bring the product to market more quickly. “FirstBuild has been a real amazing win,” said Karen Kerr, a senior managing director with GE Ventures. “And a great win for how the ventures platform is really helping to transform how we do business with GE.” Earlier this year Kerr told me that General Electric was working toward launching a similar initiative called NextBuild, which would take the rapid prototyping ethos from the FirstBuild program and apply it to the company’s industrial businesses. Other mammoth-sized industrial companies are joining GE and throwing their weight behind Local Motors. Earlier this year the company raised an undisclosed amount from Airbus Ventures in what was the first publicly disclosed investment from the newly launched venture firm. “Ultimately it’s going to benefit us all,” says MakeTime’s Parrish. “The goal of this whole game is to democratize the manufacturing floor to make things faster, better, and cheaper for a generation of entrepreneurs.” To make that happen, he adds, the whole manufacturing chain will need to be digitized. “It’s all going to play a part in just-in-time or on-demand manufacturing,” he said. 
“We all rise up and we march to the hallowed ground and we bring our manufacturers home to the promised land.” 2016-05-02 20:16 Jonathan Shieber

19 Chegg acquires Imagine Easy Solutions, the company behind EasyBib, BibMe and Citation Machine The online textbook service Chegg today announced that it has acquired Imagine Easy Solutions for $42 million. Imagine Easy is the company behind online bibliography and research tools like EasyBib (which was also its first product) and similar tools like Citation Machine, BibMe and Cite This For Me, most of which it acquired in the last couple of years. Imagine Easy also offers teaching tools for helping students develop reading and writing skills through its Imagine Easy Academy and Imagine Easy Scholar products. Chegg will pay $25 million up front and $17 million in deferred payments — with another $18 million of potential payments over the next three years that depend on whether the team meets its goals. In the last 12 months, Imagine Easy’s bibliography and research tools powered about 240 million sessions and EasyBib alone saw more than 7 million unique users in March 2016, Chegg tells me. In total, all of these services together have saved students from mangling more than 1.4 billion bibliography entries. Imagine Easy’s business model is based on a mix of subscription fees and revenue from online advertising on its sites. Chegg makes it easy to buy, rent and sell textbooks and e-textbooks online. But sooner or later, some pesky professor (or, these days, more likely a teaching assistant or adjunct) is going to ask you to do some research and write a paper based on what you’ve learned. To make matters worse, you’ll probably have to attach a bibliography, too. Thankfully, tools like EasyBib make that pretty easy these days, so you don’t really have to worry about the difference between Chicago-style, APA-style and MLA-style bibliographies anymore. While these tools are clearly Imagine Easy’s most visible services, Chegg today noted that the company’s tools for teaching writing skills are also an important reason for acquiring the service. 
“We know that writing is one of the biggest pain points for students today, with about one quarter of all college freshmen required to take remedial writing courses and employers rating less than 30% of recent graduates as being well prepared written communicators,” said Dan Rosensweig, Chairman and CEO of Chegg, in today’s announcement. “Inability to write at the college level is a leading indicator of which students will eventually drop out, a situation that adversely impacts both the student and the school. With this acquisition, Chegg now has the ability to serve the millions of students who depend on writing help every day, in particular those students required to take remedial writing classes.” 2016-05-02 20:16 Frederic Lardinois
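Much of the grunt work that EasyBib-style tools automate is mechanical string formatting per style guide. A minimal sketch for a single book entry (real style guides, and the tools built on them, cover many more entry types and edge cases, and italics are lost in plain strings):

```python
# Minimal sketch: one book citation rendered in APA vs. MLA style.
# Tools like EasyBib handle journals, websites, multiple authors, etc.

def cite_apa(author_last, author_first, year, title, publisher):
    # APA: Last, F. (Year). Title. Publisher.
    return f"{author_last}, {author_first[0]}. ({year}). {title}. {publisher}."

def cite_mla(author_last, author_first, year, title, publisher):
    # MLA (8th ed.): Last, First. Title. Publisher, Year.
    return f"{author_last}, {author_first}. {title}. {publisher}, {year}."

apa = cite_apa("Orwell", "George", 1949, "Nineteen Eighty-Four", "Secker & Warburg")
mla = cite_mla("Orwell", "George", 1949, "Nineteen Eighty-Four", "Secker & Warburg")
```

Even in this toy form, the point is visible: the underlying data is identical, and only the ordering and punctuation change between styles.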

20 Capital One: Think Like A Designer, Work Like A Startup When you think "tech startup," a financial institution that's nearly 30 years old and employs 40,000 people is hardly what springs to mind. Yet a startup culture is exactly what Capital One Financial is cultivating in its approach to tech advances such as the Capital One Wallet mobile app. That's only one way the financial institution is breaking the mold in its industry. Capital One Financial has also embraced Design Thinking, a groundbreaking approach for developing goods and services. The company sends executives to Stanford University's Hasso Plattner Institute of Design for training, and then incorporates the methodology throughout the organization. "Fundamentally, our products are customers' experiences that are principally distributed through software," said Rob Alexander, CIO of Capital One Financial, in an interview with InformationWeek. Recognizing that helped the organization focus on building software that could leverage the data and analytics about its customer relationships and -- most importantly -- place these skill sets right alongside marketing, customer relationship management, and brand-building as crucial elements in the company's strategy. "With that understanding, we embarked on a shift from IT to technology," said Alexander. "Over the course of this transformation, we rebranded ourselves from the IT department to Capital One Technology. " A typical bank organization will largely procure third-party software for its internal and customer-facing operations, Alexander said. Capital One, on the other hand, is "an organization that truly builds its own software and develops its own solutions. That is a different DNA," he said. Capital One Wallet can be seen as a microcosm of this transformation. "It reflects how we operate today, [which] is very different from how a typical bank operates," Alexander said. 
To hear Alexander describe what that looks like at Capital One is an exercise in cognitive dissonance. You know you're talking to the CIO of a large financial institution, but the words he uses sound more like the eager story of a Silicon Valley startup. All of these factors came to bear in the creation of Capital One Wallet. "We built Capital One Wallet entirely native, first for iOS, then Android," Alexander said. "We were first to have a wallet app with Android tap-to-pay incorporated into the app, and we did it all in under nine months. " Capital One Wallet is designed to help customers track spending. It flags potential fraudulent activity through real-time alerts sent when either a physical credit/debit card or Apple Pay or Android Pay is used. Customers can see a running list of their transactions, which -- unlike traditional banking apps -- instantly displays enhanced information, such as the true merchant identity, location, and contact information. The app also gives customers the ability to instantly redeem rewards points for digital gift cards, apply them to past travel charged to their credit card, or use them to pay off a statement. It also included Apple Touch ID as part of the initial launch. The app offers either Touch ID or SureSwipe, which uses a customized nine-dot pattern sign-in, rather than a standard user name and password. The Capital One Wallet uses a different identifier than the physical plastic credit/debit card, so there's no need to cancel the card if the phone is lost, and vice versa. In addition to the iOS Capital One Wallet, the company released capabilities to integrate with the Apple Watch. Capital One Wallet for Android devices is also one of the first mobile tap-and-pay solutions in the US to use host card emulation, which is Android's equivalent of Apple Pay. Collaboration among various stakeholders was a key element in the development of the Capital One Wallet -- and the company's ongoing design practices. 
The project began with a series of design sessions intended to uncover the unarticulated needs of customers and define a problem that needed solving. Involved in the design sessions were associates from Capital One Technology and Digital, as well as business associates from the company's bank and credit card businesses. Scott Totman, head engineer on the Capital One Wallet project, joined the company three and a half years ago. He was one of the professionals selected to attend classes at the Stanford University Institute of Design. "What I learned coming out of it is that you're not asking the customer what the next feature is that they'd like to see," Totman said. "You're asking the customer to tell you a story about their relationship with financial services and money. You listen for pain points, purchasing points, and the customer…" 2016-05-02 20:07 Susan Nunziata

21 The Weather Company Brings Together Forecasting And IoT The Weather Company and its project to modernize its data collection, storage, and forecasting platform won recognition in last year's Elite 100, coming in at No. 5. So it's no big surprise that, a year later, The Weather Company is in the top 5 again as the company continues to build on its previous success. Its expanded ambitions involve its new parent company, IBM, and a plan to apply Watson cognitive computing to the Internet of Things (IoT). What does all that have to do with weather? "Weather, at the end of the day, is the original big data problem," said CIO and CTO Bryson Koehler. Koehler joined The Weather Company in July 2012 to update the company's infrastructure of 13 maxed-out data centers and aging apps and turn it into a platform that could be leveraged by external companies. He led the team that created a new cloud-based, cloud-agnostic, data-driven infrastructure for predicting the weather and providing API-based delivery of weather-related content. That project put weather prediction information in the hands of people who need it, and that's a lot of them, because weather affects every person, every industry, and every business in the world. The Weather Company estimates that weather is perhaps the single largest external factor affecting business performance, to the tune of nearly $1 trillion lost annually in the US alone. Combining weather data with business data can improve decision-making for a wide range of companies across many industries -- from retailers looking to stock the right SKUs and optimize supply chains, to insurance companies that want to advise policyholders about ways to minimize damages in the event of impending severe weather. Weather, it turns out, is only the first phase for The Weather Company's platform. 
With the rise of the Internet of Things (IoT), The Weather Company has positioned itself to move well beyond its original charter of predicting weather and helping companies make better decisions based on weather predictions. After all, weather stations are really IoT devices, Koehler said. They are equipped with multiple sensors for detecting barometric pressure, humidity, temperature, wind speed and direction, and other values. It was the potential of the platform for IoT data that first piqued the interest of IBM, Koehler said. IBM announced plans to acquire The Weather Company in October 2015, saying that it would use the company's platform as the basis of its IoT effort for its cognitive computing business, Watson. The deal closed in January. The original partnership between IBM and The Weather Company grew out of a conversation at a bar in Las Vegas during a big data event in October 2014. Koehler was a speaker at the conference, and at the end of the day he had drinks with Mac Devine, CTO of the IBM Cloud organization. Devine told Koehler that IBM wanted to build an IoT capability for its Watson cognitive computing platform, and the company wanted it right away. IBM was trying to move beyond bespoke solutions created anew for every single client and into an API-driven creation of applications for clients. "I said, 'Well, don't go build that, I've already built it,'" Koehler said he told Devine. "'Why don't you just use our platform?' And he said, 'Well, that's great but isn't that just a weather platform?' And I said 'No!'" That conversation led to a partnership between The Weather Company and IBM, announced in March 2015, that let IBM license use of the data platform The Weather Company had built, Koehler said. The agreement also included The Weather Company moving its platform to IBM SoftLayer, the tech giant's cloud platform. (The Weather Company's data platform operates on both the IBM cloud platform and on AWS today. 
A spokesperson said the company plans to expand it to run on other cloud infrastructure platforms, too.) IoT and cloud computing enabled collection of data from more than 100,000 weather sensors and aircraft, from millions of smartphones, buildings, and moving vehicles, IBM noted in its announcement of the partnership with The Weather Company last year. Weather information and IoT information work in similar ways, and that's what is enabling the extension of The Weather Company's modernized platform, originally called SUN (Storage Utility Network) and built to be hosted on any public cloud service. "We built this very agnostically," Koehler said. "If you think about the weather, the data types that come into weather are all over the place -- lightning data, pollen data, radar data, which is streaming data. We have model data that comes in. We have crowd-sourced user-generated content data where people are doing crowd reports. We have personal weather stations, which send us data, which are really just IoT devices. So we have hundreds of different types of data, hundreds of terabytes a day coming in. " The weather data, storage, and prediction platform that Koehler's team had built was effectively an IoT data platform. "Yes, we applied the platform to building weather forecasts and powering our mobile applications and our B2B applications," Koehler said. "But fundamentally the work we do to protect and prepare people and businesses for tomorrow is no different than what any business application is trying to do as it looks at vast amounts of data, creates insights off that data, and then helps a business make a smarter decision. " Koehler said the goal is to move from simply predicting what will happen to actually helping businesses use that information to make smarter decisions. The partnership with IBM, announced in March 2015, soon turned into much more. "Everybody quickly recognized that this would be really powerful if we went deeper," Koehler said. 
"And because we were talking about something important, we really had to get to a place where all the intellectual property was owned by the same company so we could completely share it. " IBM announced in October 2015 that it would acquire many of the assets of The Weather Company — all the forecasting, analytics, and data assets, and also all the brands and applications other than The Weather Channel television property. The deal included The Weather Company's B2B mobile and cloud-based Web properties, including the professional services arm, WSI; the weather.com website; Weather Underground, which includes the network of personal weather stations; and The Weather Company brand. The Weather Channel TV segment was not included in the purchase, but continues to license weather forecasts and analytics from IBM under a long-term contract. When the deal was completed in January 2016, The Weather Company Chairman and CEO David Kenny was named to lead the IBM Watson organization, one of the core businesses in which IBM expects to drive growth for the company in the future. Kenny said then that his goal was to make Watson into a more cohesive product that would offer Artificial Intelligence-as-a-Service. "What Watson knows today is awesome," said Koehler. "But for Watson to continue adding incremental value to its customers, Watson has to get smarter. " The Weather Company's platform has added new data, architecture, and analytics to the mix. Its platform leverages machine learning and 249 different open source tools, according to Koehler. In addition, it includes proprietary capabilities. Most of the platform was written in Scala, and a few of the technologies it leverages include Cassandra, Spark, Riak, and Redis. In the months since the acquisition was completed, Koehler's effort has been very much focused on the work required for building out the IoT insights platform for Watson to support internal groups across IBM. 
IoT is one of many platforms built on top of the Watson cognitive computing platform, which Koehler explained is designed to have many hundreds, or even thousands, of applications built on top of it. "So if you look at Watson IoT or Watson Health, those are Watson applications. And there will be hundreds of those. They are distinct applications solving specific needs in specific industries that have specific use-cases. " That's the idea behind Watson and the concept of Artificial Intelligence-as-a-Service. "When you think about moving from what was once BI and analytics, and now you are talking about insights and cognitive computing capabilities, we want to bring that to life," Koehler said. "Cognitive computing is about taking it to action. It's the ability to enable a business to react in real-time and do something different. " What advice does Koehler have for CIOs looking to bring together data, cognitive computing, and IoT in their organizations? "Be brave. Be bold. Don't spend your time creating a new shade of red, but rather create an entirely new color," he said. "So many IT shops are in need of radical change and transformation that will only come through taking bold steps with high velocity. Clearly this comes with some risks, but I believe that playing it safe is far riskier than taking on some large risks within your organization. " 2016-05-02 20:04 Jessica Davis
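Koehler's list of inputs (lightning, pollen, radar, model output, personal weather stations) is the core IoT-ingestion problem: normalizing differently shaped payloads into one record format. A hypothetical sketch of that idea, with field names invented for illustration and no claim about SUN's actual schema:

```python
# Hypothetical sketch: normalize heterogeneous sensor payloads into a
# common observation record, as a weather/IoT ingestion layer might.
# All field names and source labels are invented for illustration.

def normalize(source, payload):
    """Map a source-specific payload onto a common observation record."""
    if source == "personal_station":
        return {
            "ts": payload["time"],
            "lat": payload["lat"], "lon": payload["lon"],
            "kind": "temperature_c",
            # Convert Fahrenheit to Celsius: (F - 32) * 5/9
            "value": (payload["temp_f"] - 32) * 5.0 / 9.0,
        }
    if source == "lightning":
        return {
            "ts": payload["detected_at"],
            "lat": payload["strike"]["lat"], "lon": payload["strike"]["lon"],
            "kind": "lightning_strike",
            "value": 1,
        }
    raise ValueError(f"unknown source: {source}")

rec = normalize(
    "personal_station",
    {"time": "2016-05-02T20:00:00Z", "lat": 40.0, "lon": -75.0, "temp_f": 68.0},
)
```

Once every source lands in the same record shape, the downstream storage and analytics layers can treat a lightning strike and a backyard thermometer reading identically, which is the "agnostic" property Koehler emphasizes.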

22 Horizon Prioritizes Data And Patient Experience The healthcare delivery system traditionally has adhered to a fee-for-service model. Physicians are paid based on quantity, not quality, of treatments, which some observers say has led to a decline in the value of care and a rise in healthcare costs. Horizon Blue Cross Blue Shield of New Jersey, recognizing that this system was financially unsustainable and didn't reward care quality, decided to turn the old model on its head. Its efforts to move to a fee-for-value model followed the launch of the Affordable Care Act (ACA) and the start of healthcare reform. The organization's CIO, Doug Blackwell, calls the effort a strategy to hit the "triple aim. " "It's really getting into three different components," Blackwell explained. "The first is the overall improvement in the quality of healthcare, the second is improvement in the total cost of care, and the third is enhancing the patient experience. " The ACA kick-started healthcare reform by providing various benefits and capabilities to customers, Blackwell said. Eventually, though, the Newark, N.J.-based insurer was left asking a pivotal question: Where to go from there? Its answer: Horizon's Health Care Value Strategy, an enterprise-wide initiative designed to achieve the triple-aim goal and to provide technical and nontechnical capabilities that improve healthcare delivery while lowering costs for patients. Horizon's innovation efforts began in 2011, when it adopted a startup approach to launch and scale programs and products aligned with its pay-for-value strategy. These programs and services included patient-centered medical homes, accountable care organizations, and episodes-of-care models. The Health Care Value Strategy officially started in early 2013, with the goal of culminating in time for the 2016 open enrollment period for health insurance. All 5,000 employees at Horizon were trained for the project. 
Business and IT leaders were faced with the task of determining which types of technologies would be used to support the pay-for-performance model, provide customers with better information to make healthcare decisions, and improve their experience. "All systems across the board needed to be addressed," said Blackwell, noting that the strategy affected every aspect of product development. More than 80 business processes were created or modified as part of the project, which transformed systems for enrollment, claims processing, billing, customer service, provider portals, sales, and benefit monitoring. When adopting new technologies, "very rarely do we build everything ourselves," said Blackwell. "We typically partner or go to a third party. " The decision to buy or build depends on in-house capabilities, but all integration is done internally. For this project, Horizon employed a buy, build, and partner approach. It focused on five key areas: information sharing and analytics, clinical excellence, organizational alignment with provider partners, patient engagement, and improved member experience. Improving analytics capabilities was crucial. The goal was to distribute high-quality, real-time information to members and doctors, who could use it to quickly and effectively treat members. For this, Horizon enhanced its Teradata platform and added Tableau for visualization. It also built extensive data warehouses to centralize claims information, which could be used to understand the needs of the member population and determine how to care for them. "Rich data warehousing and analytics capabilities were, first and foremost, critical for us to establish," said Bud Baumann, IT business solutions officer at Horizon. Clinical excellence is also core to the pay-for-value process. Horizon improved its ability to measure patient care through chronic care management and utilization management. 
The process involved improving its care management platform, EXL Healthcare's CareRadius Suite, and integrating its Care Affiliate product with NaviNet, Horizon's multipayer provider portal. To build organizational alignment with provider partners, Horizon uses performance-driven reimbursement and financial risk management, employing SAS Enterprise Business Intelligence, Teradata, and Informatica custom development. For outcome-based payment, information is shared via Microsoft .NET apps, NaviNet, and IBM WebSphere-based Web service integration. A major part of the project consisted of revamping the organization's Doctor Hospital Finder, which was "messy and unclear" before the project started, said Blackwell. The insurer aimed to make it easier for patients to understand which healthcare providers were part of their plans so they could make more informed decisions. This involved strong use of the Axure prototyping tool, which allowed early-stage versions of the app to be tested among members so Horizon could receive feedback. The project involved close collaboration between the business and IT divisions. Baumann sits between the two and helps the business launch tech projects to solve enterprise problems. Also influential in this initiative was Naveen Paladugu, director in the Strategic Initiatives Group at Horizon. Projects such as redesigning Horizon's provider portal and improving cost transparency have resulted from the collaboration between business and IT. When the business has a strategy it wants to execute, it approaches IT to implement a technology solution, Paladugu said. "This was a very complex and very challenging business and technology effort," Baumann said. There are several steps to executing a strategy of this magnitude, one of which involved monitoring earlier investments to see what needed to change. "We constantly take the feedback around how well those programs work, which feeds into the Strategic Initiatives Group," he said. 
"We have a very strong Voice of the Customer program, where we poll members on a constant basis around what's working and what's not working. " The teams faced obstacles with communication, pressure to deliver on tight deadlines, and the stress of aligning resources so there were enough of them to dedicate to each task within the broader Health Care Value Strategy initiative. "For us to be successful, it was really about alignment," said Baumann. "If you're going to break this up and have as many different work streams, it's important to have project managers to drive, to stick to plans, be flexible, and highlight risk early. " Over the course of the project, Horizon appointed executive steering committees and project steering committees, which regularly met over a two-year period. There were few surprises in building the technology despite the program's size, said Baumann, as a result of the leadership and risk mitigation. "You don't attack this as one big major program; you break [it] up into smaller projects," Baumann said. That way leaders can see iterative deliverables throughout the process, see progress along the way, and rely on the structure of dedicated teams to drive accountability. "Good luck," joked Blackwell when asked if he had advice for CIOs hoping to start a similar project. In seriousness, he acknowledged the importance of collaboration between business and IT to launch the Health Care Value Strategy. "The key is the tight integration between the folks like [Paladugu], who are thinking of business strategy and where things need to go, as well as our IT partners, helping them co-develop IT strategy and business strategy simultaneously," Blackwell said. He also advocates support from the top. "Our CEO, Bob Marino, was personally involved in every aspect of this new development, these products, the strategy, working closely with all the different divisions of the organization. " Accountability is also critical, added Paladugu. 
The business and IT divisions each typically have several people calling the shots, sometimes confusing or contradicting each other. Appointing a singular leader for each side, and having these people assign responsibilities, ensured that all divisions were working toward the same goals. Executives can't definitively say how successful the Health Care Value Strategy has been, but so far its results are promising. "It's very early on," said Blackwell. While he couldn't describe how the strategy has affected cost, he noted premium costs are lower and, in many cases, there are lower copays or no deductibles for customers. Following the deployment of the Health Care Value Strategy, Horizon has two major ongoing projects. The first is the implementation of a private health information exchange called the Horizon Data Services Platform. Unlike publicly provided HIEs, this launch is Horizon's own implementation. Blackwell described a longitudinal patient record, which involves integration between electronic medical records and hospital systems to provide common information to members and physicians. The team is building a highway for information to pass and layering analytics to consolidate data and conduct retrospective and prospective analysis. The health insurer is also working to revamp its entire digital strategy, Blackwell noted, something that all types of businesses are doing, not just insurance companies. Horizon is in the middle of this transformation, which has been ongoing for about one year. "Toward the end of this year," Blackwell said, "we'll start to roll out those enhanced capabilities using a digital platform, using mobile, and taking advantage of preference management and some of the other new things that are out there today. " 2016-05-02 20:01 Kelly Sheridan

23 Vue.js lead: Our JavaScript framework is faster than React To take on Facebook's vaunted React JavaScript library, the Vue.js front-end framework has been upgraded with a virtual DOM layer improving speed and memory consumption. Version 2.0 of Vue.js, recently released as a public preview, is leaner and faster than before, according to a bulletin on the technology. "The rendering layer is now based on a lightweight virtual-DOM implementation that improves initial rendering speed and memory consumption by up to 2-4x in most scenarios. " Mainstream virtual DOM implementations suffer from performance issues like re-rendering, requiring optimizations, Vue principal developer Evan You said. "Vue 2.0 tackles this problem by combining virtual DOM with its reactive dependency tracking system, so that the system can automatically and efficiently determine when and what to re-render, freeing the developer from unnecessary optimization work," he said. Version 1.0.0 of Vue.js was released in late October. The upgrade offers both template-based syntax and programmatic rendering with JSX or hyperscript, so developers get maximum flexibility in terms of development style, said You. "Performance-wise, it offers faster rendering than React with minimal optimization efforts. It also offers server-side rendering, a feature deemed the holy grail of universal JavaScript applications. " Vue 2.0's built-in streaming server-side rendering enables rendering of components while getting a readable stream back and directly piping it to the HTTP response. You stressed Vue 2.0 also provides a light footprint and a "forgiving" learning curve. Featuring its core view layer, tools, and libraries, the framework's template-to-virtual-DOM compiler and runtime can be separated; developers can precompile templates and ship applications with only the runtime. "The compiler also works in the browser, which means you can still drop in one script tag and start hacking, just like before," said You. 
"Even with the compiler included, the build is sitting at 17kb min+gzip, still lighter than the current 1.0 build." Source code for Vue 2.0 is available on GitHub, with a beta release eyed for May. The documentation warns that developers will have some work to do to migrate an existing app from Vue 1.0 if it uses deprecated features. Detailed upgrade guides are planned for the near future. 2016-05-02 20:00 www.computerworld
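The "reactive dependency tracking" You describes can be sketched in a few lines of plain JavaScript. This is an illustrative toy, not Vue's source: Vue 2 tracks dependencies with Object.defineProperty getters and setters, while this sketch uses an ES2015 Proxy for brevity.

```javascript
// Illustrative sketch of reactive dependency tracking, the idea Vue 2.0
// pairs with its virtual DOM. Not Vue's actual implementation: Vue 2 uses
// Object.defineProperty; a Proxy is used here only to keep the sketch short.
let activeWatcher = null;

function reactive(obj) {
  const subscribers = {}; // property name -> Set of watcher functions
  return new Proxy(obj, {
    get(target, key) {
      if (activeWatcher) {
        (subscribers[key] = subscribers[key] || new Set()).add(activeWatcher);
      }
      return target[key];
    },
    set(target, key, value) {
      target[key] = value;
      // re-run only the watchers that actually read this property
      (subscribers[key] || []).forEach(fn => fn());
      return true;
    },
  });
}

function watch(renderFn) {
  activeWatcher = renderFn;
  renderFn(); // first run records which properties were read
  activeWatcher = null;
}

// A "component" that re-renders only when state it actually read changes:
const appState = reactive({ count: 0, unrelated: 0 });
const log = [];
watch(() => log.push(`count is ${appState.count}`));
appState.count = 1;     // triggers a re-render
appState.unrelated = 9; // does not -- the watcher never read it
// log: ['count is 0', 'count is 1']
```

Because the watcher's dependencies are recorded automatically, there is no manual shouldComponentUpdate-style work, which is the point You makes about avoiding "unnecessary optimization."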

24 Penn Medicine: Using Data To Save Patient Lives Heart failure is serious business. According to the Centers for Disease Control and Prevention, half of all people with heart failure die within five years of diagnosis, and treatment in the US costs the nation around $12 billion every year. Philadelphia-based Penn Medicine, the country's oldest hospital organization, decided to use big data to do something about it. Penn Signals is a system that uses existing data from electronic health records (EHRs) to perform real-time predictive analysis of heart failure patients. The goal? Penn Medicine wants to place patients in risk groups and assign them to cardiology resources in order to get them the best care and improve their outcomes. The change from paper records to electronic ones provided an opportunity to do more than just change the medium for patient files, said Dr. Bill Hanson, chief medical information officer at Penn Medicine, in a telephone interview with InformationWeek. Penn Medicine wanted to use newly digitized records to improve patient care, Hanson said, and "one of the ways was to enable the backend systems to constantly scan to provide information that the frontline provider might not see or might not have access to. It's in the same way that Netflix or Amazon might look for patterns in what you watch or buy to offer suggestions." Penn Medicine, which is made up of the University of Pennsylvania's Perelman School of Medicine and the University of Pennsylvania Health System, has more than 2,500 hospital beds and more than 31,000 employees. Penn Signals' foundation was laid in a decision that precision medicine was a goal of the organization, said Mike Restuccia, Penn Medicine VP and CIO. "It's a hard decision, and it requires some strength in leadership. Our dean took the decision that precision medicine was the goal and led us in that direction," Restuccia said. "It ruffled some feathers. Sometimes it works and sometimes it doesn't." That decision has ended up guiding the actions of Penn Medicine's roughly 600 IT employees, Restuccia said. "Our strategy … was we wanted to have common systems across the enterprise, centrally managed and collaboratively installed, with the understanding that once we had the foundation in place, we could then build off the foundation to do useful things with the data." The "useful things" that Penn Signals was created to do revolve around improving the results for heart failure patients. Clinicians (physicians who see patients, rather than research physicians who work in labs) wondered whether there were patterns in patient data that could identify heart failure patients earlier, identify various levels of risk in existing heart failure patients, and help clinicians do a better job of assigning those patients to care teams and treatment protocols. Getting to information that clinicians could act on meant diving into big data, and that required data scientists. Penn Medicine began by hiring Mike Draugelis as its chief data scientist. The opportunity to have access to both rich data and clinical professionals drew Draugelis, formerly chief data scientist at Lockheed Martin, away from a career in aerospace and national defense and into healthcare. "When we started this project, the chief medical officer of the hospital asked the newly formed data science team to work with the service lines, and the heart and vascular team was the first," Draugelis said. Cardiac care was chosen first because the clinicians there knew they had a problem, he explained. When the cardiac team can properly treat a patient in the early stages of heart failure, the results are much better. But many patients were coming into the health system and never being referred to the cardiac team because they came to the hospital for an issue that had nothing to do with their heart.
"The first thing we did was look at how many people were coming through the system not identified with having a chronic heart problem," Draugelis said. The team then looked at all the information in the electronic health records to see which of those patients had the markers for heart failure. If the markers were there, then the patient could be flagged for review by the cardiac team. Building the system meant combining the strength of the data and the data science team, Hanson said. "We had to invest in [the data science team], building the engine that spots or creates the patterns that say that a patient is at high risk for readmission, but we were leveraging the existing EHR system," he said. "That includes all the demographic information, the lab information, and the diagnostic information that goes with the record." Draugelis' team of six data scientists and developers, working alongside the clinicians on the project, looked at eight years of clinical data to create the algorithms needed to correctly identify patients' risk levels. "When you're doing this kind of development, it's important to have the team embedded with the clinicians so we can have the conversations and converge on the best plan for the patients," he said. "It's not just the data science team that makes this successful. It really is an integrated team with the IT group and the clinical teams that provide the concept that they want to bring to the patients," Draugelis said. The fact that each of those teams can be small is in keeping with Penn Medicine's general philosophy of IT management. Brian Wells, associate VP of health technology and academic computing at Penn Medicine, leads the teams that provide the technology supporting Penn Signals. "We like the idea of small teams," Wells said. "Mike [Draugelis] has a small team of two or three people involved in this. The clinical team that validates the results is small.
We feel that teams of two to four people are more efficient." The technology those small teams are providing is largely homegrown and flexible. All the clinical data, both historical and real-time, is in Penn Data Store, a system built on Oracle and IBM DataStage that accepts and transforms the data every night, Wells said. "It's billions of rows of data and millions of rows every night, so it's about a day behind," Wells said. Data comes out of this data warehouse via a system developed in-house called Clinstream. For Penn Signals, Clinstream transfers the data into a big data system built on MongoDB. The work of the small teams collaborating with one another has been effective for patients. "In terms of the accuracy and specificity of identifying the risk, it's exceeded our expectations," Hanson said. This is only the beginning. "Where we've lagged is figuring out how to put [together] the end-to-end process of how to ingest data, spit out results, and deliver the results to people to get the right outcome," he said. "There's a continuum of things there, some of which are technical, some of which are workflow, and some of which are people. We're going to be continuously learning how to do that for the next decade." In addition to continuing to learn how to make better use of the data at Penn Medicine, the organization's executives said they hope to spread the word about what is possible through data analysis. "One thing that I really want to make other health systems aware of is that they can do this too," Draugelis said. "It's hard, but not as hard as people think it would be. The keys are having the data, having the clinical team that's ready, and having them integrate the science with their patients." Restuccia agreed that more health systems should be data driven. "At the highest level we feel institutionally that, if a health system isn't mining its data to guide clinical care, it's like leaving money on the table in a poker match.
It's a shame that you did it," he said. The teams are critical, Restuccia added. "You have to have almost hand-to-hand, shoulder-to-shoulder monitoring of the initial results to make sure they're what you thought they would be," he said. "If they're not, then tweak the systems to bring them up to your expectations." The upcoming generation of doctors is eager to embrace the clinical opportunities embedded in EHR data, Hanson said. "We're seeing a lot of interest from medical students and residents who believe they have models we should be experimenting with. Not surprisingly, the younger people in training are the ones with the ideas about how we can take advantage of data to improve care," Hanson said. IT will have to be prepared to support those doctors, Restuccia noted. "It's going to become the norm. I am not a clinician, but it seems to me that, in the past, practicing medicine was as much an art as a science," he said. "The answers are in the data. The art is how you apply the data, and now clinicians have more data than ever before." 2016-05-02 19:57 Curtis Franklin
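The marker-scanning step Draugelis describes, checking incoming records for heart-failure markers and tiering risk, might look something like the sketch below. The marker names, record shape, and two-tier rule are all invented for illustration; Penn Signals' real risk models are statistical, trained on eight years of clinical data, not a hand-written rule.

```javascript
// Hypothetical sketch of flagging patients for cardiac review.
// Marker names, the record shape, and the tiering rule are invented here;
// the real system derives risk from trained models over EHR history.
const HF_MARKERS = ['elevated_bnp', 'reduced_ejection_fraction', 'loop_diuretic_use'];

function flagForCardiacReview(ehrRecord) {
  // which known heart-failure markers appear in this record's findings
  const hits = HF_MARKERS.filter(m => ehrRecord.findings.includes(m));
  if (hits.length === 0) return { flagged: false, risk: 'none', hits };
  // crude stand-in for the trained model: more markers -> higher tier
  return { flagged: true, risk: hits.length >= 2 ? 'high' : 'moderate', hits };
}

// A patient admitted for something unrelated can still be flagged,
// which is exactly the case the article describes:
const result = flagForCardiacReview({
  admittedFor: 'ankle fracture',
  findings: ['elevated_bnp', 'reduced_ejection_fraction'],
});
// result.flagged === true, result.risk === 'high'
```

The value of running this continuously against the EHR stream, rather than waiting for a referral, is that patients who arrive for unrelated issues still reach the cardiac team.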

25 FedEx Services Eases The Pain Of Customs Clearance As anyone who has traveled across borders can attest, clearing customs is not always the quickest or easiest part of the journey. So, too, for packages large and small that are imported or exported between countries. Customs regulations for shipments vary by country and can present a considerable challenge for any businesses involved in the importing and exporting of goods. FedEx Services set out in May 2011 on a project that aimed to make it easier for its business customers to navigate customs. The project, called Clearance Customer Profile (CCP), is designed to accelerate the passage of FedEx shipments through customs agencies. FedEx Services, with about 12,500 employees, provides IT, sales, and marketing support for sibling FedEx Corp. subsidiaries FedEx Express and FedEx Ground. FedEx customers must provide information to local customs agencies about goods being imported or exported. Back in 2011, that information was stored in a variety of FedEx systems and wasn't always easily accessible. Each country or region would have its own system, an inefficient approach that often required FedEx personnel to contact customers and consignees to obtain the information required by customs officials. "Basically, we had a very real problem in that we couldn't keep up with the demands of the customs clearance space because we were supporting too many local systems that did the same thing," said Paul Rivera, VP and CIO for FedEx Latin America, in a phone interview. "The systems weren't connected, and the information didn't flow or update accordingly." The IT team recognized the complexity of managing disparate systems and brought the matter to the attention of corporate leadership, Rivera said. With strong support from the company's business units, FedEx Services made a plan to address the issue on an enterprise level. CCP debuted in April 2015, and by June it served 11 countries.
The project involved FedEx team members from various departments in the US, Canada, and Latin America/Caribbean. Participants representing international clearance operations, brokerage entry filing operations, customer service, global trade services, and sales organizations were involved. Team members helped define the business requirements and participated in the user acceptance testing prior to launching each component. Regional legal teams were also involved to ensure compliance with local laws. CCP securely stores customs clearance instructions, product details, tax identification numbers, approved brokers, and images of regulatory documents such as powers of attorney, licenses, and permits. It supports five authentication levels that provide varying degrees of information access: Guest, Limited User, Broker User, Normal User, and Super User. "We are proud of the team's work on the Clearance Customer Profile," said Rob Carter, executive VP and CIO of FedEx Corp., in an email statement. "The CCP is a direct product of our internal IT transformation and simplification initiative and a great example of IT working closely with our internal partners to meet corporate goals and add value to the business." (Editor's note: Carter is a member of InformationWeek's CIO Advisory Board.) Through its service-oriented architecture (SOA), CCP allows various FedEx systems around the world to obtain up-to-date information for customs clearance. To communicate with older systems ill-suited to SOA, CCP can provide notifications over the company's messaging infrastructure to maintain the accuracy of records. CCP relies on a variety of technologies, including J2EE, Hibernate, Oracle Database, Spring, TIBCO JMS infrastructure, monitoring tools like HP OpenView Monitor and AppDynamics, Symphony tools, and internal FedEx development processes. But this isn't a story about specific tools.
"The real power is not the individual technologies, but how we've assembled them into a more flexible IT systems model," said Rivera. "The power is in the way we used them to solve a problem." At the moment, CCP is used internally, and what the customer sees on the other end is the company's ability to clear packages more efficiently. And that's no small thing. Rivera recounted the words of an executive he spoke with four years ago: "In transportation companies, you win and lose in clearance." Nevertheless, FedEx Services is working to expose CCP to external customers for self-service. "It's a much better experience for a customer to self-manage that," Rivera said. The project has allowed FedEx to retire six of its twenty-six country- or region-specific profile systems, resulting in approximately $60,000 in savings on annual IT maintenance costs and $80,000 in savings annually in customer communications support costs. It also advances the company toward its goal of retiring its mainframe systems. Beyond the typical obstacles of dealing with a global project, one of the biggest challenges for the project team was scope. "When designing enterprise-level systems, you have to be very intentional about what should or should not be included," Rivera said. Rather than reproducing all the features found in legacy systems, said Rivera, the team worked to make the system modular and flexible. That makes it well-suited for handling regional requirements. As an example, he pointed to India, where clearance rules and regulations are managed at the state level, and relevant information has to be available for re-use at multiple points. CCP has been a boon for the company's 400,000 customers. It has reduced the average amount of time required to clear a shipment through customs by 20 to 40 minutes. This, in turn, has trimmed the amount of time shipments spend in bonded warehouses and the associated storage fees.
With a 10% reduction in held shipments, FedEx has seen 16% fewer disputed invoices and customer issues. That translates to somewhere between $25,000 and $175,000 in annual savings, the company estimated. Rivera stressed the importance of listening to team members. "Many ideas are born out of necessity," he said. The project has paid off in terms of customer satisfaction and retention. FedEx Services said that customers have become familiar with CCP and now ask to be included in the database, knowing that inclusion will ease the transit of their shipments. Richard Smith, VP of global trade services for FedEx Express, is responsible for global clearance and regulatory compliance functions. In an email, he explained that FedEx is transforming its IT systems from one-off project solutions to systems that can be utilized across the enterprise. "The best advice that I can give to a CIO or IT leader is to look at systems from the outside in, as opposed to inside out," said Smith. "Design them with the customer in mind, whether that customer is internal (i.e., operations), external, or both. This will ultimately lead to a more elegant and flexible solution, as customer needs are varied and often rapidly changing." 2016-05-02 19:54 Thomas Claburn
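CCP's five authentication levels suggest a simple ordered access model. A minimal sketch, assuming the tiers rank in the order the article lists them (an assumption; the article does not state their precedence or which fields each tier gates):

```javascript
// Hypothetical sketch of CCP's tiered access levels. The tier names come
// from the article; their ranking and the gating rule are assumptions.
const LEVELS = ['Guest', 'Limited User', 'Broker User', 'Normal User', 'Super User'];

function canAccess(userLevel, requiredLevel) {
  const have = LEVELS.indexOf(userLevel);
  const need = LEVELS.indexOf(requiredLevel);
  if (have < 0 || need < 0) throw new Error('unknown authentication level');
  return have >= need; // higher tiers inherit lower tiers' access
}

// e.g. a broker can read broker-gated documents, a guest cannot:
// canAccess('Broker User', 'Broker User') -> true
// canAccess('Guest', 'Broker User')       -> false
```

An ordered-tier model like this is the simplest way to get "varying degrees of information access"; a real system would likely layer per-document permissions on top.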

26 NYC tech giants band together to create industry lobbying group Some of the nation’s largest technology companies are locking arms to create a New York City-based lobbying group, one that AOL CEO Tim Armstrong and venture capitalist Fred Wilson believe is now necessary to better represent the city’s tech community before local and state governments. Armstrong and Wilson, who are serving as co-chairs of Tech:NYC, say the group’s objectives are twofold. Primary goals are to support the growth of the tech sector in New York City, to increase civic engagement by leaders of the NYC tech community and to advocate for policies that will attract tech talent, jobs and opportunity to the city. Furthermore, the group will advocate for policies that underscore a regulatory environment that supports the growth of tech companies and tech talent in the city, promote inclusivity and ensure access for all New Yorkers to connectivity, technology tools and training. In announcing the group, Armstrong and Wilson point to a number of recent tech-related public policy issues in the city including a data breach bill, local drone regulation and how to handle ride-sharing and home-sharing services like Uber and Airbnb, respectively. The group is off to a solid start with an impressive list of founding members including AOL, Bloomberg, Facebook, Google and Union Square Ventures. Other members include Airbnb, eBay, Etsy, Kickstarter, Snapchat and Uber, just to name a few. Armstrong and Wilson said they aren’t alone in the challenge, citing the recent creation of a similar group, sf.citi, as inspiration for the movement. 2016-05-02 18:45 Shawn Knight

27 The Elite 100: Celebrating The People Who Make IT Happen The InformationWeek Elite 100 tracks the IT practices of the nation's most innovative organizations and examines their business practices across core areas of operations, including technology deployment, IT budgets, business technology infrastructure, and strategies. In reviewing the hundreds of submissions for our 28th annual ranking, we were impressed with the level of transformation happening in IT organizations across every industry sector. We see creative new uses of machine-to-machine and Internet of Things technologies. We see big data and real-time analytics being applied in ways that directly influence business decision-making. We see predictive analytics being used, quite literally, to save lives. We see applications being applied to global operations in ways that advance the speed of business. And we see digital services that are helping companies completely replace old business models. To talk about technology transforming business only tells part of the story, though. At the end of the day, it's the people behind the technology who are truly the agents of change. And that's where the most exciting shifts are happening. Design thinking is being applied to the creation of digital goods and services. The customer-first approach to IT undertakings is becoming universal. A transformation is happening in the skill sets and expertise being tapped to develop new products and services. That's not to say all is rainbows and unicorns in the world of IT. The pace of change has introduced new challenges -- such as managing an infrastructure that incorporates everything from mainframes to advanced mobile APIs. It’s put profound pressure on enterprise CIOs. In this year's survey of our Elite 100, we asked an open-ended question about the biggest mistakes made this year, and the 92 responses we received offer a treasure trove of lessons.
The mistakes shared serve as a microcosm of very real IT pain points. These and other challenges aside, what we're celebrating today is the remarkable ability of IT organizations to adapt to a rapidly changing environment at an unprecedented pace. With our Elite 100 rankings, we pause to acknowledge the remarkable achievements of the people who make IT happen. 2016-05-02 18:31 Susan Nunziata

28 Industry Spotlight: Build performance tests into code Agile software teams can’t afford to clog their code pipelines with antiquated testing practices. While many shops have moved to Continuous Integration and Continuous Delivery, they also need Continuous Testing capabilities to achieve a continuous ecosystem. “Testing is no longer just about ensuring quality. It’s a gateway that allows you to move from one stage within your continuous process to another with confidence,” said Alon Girmonsky, CEO of BlazeMeter. BlazeMeter, which is well known for its open-source-based continuous performance testing capabilities, recently added enhanced functionality so developers can run performance tests and API-based functional tests as part of their Continuous Integration and Delivery processes. “The industry is moving to an era where everything is continuous. It’s continuous because you’re constantly moving from development to production in a relatively short time, but the shift to continuous practices has its challenges,” said Girmonsky. “You need different tools and infrastructure to support it. Tools like Jenkins, TeamCity, and Bamboo, as well as AWS CodePipeline, help you connect continuous integration and delivery with continuous testing.” Master Continuous Integration processes Continuous Testing is a vital part of an end-to-end continuous process. Rather than being a distinct step somewhere between a build and a release candidate, Continuous Testing, when integrated into the entire process, helps speed software delivery. It also helps ensure the reliability of code as it moves through the pipeline. For example, before code is committed to a repository, it has to be tested. If the code passes the tests, it gets committed. If it doesn’t, it has to be fixed. When testing is used as a gateway to check code into the repository, as enabled by BlazeMeter, developers can be confident that the code they commit will not break something else in the repository.
Continuous Testing also helps developers avoid broken builds. When Continuous Testing is used during each phase of software development, it helps ensure the stability of the next stage. Finally, the release candidate itself is tested to determine whether it can be deployed. “We took it upon ourselves to make the process of testing easy and well-integrated into the ecosystem,” said Girmonsky. “Because we integrate with GitHub, Jenkins, and other tools, it makes sense to be part of the process.” By simplifying testing and integrating BlazeMeter into familiar tools, more people on the software team can take advantage of continuous testing—not just DevOps or QA engineers—but also developers within the engineering teams. “Essentially, we’re enabling an entirely new way to improve the outcomes of your continuous processes,” said Girmonsky. “It’s groundbreaking in its simplicity. You can build tests as you go—such as while you’re building your code—and get the instant verification you need. In the past, you’d have to wait for a performance engineer to add the verification to his to-do list and to check afterwards to ensure nothing breaks.” Why performance testing is important Adding performance testing to Continuous Integration reduces the risk of performance degradations when new features are added or a bug is fixed. For example, if the latest commit inadvertently causes sluggish response time, the continuous integration system will automatically mark the build as “failed.” The alert makes it very easy to identify the cause of a problem and who is responsible for it. When performance tests are run in the early stages of software development, it is easier to find and fix the issue. “Everything needs to happen automatically behind the scenes,” said Girmonsky.
“If you’re a developer, you don’t even need to be aware of the suite of tools that are running in the background to make it all happen.” With BlazeMeter, developers can spend as little time as possible away from their IDEs. Within the IDE, they can build their tests and make sure the tests run in time—all in a matter of minutes. “This is a really serious enhancement to our product. What we’ve had up until now is a suite of tools, each of which does something to address a certain use case,” said Girmonsky. “Now we have a framework that allows you to use a human readable and writable language. With this language, you can define what your tests look like, when you want them to run, what it means if a test fails, and what the criteria are for failing a test.” Automation is critical Software teams need the ability to run hundreds of tests at each stage of the software life cycle, and there’s no time to do it manually. BlazeMeter automates tests based on human-supplied criteria so organizations don’t have to sacrifice speed or efficiency. In fact, they can improve both. “When we were building systems in a waterfall fashion, we were releasing software every few months or every year. In that case, it made sense to treat testing as an event, and as part of the event, you’d create the test, run it, analyze it and invest a lot of time because it was only required every few months,” said Girmonsky. “In an agile environment, the event can happen every minute so you don’t have the time to write scripts and run manual tests.” BlazeMeter builds tests automatically and programmatically generates the tests in real time, according to developer-defined specifications. “Developers create the specification, but the tests automatically generate themselves just before they need to run,” said Girmonsky. “This is one of the major value-adds you get with BlazeMeter.” One-up the competition Many organizations have become agile to stay competitive in today’s dynamic business world.
With Continuous Integration popular and more organizations adding Continuous Delivery to the mix, they are continuing to become even more agile. However, without continuous testing, it’s entirely possible to accelerate the delivery of brittle code that doesn’t scale or perform well. “Today, you need to rethink your testing strategy to adapt to this new age. It involves running literally a hundred times more tests than you used to, and you need to do it in an automated fashion,” said Girmonsky. “You can’t use people; it needs to be machine-driven. On the other hand, the machines aren’t smart enough to actually generate the tests you need.” BlazeMeter combines the best of both worlds by enabling developers to define tests and their criteria so the tests can run automatically in the background. Better still, BlazeMeter helps software teams think differently about testing so they can continue to improve the effectiveness of their continuous processes. “Testing is not a one-time event. It’s continuous,” said Girmonsky. “Continuous Testing has to be part of the ecosystem because there are Jenkins and other systems in place that serve as a conveyor belt of the new IT.” DevOps teams use BlazeMeter to quickly and easily run open-source-based performance tests that ensure the delivery of high-performance software. As part of the continuous ecosystem, BlazeMeter can instantly test against any mobile app, website, or API, at any size or scale to validate performance at every stage of software development. The rapidly growing BlazeMeter community comprises more than 100,000 developers, many of whom work at global brands such as Adobe, Atlassian, The Gap, NBC Universal, Pfizer, and Walmart. “Load and performance testing should be part of every software delivery workflow, from legacy monolithic apps to continuously delivered microservices,” said Girmonsky. 
“We generate test traffic from cloud-based locations around the world, so your tests reflect real-world conditions and do not break in unpredictable ways.” Learn more at www.blazemeter.com. 2016-05-02 17:45 Lisa Morgan
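The "gateway" idea, failing a build automatically when a commit degrades response time, reduces to a small check at the end of a CI stage. A minimal sketch; the result shape and budget numbers here are invented, and a real BlazeMeter gate would be configured through its CI integrations rather than hand-rolled:

```javascript
// Illustrative CI performance gate: fail the build when the latest
// load-test run breaches a latency/error budget. Thresholds are invented.
function percentile(samplesMs, p) {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // nearest-rank method
  return sorted[Math.max(0, rank - 1)];
}

function gateBuild(samplesMs, errorCount, budget = { p95Ms: 800, errorRate: 0.01 }) {
  const p95 = percentile(samplesMs, 95);
  const errorRate = errorCount / samplesMs.length;
  const passed = p95 <= budget.p95Ms && errorRate <= budget.errorRate;
  // a CI job would exit non-zero when !passed, marking the build "failed"
  return { passed, p95, errorRate };
}
```

Running this on every commit is what turns a sluggish response time into an immediate red build instead of a problem discovered weeks later by a performance engineer.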

29 Intel 'Kaby Lake' Core i7-7700K CPU details leaked in benchmark results Intel’s processor roadmap has looked vastly different as of late compared to, say, several years ago. Gone is the normalcy, replaced with odd occurrences like Broadwell’s unusually short run before being replaced by Skylake. Nevertheless, Intel pushes forward with plans to launch its third processor family based on the 14-nanometer process in the not-too-distant future. Now, thanks to a leaked SiSoft Sandra benchmark, we have a pretty good idea of what to expect when the next flagship arrives. The Core i7-7700K is a quad-core processor (eight logical cores with HyperThreading) clocked at 3.6GHz (Turbo up to 4.2GHz) that packs 256KB of L2 cache and 8MB of L3 cache. In comparison, today’s Core i7-6700K is clocked at 4GHz with a max Turbo frequency of 4.2GHz. As for the integrated graphics, the chip packs 24 execution units with Sandra showing a clock speed of 1,150MHz. Like its predecessor, the i7-7700K will support the LGA1151 package and will be compatible with current motherboards. Kaby Lake will also add native USB 3.1 support, native HDCP 2.2 support, full fixed-function HEVC main10 and VP9 10-bit hardware decoding. In other words, users can expect slightly more efficient versions of Skylake with a handful of new features. As always, keep in mind that these results have not been confirmed. If legit, they could have been run on an engineering sample, which may or may not reflect what the final consumer hardware will look like. A combination of the complexity involved in die shrinks mixed with less urgency (AMD isn’t exactly a threat these days) plus a cooling PC market has ultimately led us to where we are today – an uninspired desktop CPU market. Improvements are still coming but they’re at a rate that’s slower than before and much less impressive.
That said, if you’re running an Intel chip that’s even a few generations old, you’d probably be best served to wait until the 10-nanometer Cannonlake chips arrive next year versus upgrading to Kaby Lake. Hell, I’m still running a Core i5-2500k (non-overclocked) that’s more than five years old at this point. Combined with 16GB of RAM and a solid state drive, it rarely shows its age (it also helps that I'm not much of a PC gamer these days). With the right chipset (I'm on an H67-based motherboard), I could easily run the chip at 4GHz 24/7 without breaking a sweat. It defaults at 3.3GHz and has seen 4.4GHz on a few occasions many years ago. Outside of Cannonlake, hardware enthusiasts are keeping a close eye on developments surrounding AMD’s next microarchitecture. Codenamed Zen, AMD’s forthcoming 14-nanometer offering was designed from the ground up and is thought by some to be stout enough to once again compete with Intel’s dominant Core family. Only time will tell if that prophecy pans out. AMD is expected to release its new flagship in October 2016. 2016-05-02 17:45 Shawn Knight

30 Industry Spotlight: Beautiful mobile apps happen when design and development are equals Is your enterprise prepared to surf the wave of enterprise omnichannel software development that’s now on the horizon? The surge in mobile demand comes thanks to a vast and expanding universe of devices, from tablets to smartphones to wearables and sensors, and the saturation of consumer app markets. Gartner forecasts that over the next two years, most IT organizations will struggle to release software at even a fifth of the pace of accelerating demand. Without an omnichannel app strategy in place, most enterprises will fumble and fail in their attempts to release high-performing apps with consumer-quality user experiences. User experience comes first Mobile applications have an extremely high abandonment rate. Get the user experience wrong, and you’ll often never get that user back. While enterprises may have been able to mandate employee usage of their apps in the past, today’s bring-your-own-device environment and the pace of innovation make having secure, efficient and delightful apps a requirement for success. While developers with JavaScript or native skill sets can code rich interaction flows, those requirements usually come earlier in the process, from designers responding to stakeholders. In the fast-paced mobile world, how do you get more iterative? For most designers, a native app is still a black box, so many mobile efforts all too often fall back on legacy Web 1.0-era processes: creating wireframes, style guides and images, and submitting them to development. “You build all this in a very waterfall process and it’s all very throwaway. The whole cycle is extended to weeks or months, and it’s never going to match the requirements—and the developer has to struggle to reflect the designer’s intentions. Meanwhile, you have no users.
You don't even know if this is what they want," said Ed Gross, Orlando-based Vice President, Product Management for Austin and Hyderabad-based Kony Solutions.
Tools for mobile flourish
The design/development dichotomy began to change with Web 2.0. New web design and prototyping tools started to emerge, including Axure, Balsamiq, Dreamweaver, OmniGraffle, Macaw and many more. These offered a better workflow for Web design and development, opening up the capability to create rich prototypes and templates for developers. Today, a similar set of tooling choices is beginning to flourish around mobile. It's likely that enterprise app needs will push the industry toward a more mature set of truly cross-platform processes for app development. One such solution is Kony Visualizer 7.0, the visual application design and development solution for Kony's Mobility Platform for phone, tablet, Web, or connected devices. The new release, which enables cross-platform mobile app development, also supports voice input for Apple Watch apps or intelligent control of smart homes using new Internet of Things (IoT) capabilities. Underlying native cross-platform APIs let developers focus on improving omnichannel app productivity and life cycles without needing prohibitively deep development skill sets for each technology. "Legacy tool sets give designers access to bare metal. Others, like , provide all the APIs for C# developers. What we do is different: It's truly cross-platform, providing the simplicity of JavaScript development and design tools," said Gross. Achieving a true code-design balance in a single rapid prototyping platform means that developers can build apps using either low-code visual techniques or drop into full JavaScript development. Meanwhile, unique tools to share native prototypes and app designs, with real-time app previews and annotation support, mean that stakeholders can give feedback on the apps as they progress.
Doing the front-end work
While many mobile tools are developer-oriented rather than designer-oriented, many developers don't want to do the front-end work. That's what's unique about Kony's Visualizer value proposition: Designers can hook into all the underlying drawing SDKs and implement the front end in concert with developers or prior to handing it off to them. With the Masters framework, application UX and logical assets can be reused and instantiated across phone, tablet, and desktop, providing a significant boost in design and development productivity. New in Visualizer 7.0, an extension for Photoshop CC converts all of the layers, along with layer styles, to a Visualizer project, saving time and rendering bulky style guides and documentation obsolete.
Democratizing app development is key
With so much at stake and so many form and usability factors to consider in mobile/wearable/sensor/tablet app development, enterprises can ill afford to isolate omnichannel development teams from designers, users and stakeholders. Rather, now is an opportunity to develop omnichannel centers of excellence as apps are launched, tested and improved in rapid iteration. "I like to home in on how Visualizer is better connecting the business and developers: Getting use cases out, selling them to the business, for all devices—mobile, phone, tablet, desktop—without relying on development having to redo the design. They have all the front end of that app done for them," said Gross. As an example, one customer, a major Asian airline, used Kony to develop all its mobile in-flight customer applications. They were able to get apps into production quickly, then test them for usage patterns before iterating again. That continuous improvement process won't happen if sub-par applications become the norm. A fan of the book City in Mind: Notes on the Urban Condition by James Howard Kunstler, Gross thinks there's a parallel to today's omnichannel app development imperative.
Kunstler examines diverse cities from classical Rome and Napoleonic Paris to the tangled sprawl of Las Vegas. "The pedestrian nightmares of some cities are the unintended consequences of a top-down designer working without an end user in mind," according to Gross. In contrast, "The Holy Grail of app development would be to enable upstream folks in the process—designers, business owners, and business analysts—to be empowered to participate in the design and development process. I use our tools every day. The more we can collaborate and share the same technology platform, the better and more efficient the whole process becomes. Getting the business process in front of users as early on as possible is so important. That's going to have a huge impact on creating centers of excellence around omnichannel application development," Gross said.
Five tips for enterprise omnichannel development
Enterprise mobile and wearable apps face a slew of different requirements beyond simple usability, from user privacy and security to data collection and integration. Here's what to consider in an omnichannel development platform:
1. Middleware hooks. For enterprises, no omnichannel app is an island: It's critical to connect mobile apps to middleware fabric, including mobile backend as a service (MBaaS) if needed.
2. Microservices architectures. An app services model connects to the front end with JavaScript or another language and provides access to back-end data.
3. Up-front API management. Creating APIs is another key consideration as part of front-end design activities.
4. Device-specific functionality. New devices and native UI elements should be easily accessible. Visualizer lets teams visually build Apple Watch apps, use new iOS device APIs like WatchKit and HealthKit, and work with the Windows 10 Universal Windows Platform.
5. Security.
Finally, enterprise software development governance requires reliable security protocols and technologies to protect source code and business logic, obfuscate control flow and use appropriate cryptography to block static and runtime attacks. 2016-05-02 17:39 Alexandra Weber
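The app-services model in tip 2, a small service the front end calls for back-end data, can be sketched in framework-free Python. The /orders route and the sample data here are hypothetical illustrations, not any vendor's actual API:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

# Hypothetical back-end data the app service exposes to the front end.
ORDERS = {"1001": {"status": "shipped", "items": 3}}

class AppService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route like /orders/1001 -> look up the order and return JSON,
        # which a JavaScript front end would fetch and render.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "orders" and parts[1] in ORDERS:
            body = json.dumps(ORDERS[parts[1]]).encode()
            self.send_response(200)
        else:
            body = b'{"error": "not found"}'
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep the sketch quiet; a real service would log requests.
        pass

def start_service(port=0):
    # Port 0 lets the OS pick a free port; useful for local testing.
    server = HTTPServer(("127.0.0.1", port), AppService)
    Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A front end would then call `GET /orders/1001` and render the returned JSON; an MBaaS layer (tip 1) would sit behind a service like this rather than in it.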

31 Chipmaker Marvell appoints Richard Hill chairman May 2 (Reuters) - Chipmaker Marvell Technology Group Ltd has appointed Richard Hill its chairman, as part of an agreement it reached with activist hedge fund Starboard Value LP last week. Marvell had reached an agreement with Starboard to add to its board three independent directors nominated by the hedge fund. Hill, whose appointment came into effect on Sunday, is also among the four independent directors that Yahoo Inc agreed to add to its board last week under pressure from Starboard. He is also on the board of software maker Autodesk Inc . Starboard, which has been agitating for changes at Marvell since early this year, has a 6.5 percent stake in the chipmaker. Peter Feld and Oleg Khaykin are the other two directors that Marvell added to its board. (Reporting by Kshitiz Goliya in Bengaluru; Editing by Kirti Pandey) 2016-05-02 17:23 CNBC

32 Nvidia settles patent dispute with Samsung ahead of ITC ruling Nvidia and Samsung have agreed to settle all pending intellectual property litigation between the two in U.S. district courts, the U.S. International Trade Commission and the U.S. Patent Office. The move came just hours before the ITC was slated to decide whether it would ban Nvidia products from sale in the US. According to a joint statement released today, the companies agreed to license a "small number of patents by each company to the other," but there will be no broad cross-licensing of patents or other compensation. Nvidia originally filed suit against Samsung and Qualcomm in September 2014, accusing them of ripping off its graphics technology for their smartphone chips. But things sort of backfired for the company, as Cnet notes. Instead of collecting patent royalties from smartphones and tablets with Samsung and Qualcomm chips, it had an ITC administrative law judge rule against it and invalidate one of Nvidia's three patents because the technology had already been covered in previously known patents. Samsung countered with a suit of its own, claiming Nvidia had infringed upon patents related to some of the basic circuit designs that save costs for manufacturers and enable better video performance. In December the ITC upheld its initial ruling and said that Nvidia actually infringed on Samsung's patents. As a result of today's agreement, Nvidia will also be dropping its case against Qualcomm. 2016-05-02 17:00 Jose Vilches

33 'Uncharted 4' multiplayer maps and modes will be free, all other paid content can be earned in-game Uncharted 4: A Thief's End is set to drop on May 10 exclusively for the PlayStation 4. Like most modern games, it'll be supported by DLC post-release, although in a rather rare twist, developer Naughty Dog revealed on Monday that all future DLC maps and modes will be free of charge. In a post on the PlayStation blog, lead game designer Robert Cogburn said the approach represents an entirely new direction for the studio. The main reason Naughty Dog wanted to release all of the maps and modes for free was to preserve the multiplayer community. Not all players purchase map packs, which leads to fragmentation of the install base. As more paid map packs are released, the fragmentation only widens. By giving all of the new maps and modes away for free, Naughty Dog is ensuring this won't become a reality. The hope is that the community will stick with the game for the long haul. Offering multiplayer maps and modes for free isn't an entirely new idea but it's still pretty rare. Gears of War 4, which lands on the Xbox One in October, will offer multiplayer maps for free, albeit on a unique rotating basis. As GameStop notes, the only way to guarantee you can play a specific map whenever you want is to buy it. 2016-05-02 16:15 Shawn Knight

34 The next Battlefield game will be unveiled this Friday The Battlefield versus Call of Duty rivalry has existed for many years. This week, the franchises are competing yet again, as both first-person shooters reveal details about their next installments. We know that the upcoming Call of Duty, called Infinite Warfare, will arrive on November 4. And now an event timer has appeared on the Battlefield website that is counting down to 4 PM EST on Friday, May 6, when we'll get to see the first military-based Battlefield game in three years. The new title will be unveiled during a "world premiere event," followed by a live Twitch stream on the BF channel, where fans will get "a first look at the future of Battlefield." "Get insight into the minds of the developers as you hear from creative director Lars Gustavsson and lead producer Aleksander Grondal on the past, present and the future of Battlefield," wrote EA in a press release. In December, Dice development director Dan Vaderlind tweeted that now that Dice had shipped Star Wars: Battlefront, part of the team would be moving onto the next Battlefield game. We still don't know anything about the upcoming Battlefield; it could be Battlefield 5, Bad Company 3, or perhaps even follow Call of Duty's lead and re-introduce some futuristic weaponry to the series by returning to the Battlefield 2142 spin-off. After last year's cops-and-robbers-based Battlefield: Hardline wasn't greeted with universal critical acclaim, EA CFO Blake Jorgensen said the series will return to its military-style roots, and that the next game will be a "fun, new Battlefield." There have been rumors that the upcoming title will take place during the First World War. The series started with , so setting the game in the early part of the twentieth century may not be beyond the realms of possibility. We'll find out what EA has in store for players this Friday. 2016-05-02 15:30 Rob Thubron

35 Micro Focus announces completion of Serena Software acquisition As a way to increase its DevOps capabilities, Micro Focus completed its US$540 million acquisition of Serena today. "Our customers continue to look at DevOps as a way to deploy critical applications and services quickly and with greater reliability to meet business demands," said Stephen Murdoch, CEO of Micro Focus. "The Serena acquisition extends our ability to help customers meet these challenges so they can drive greater innovation faster with lower risk." (Related: How Micro Focus acquired Serena) The companies will aim to design and build business apps and services with better reliability, and to deploy those apps on a wider variety of platforms. They will also aim to improve the speed and efficiency of new business services through automated release and deployment solutions, according to Micro Focus. Micro Focus' portfolio of ALM solutions spans mainframe environments, distributed systems and the cloud, and Serena will add capabilities in software development, software configuration and change management, according to Micro Focus. 2016-05-02 15:07 Madison Moore

36 Nvidia's 365.10 drivers are optimized for Battleborn, Overwatch, Paragon and Forza 6: Apex Nvidia has a new set of Game Ready drivers out today. The GeForce Game Ready 365.10 WHQL drivers are said to be optimized for several new and upcoming games including Forza Motorsport 6: Apex. The drivers include optimizations for Battleborn, the hero shooter from Gearbox Software that mixes first-person shooting with MOBA-style gameplay and launches tomorrow, as well as Overwatch, Blizzard's first original franchise in years. The open beta for Overwatch gets started later this week. Nvidia's latest drivers will also deliver the best results in Paragon, Epic Games' MOBA that's currently in closed beta. The Unreal Engine 4-powered title will be available through an open beta starting this weekend. Turn 10's Forza Motorsport 6: Apex may be the most anticipated title of all for the simple fact that it's the first Forza game to come to the PC. The beta races onto the scene May 5 and, with the latest drivers, you should be all set for the best possible experience, assuming of course that your hardware is up to snuff. It was revealed last week that you'll need at least an Nvidia GeForce GT 740 (or an AMD Radeon R7 250X) to play, although the recommended specs call for a GeForce GTX 970 or R9 290X. Want to game at 4K 60 FPS? You'll want a GTX 980 Ti or Radeon Fury X, 16GB of RAM and a Core i7-6700K processor (plus 30GB of solid state drive storage) or better. 2016-05-02 14:45 Shawn Knight

37 C#/XAML for HTML5 beta 8 released The free Visual Studio extension that allows developers to build HTML5 applications using C# or XAML has hit beta version 8 on its way to general availability. C#/XAML for HTML5 (CSHTML5) was developed by the software company Userware. Beta 8 includes more than 25 features requested by users, such as extensibility, styles and templates, and C#/JavaScript interoperability. "[CSHTML5] is the only extension that lets developers migrate their Silverlight apps to HTML5 without changing much of their code," said Giovanni Albani, CEO of Userware. In addition, the extension allows developers to create enterprise-grade Web and mobile apps in C#/XAML without having any JavaScript knowledge, he added. (Related: Microsoft tackles UI development for XAML) One of the biggest features of the latest release is the ability to use third-party components and add-ons to extend CSHTML5's functionality, according to Albani. Extensions include WebSockets, Print, File Open Dialog, File Save, Mapping Control, ZIP Compression and more. Developers also have the option to create their own extensions. Any extensions submitted to the company before June 15 using CSHTML5's Professional Edition will be entered to win a free Professional Edition perpetual license and one year of free updates. The solution's new styles and templates functionality allows developers to customize their built-in controls using XAML's advanced styling features. Developers can also go beyond C# and XAML with the new ability to write native JavaScript code and access existing JavaScript libraries. "For example, when migrating Silverlight apps that use third-party components, developers can replace those third-party components with similar JavaScript-based libraries," Albani said. "Other extensions exist that compile C# to JavaScript—such as Bridge.NET, Saltarelle and SharpKit—but none of those support the XAML language to build the user interface." CSHTML5 is expected to be released from beta in August or September of 2016. Developers can expect the next version to include expanded support for Silverlight and WPF features such as ChildWindow, animations, implicit styles, improved compilation performance, enhanced Chart controls, and improved WCF support. "For the last couple of years, we have been steadily releasing major new versions every eight to 12 weeks, and we will continue to do so in the foreseeable future," said Albani. The full road map is available here. 2016-05-02 14:30 Christina Mulligan

38 U.S. uncovers $20 million H-1B fraud scheme The U.S. government has indicted a Virginia couple for running an H-1B visa-for-sale scheme the government said generated about $20 million. Raju Kosuri and Smriti Jharia of Ashburn, Va., along with four co-conspirators, were indicted last week by a federal grand jury in Alexandria, Va., according to the Department of Justice (DOJ). The scheme involved, in part, setting up a network of shell companies and the filing of H-1B visa applications for non-existent job vacancies. Workers were required to pay their own visa-processing fees and were treated as hourly contractors, the DOJ alleged. Treating H-1B workers as hourly contractors is in violation of the program rules, the government said. More than 800 H-1B visa petitions were submitted over a period of nearly 15 years, according to court documents. The six people indicted in the case face prison time of anywhere from 10 to 30 years if convicted. Neither Kosuri nor Jharia could be reached immediately for comment. The H-1B program may be susceptible to fraud. In 2008, U.S. Citizenship and Immigration Services reported that a review of 246 randomly selected petitions filed in 2005 and 2006 revealed a fraud rate of just over 13 percent. The government's analysis found forged documents, fake degrees, and shell companies with fake locations. Jail time is an ongoing risk for people convicted of H-1B fraud, although it's difficult to know how many have actually been sent to prison for it. One H-1B fraud case that may involve a prison sentence is pending in Texas. A U.S. District Court judge in Dallas is scheduled to consider sentencing, as early as this week, for brothers Atul Nanda and Jiten "Jay" Nanda, for visa fraud following a jury verdict last November. They face up to 20 years in prison for using the visa program to create an on-demand workforce, the government alleged. 2016-05-02 14:28 Patrick Thibodeau

39 Amazon bolsters voice-based platform Alexa with investment in TrackR Amazon.com is investing between $250,000 and $500,000 in Bluetooth technology company TrackR to extend the reach of its Alexa virtual assistant, according to a source familiar with the matter. Alexa is the cloud-based system that controls the Amazon Echo, a speaker system launched by Amazon in 2014 that has emerged as a surprise hit. "Alexa" is the name the device responds to when users make requests, such as "turn on radio." Amazon and TrackR declined to comment on the size of the investment. Like Apple's Siri and Google's Google Now, Alexa is designed to answer questions or take other actions in response to simple voice queries. Unlike its rivals, Amazon allows non-Amazon devices to integrate Alexa technology. The investment in TrackR came through Amazon's $100 million "Alexa Fund," which invests in and supports technologies that broaden Alexa's abilities. Santa Barbara, California-based TrackR uses Bluetooth technology to help track lost items. Users put a small chip on an item, such as a wallet or TV remote, and can order those products to make a sound through their phone so that they can be found. If a TrackR customer loses an item out of Bluetooth reach, any TrackR user can connect to the device using the company's network to alert the owner of the lost item. The Alexa partnership will give the TrackR service a voice response capability and will also integrate in the other direction, enabling people to find their lost items via the Echo. "The ability to bring on more partners and realize that you are building an entire ecosystem—I think that is what was really important for us," said Chris Herbert, who co-founded TrackR with friend Christian Smith in 2009. TrackR raised $8.7 million last year in a Series A round led by Foundry Group.
Amazon has made roughly 15 investments so far through the Alexa Fund, including The Orange Chef, which helps connect kitchen prep devices, and Garageio, which makes a connected garage door opener. 2016-05-02 13:40 CNBC
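The crowd-finding flow described above, in which any TrackR user's phone can relay a sighting of someone else's lost item, can be sketched as a simplified service. This is an illustration of the general idea only, not TrackR's actual protocol or API:

```python
from dataclasses import dataclass, field

@dataclass
class CrowdFindService:
    # device_id -> registered owner
    owners: dict = field(default_factory=dict)
    # device_id -> last reported location
    sightings: dict = field(default_factory=dict)

    def register(self, device_id, owner):
        self.owners[device_id] = owner

    def report_sighting(self, device_id, location):
        # Called by ANY user's app that hears the tracker's Bluetooth
        # beacon while in range; the reporter never learns who owns it.
        if device_id not in self.owners:
            return None
        self.sightings[device_id] = location
        return self.owners[device_id]  # owner the service should notify

    def last_seen(self, device_id, requester):
        # Only the registered owner may query the last reported location.
        if self.owners.get(device_id) == requester:
            return self.sightings.get(device_id)
        return None
```

The key design point is that location reports flow through the service, so a passer-by's phone acts as a relay without exposing the owner's identity or letting strangers query item locations.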

40 New service helps small businesses sync and share files Enterprises of all sizes have become increasingly reliant on file syncing and sharing services. But for smaller companies, business-focused services can be expensive, leaving them reliant on free consumer services that offer limited space and functions. Backup and storage specialist Datto is launching a new inexpensive yet powerful file sync and share (FSS) service, leveraging the low cost basis of the company's 200 petabyte (PB) private cloud, coupled with a global license agreement with ownCloud, an established open-source leader in the FSS industry. It allows employees to access files from any device in any location and share them with others inside and outside the company. The result is a significantly superior value proposition for Datto's partners and their customers, making Datto Drive more affordable than other FSS offerings on the market. To launch the service, the company is also offering 1TB of storage free for a year to the first million customers. After that it will cost $10 per month for 1TB of storage per organisation, with unlimited users. "Current file sync and share services are overpriced solutions for small businesses," says Austin McChord, CEO and founder of Datto. "Anyone who's paying hundreds or thousands of dollars every month for FSS will immediately see Datto Drive as a welcome alternative. Priced at just $10 per terabyte per month for an unlimited number of users in an organization, Datto Drive is just one of many ways we're empowering our partners to help small businesses run more efficiently and cost effectively. Better yet, we're going to give away the first one million instances of Datto Drive for the first year for free." Small businesses can find out more and sign up to receive a free account on the Datto website. Photo Credit: Inq / Shutterstock 2016-05-02 13:04 By Ian

41 Get 50% off Scrivener for Windows via Deals Today on offer via our Desktop Software section of Neowin Deals, you can save 50% off Scrivener for Windows, the award-winning writing app used by many New York Times best-selling authors. Ever tried writing a novel in Microsoft Word? Trust us, you don't want to. That's why writing professionals around the world use Scrivener, the word processor and project management tool that stays with you from your first, unformed idea all the way through to the final draft. As you're writing, outline your ideas, take notes, and view research all at once. Scrivener takes all the tools you have scattered around your desk and makes them available in one application. Scrivener for Windows normally retails at $40, but you can pick it up for just $20 for a limited time. In addition, if you refer this deal via social media (below the 'Add to cart' button) and it results in a purchase, you'll get $10 credit added to your Neowin Deals store account. Get this deal or learn more about it | View more offers by Literature + Latte That's OK. If this offer doesn't interest you, why not check out our giveaways on the Neowin Deals web site? There's also a bunch of freebies you can grab here, as well as other great tech-related deals. You could also try your luck on The Lenovo & Turtle Beach Headset Gamer Giveaway; all you have to do is sign up here to enter this $1,279-value giveaway! How can I disable these posts? Click here. Disclosure: This is a StackCommerce deal or giveaway in partnership with Neowin; an account at StackCommerce is required to participate in any deals or giveaways. For a full description of StackCommerce's privacy guidelines, go here. 2016-05-02 13:04 Steven Parker

42 How to change your MAC address in Windows 10 Every network adapter has a MAC address, a unique value used to identify devices at the physical network layer. Normally this address stays the same forever, which may allow networks to recognize and track you. This isn't always a bad thing -- a network could use a MAC address to allow device access without authentication -- but if you're concerned, most MAC addresses can be changed in a few seconds. Windows 10 comes with MAC randomization built in. Click the network icon in your taskbar, then select Network Settings to begin. Click the connection you'd like to change, then scroll down and hit "Manage Wi-Fi Settings". Select "Use random hardware addresses" to turn it on, and your system should now use a different MAC address every time you connect to a new network. There are several reasons this might not work as advertised. If your driver doesn't support it, for instance, or some other network software has taken control, it's possible the option will be grayed out. Another complication is that Windows 10 always uses the same MAC address when connecting to the same network. That is, the system generates a random address for your first connection, but then reuses it for future connections. (Mathy Vanhoef's blog discusses the details here.) If you need some networks to recognize you then this might not be a problem, but if you prefer to go completely random -- or the option is grayed out, or you're not using Windows 10 at all -- then it's best to switch to a specialist MAC-changing tool. There's plenty of choice around, but Technitium MAC Address Changer works well for us -- it runs on Windows XP through 10, makes it easy to identify network connections, sets and restores MAC addresses in a click or two, is ultra-configurable and has a handy network monitor thrown in. Technitium MAC Address Changer is a freeware application for Windows XP and later. 2016-05-02 12:32 By Mike
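The Windows 10 behavior described above, a random address that is nonetheless stable per network, is easy to illustrate. A locally administered MAC has the second-lowest bit of its first octet set and the multicast bit clear, and per-network stability can come from hashing the network name together with a device secret. This is a sketch of the general technique; the function names and the exact derivation are illustrative, not Windows internals (Vanhoef's post covers the real details):

```python
import hashlib
import os

def random_mac():
    # Six random bytes, then force "locally administered" (bit 1 set)
    # and "unicast" (bit 0 clear) in the first octet.
    octets = bytearray(os.urandom(6))
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)

def per_network_mac(ssid, device_secret):
    # Derive the address from a hash of the network name plus a device
    # secret, so the same network always sees the same address while
    # different networks see unrelated ones.
    digest = hashlib.sha256(device_secret + ssid.encode()).digest()
    octets = bytearray(digest[:6])
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)
```

Because the first-octet bits are forced, the generated address can never collide with a manufacturer-assigned (universally administered) MAC.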

43 Australian Parliament considers implementing electronic voting for MPs Since its founding in 1901, the Australian Parliament has observed a number of traditions. Some traditions hark back to those of the British Parliament, such as declaring Parliament open in the Senate rather than the House of Representatives. This tradition primarily exists because a monarch or an appointed representative, such as the Governor-General in Australia, does not enter the House of Representatives. However, the way in which MPs formally vote could change in the future if a Lower House committee gets its way. Normally, when a voice vote has been challenged, MPs must physically move to either the right or left side of the Chamber to represent an affirmative or negative vote, respectively. Votes are then manually counted and the names of Members recorded before the result is announced. Committee MPs have been keen to retain this tradition but with the introduction of smart cards, which would be swiped across a reader physically located on either side of the Chamber. This would enable votes to be tallied in real time and potentially save the time required for manual counting and recording. However, a major drawback of a smart card system is that votes for absent MPs could potentially be cast if their presence in the Chamber was not verified. In any case, this is not the first time that electronic voting has been put forward. Back in November 1996, the Procedure Committee made a similar recommendation for electronic voting, with nothing to show for it. At the time, the cost to implement electronic voting in both Houses was estimated at AU$3 million over three years, while support costs topped AU$300,000 per year. Almost two decades on, it would be interesting to see how much a solution would cost today, given the advancement of technology since the 1990s.
Source: ABC News Australia | Australian flag image via Shutterstock 2016-05-02 10:08 Boyd Chan
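The smart-card scheme the committee floated, swiping a card at a reader on either side of the Chamber with votes tallied in real time, can be sketched as below. The presence check models the safeguard against the absent-MP drawback noted in the article; this is purely illustrative, not any proposed system's design:

```python
class DivisionTally:
    def __init__(self, present_members):
        # Verified presence guards against a card being swiped for an
        # MP who is not actually in the Chamber.
        self.present = set(present_members)
        self.votes = {}  # member -> "aye" or "no"

    def swipe(self, member, side):
        if member not in self.present:
            raise ValueError(f"{member} not verified present in the Chamber")
        if side not in ("aye", "no"):
            raise ValueError("reader side must be 'aye' or 'no'")
        # The latest swipe wins, mirroring an MP crossing the floor
        # before the division closes.
        self.votes[member] = side

    def result(self):
        # Real-time tally: counted the moment swipes are recorded,
        # with no manual counting step.
        ayes = sum(1 for s in self.votes.values() if s == "aye")
        return ayes, len(self.votes) - ayes
```

The time saving comes from `result()` being available immediately, versus physically counting Members on each side of the Chamber.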

44 Facebook Messenger to gain privacy-enhancing self-destructing messages With the ongoing debate about privacy and encryption, the rollout of end-to-end encryption to Facebook-owned WhatsApp came as little surprise. Now Facebook Messenger is set to gain a couple of privacy-enhancing features, including self-destructing messages. Already found in other messaging tools such as Snapchat and Telegram, self-destructing messages have been unearthed in Messenger for iOS version 68.0. As you would expect, the feature makes it possible to place a time limit on how long messages are visible, making it ideal for communicating sensitive information. SEE ALSO: WhatsApp's end-to-end encryption is not all it's cracked up to be As reported by VentureBeat, images of the feature in action were shared on Twitter by @iOSAppChanges. Screenshots show that the life of a message can be set in minutes, hours or days, after which time all traces of it will be deleted. It's not clear when the feature will be rolling out to users, but considering the popularity of Snapchat, it is likely to prove popular when it does get a public airing. Analysis of the Facebook Messenger app code also reveals references to a 'secret chat' feature, but no further information is available about this at the moment. If you've been waiting for additional privacy options in Facebook Messenger, it seems you may not have much longer to wait. Just keep an eye open for updates in the coming days and weeks. Photo credit: Romolo Tavani / Shutterstock 2016-05-02 10:03 By Mark
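Self-destructing messages of this kind boil down to a time-to-live attached at send time and enforced whenever the conversation is read. A minimal sketch of that mechanism (an illustration of the concept, not Facebook's implementation):

```python
import time

class EphemeralInbox:
    def __init__(self, clock=time.time):
        self.clock = clock       # injectable clock, handy for testing
        self._messages = []      # list of (expires_at, text)

    def send(self, text, ttl_seconds):
        # The sender picks the lifetime (the leaked UI offered minutes,
        # hours or days); we store an absolute expiry timestamp.
        self._messages.append((self.clock() + ttl_seconds, text))

    def read(self):
        # Purge anything past its expiry before showing the thread, so
        # expired messages leave no trace for the reader.
        now = self.clock()
        self._messages = [m for m in self._messages if m[0] > now]
        return [text for _, text in self._messages]
```

A production system would also have to delete the message server-side and on every recipient device, which is the hard part that a local sketch like this glosses over.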

45 Azure VMs with real GPUs will deliver a massive power boost Whether it's Photoshop filters, JavaScript in the browser or machine learning, GPUs are increasingly as important to computing performance as CPUs, but you haven't been able to take full advantage of that with virtualisation or in the cloud. Windows Server 2016 will let you get full access to the GPU inside a virtual machine, using a hardware pass-through setting called Direct Device Assignment, and because Azure runs on Windows Server, you'll get it there as well. "We leverage this technology primarily for GPUs and also for things like NVMe storage," explained Microsoft's Chris Huybregts at the Nvidia GTC conference. You can only use the GPU with one virtual machine, but that virtual machine gets access to all the features of the GPU, using the standard graphics driver (rather than the virtual GPU driver that Microsoft supplies for RemoteFX GPU virtualisation). Direct Device Assignment is how the new N-Series VMs on Azure get their GPUs. Designed for running applications that need high-performance graphics or that use the GPU for high-performance parallel computing, they were announced last September and are currently in preview. When they launch there will be two ranges of N-Series VMs, both with a choice of 6, 12 or 24 Xeon E5 CPU cores and one, two or four GPUs. The NV series uses Nvidia Tesla M60 GPUs and is designed for running visualisation and rendering software, while the NC series uses Nvidia K80 GPUs and is for GPU computing. "That means the entire GPU will be available in the virtual machine; that includes CUDA, OpenGL, OpenCL and DirectX," Huybregts said. You can run Windows Server or Windows 10 in the VMs, or Linux. "We understand that the world needs to know they can run on Linux – Linux is a first-class citizen for Azure," he promised.
There will be virtual machine images in the Azure Marketplace that are set up with applications ready to use (similar to the Azure Data Science virtual machines that bundle up useful tools for data science modelling like R Server and Python, for both Windows Server and Linux), or you can upload your own image, including the OS and applications you need. Huybregts wouldn't give any details on how Microsoft will use Grid, Nvidia's own graphics virtualisation technology – something Microsoft has mentioned previously for the N-Series VMs – but he did confirm there's work going on. "If you could see where we're going, you'd see we are working with Nvidia and the industry in general – but we're not talking about that today." (Nvidia Grid is what AWS offers in its G2 GPU-compute VMs, which use older Nvidia K520 graphics cards and use the Grid K520 drivers, which have to be loaded specifically, making them a little harder to set up.) He also wouldn't talk about when the N-Series VMs will come out of preview on Azure, or how much they're likely to cost. They're unlikely to be available before Windows Server 2016 is released, which is expected around September 2016. For comparison, running the Data Science VMs on Azure costs from $0.67 to $9.95 an hour if you use the Xeon E5-based G-Series virtual machines, depending on how many cores you need. Prices for the A-Series VMs with Xeon E5 and high-performance InfiniBand networking start at $1.46 an hour – and AWS G2 virtual machines cost between $0.76 and $2.87 an hour. Expect N-Series prices to be closer to these ranges than the $0.02 an hour you'll pay for the cheapest A-Series virtual machines on Azure. 2016-05-02 09:10 Mary Branscombe
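For a rough sense of scale, the hourly figures quoted above can be converted to always-on monthly costs. The 720-hour month is an assumption (a 30-day month running around the clock), and actual cloud billing granularity differs:

```python
# Hourly rates quoted in the article (USD per hour).
RATES = {
    "Azure G-Series (low end)": 0.67,
    "Azure G-Series (high end)": 9.95,
    "Azure A-Series InfiniBand (entry)": 1.46,
    "AWS G2 (low end)": 0.76,
    "AWS G2 (high end)": 2.87,
    "Azure A-Series (cheapest)": 0.02,
}

HOURS_PER_MONTH = 720  # assumption: 30-day month, VM never deallocated

def monthly_cost(hourly_rate, hours=HOURS_PER_MONTH):
    # Simple always-on estimate; pausing or deallocating the VM
    # reduces the bill proportionally.
    return round(hourly_rate * hours, 2)
```

So an always-on high-end G-Series VM runs to thousands of dollars a month, while the cheapest A-Series costs under $15, which is why the article frames N-Series pricing expectations against the former range.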

46 Going beyond the code: Things developers should care about

Software has become a key factor in the success or failure of a business. Every industry is going digital, and with most of the world already “plugged in,” their software needs to be fast, easy to use, bug-free and useful. Development shops are the powerhouses behind creating this software, but to be successful, a developer’s job has to involve more than just code. “Developers are responsible for taking clients’ wishes and making them come to life, taking users’ needs and coming up with a solution for them,” said Alyssa Nicoll, content developer at Code School. Because software touches so many facets of a user’s life, just looking at it from a technology perspective is a recipe for failure; there are so many other things developers have to look at to make sure they get it right, according to Marc Anderson, cofounder and president of Sympraxis Consulting. (Related: How evangelists get companies to engage users) “It doesn’t work to have somebody toss you requirements and then you go away for six months to deliver something back,” he said. “It won’t be what they wanted, and it won’t be what they asked for.” On top of writing code, developers have to juggle testing code, meeting business requirements, keeping up with new technologies, and making users happy. Here is what they should focus on in order to help the business and themselves become successful.

Their passions

This is an interesting time for developers. There are more software development jobs than there are developers, so they have an advantage when it comes to the job market. “In general, there is a lot of prosperity within the industry in terms of low unemployment and a lot of career choices,” said John Basso, cofounder and CIO of Amadeus Consulting.
“Developers have a lot more choice now, and so they can align their job with what their passions are.” According to Basso, the ability to align your passions with your career will not only help you become a better developer, but will also help you serve your business better and make the business more successful. “You get into this rhythm of doing specifically what either the client needs, your boss needs or your business needs, but sometimes you have to take a step back and ask why you are writing that code,” he said. “Then, once you figure out why, there might be an opportunity to really go in and not just write the specific code you are being asked to write in terms of solving a specific business problem, but figuring out a better way to solve the problem.”

The business

Once developers have a sense of the type of software they want to build, they need to figure out the kind of organization they are going to thrive in. “It always amazes me that somebody might interview for this large energy company as one interview, and later in the day interview for a smaller company,” said Basso. “One of those two companies might be a good fit for them, but it is very rare that a single individual would fit well into both of those companies.” According to Basso, a small startup may work in smaller teams, ask the developer to perform different jobs, and have its technology constantly changing. A larger, more established company might have developers doing one thing for a long time with larger teams. “It is not that one is good or bad, it is just that they are very different things,” he said. “If you take a person who is better suited to work at a large company, and put them in a small company, that is probably going to make that person miserable.” In addition to culture, developers should care about the mission of the business, and whether or not they agree with it.
According to Andrew Phillips, vice president of DevOps strategy at XebiaLabs, developers should ask themselves whether they care about what the company does, whether they have any kind of passion for the software they will be building, or whether they are just there to build a shinier widget. “The real job of a software developer is to solve business problems with technology, not just build cool technology; but to understand the business problem and translate that into technology,” he said. There will be some situations where developers need to accept a job in order to make a paycheck, but at the end of the day they should remember why they got into software development in the first place. “As many jobs can be, when you develop for hours on end, it can become tedious,” said Code School’s Nicoll. “It is important to remember the people you are helping at times like that. When the error messages are screaming and Google has no answers, remember why you got into development. Remember the passion and excitement you once felt when creating something that actually worked. Remember the users whose lives you are improving while you are hacking away at a bug.” If you choose a job that you don’t believe in and that goes against your own values, it will show in your work, even if it is unintentional, according to Sympraxis’ Anderson.

User experience

“User experience” is not just a buzz phrase: It is something that actually matters. A big part of developing software is getting to know your users, understanding what their pain points are, and just understanding what kind of people they are, according to Anderson. “The closer you can be to your users, the better product you are going to deliver,” he said. Users don’t care if developers are writing elegant code unless that code gives them an elegant thing they can appreciate, he explained. Developers have to understand what the user experience is from all perspectives, not just how it looks on the screen, but how a user interacts with it.
“A computer that the user can’t figure out how to use is a very expensive paperweight,” said David Platt, developer and professor at Harvard Extension School. “And that experience needs to be self-explanatory. A UI is like a joke: If you have to explain it, it isn’t very good.” According to XebiaLabs’ Phillips, developers need to integrate user behavior measurements into their solutions to see how many clicks a user might make on a certain page, what paths they take through the solution, and where they get stuck. “This allows everyone to get better insight into how or whether users actually use features that we add to our applications, and whether they use them as we intended them to be used,” he said.

Team members

Enterprise developers are no longer a one-man band. They have project managers, operations people, marketing executives and testers they need to work with every day. Ensuring that their relationships are healthy and communication is good between teams is vital to a happy and healthy developer, according to Code School’s Nicoll. “The people you work with will make or break your job satisfaction every day,” she said. It also helps to understand what other team members are doing in order to better interact with them and build better solutions. “There is a lot to be said about learning a little bit about what the other people on your team are doing,” said Amadeus’ Basso. “I have seen some pretty disastrous results in systems where people fail to acknowledge the other parts of the system that are being developed.” According to Basso, this makes developers better at their job because they understand how other parts work at an intimate level. “That makes them much more valuable as individuals designing the solution for the whole problem,” he said. Developers should also regard it as part of their job to pass their learning on, according to XebiaLabs’ Phillips. “Pass the knowledge on, mentor the people who come into the organization.
Don’t just show them how the technical systems work; pass on some of the knowledge about the business you are in,” he said.

The architecture

Developers should take on the role of an architect when building their applications, according to Bob German, principal architect at BlueMetal. “Architects are developers who take the big picture into account and try to balance all the tradeoffs and considerations,” he said. According to German, building an application is a lot like building a house. Architects have to look at how many people are going to live in the house, how much snow the house would have to hold in the worst possible winter, what plumbing needs to be in place, and any extensions that may be added later down the road. Developers have to look at the technology and how that technology is going to change over the lifetime of a product, the scalability that is going to be required, and features that are going to be added on later.

Fast iterations

While there are no silver bullets experts can offer in terms of methodologies, developers should make sure they are working in fast iterations in order to reach quick and valuable results, according to Sympraxis’ Anderson. “Everyone needs agile,” he said. “We need to understand sometimes we are going to go in the wrong direction, and we need to course-correct. We need to be able to react to the changes and realize the priorities for what we are actually building along the way.” Fast iterations with short cycles ensure developers can fail, but fail fast. “If you are going to fail, fail quickly, learn from it and restart,” said German. “That is one of the nice things about the agile methodology: If you break it down into short cycles, you don’t have a setback for more than that amount of time.
You aren’t going to find that you were going in the wrong direction after a year of development.” In addition to agile, developers should promote continuous testing and continuous deployment cycles, and release their solution every couple of weeks, even if it is just an internal release that doesn’t go out to end users, according to German. “The whole testing and release cycle every couple of weeks is extremely valuable to identifying any issues upfront, and to also be able to get feedback on your work continuously so that you can keep up with the velocity of change,” he said. It is also important that teams pick a development approach and stick with it. According to Amadeus’ Basso, mixing methodologies within one solution doesn’t work well. Different methodologies might make individual lives easier, but as a whole it is going to make developing the solution even more complicated, he explained.

Learning new things

There is no comfort in the industry; developers can’t just learn one thing and then sit around doing it for the rest of their lives, according to Anderson. Technology is changing drastically from year to year, and in order to keep up with the change, developers have to be continually learning. “If we want to keep good at what we do, we have to be actively seeking out new knowledge,” he said. According to Anderson, it is not the employer’s responsibility to provide training for developers. The really successful developers are the ones who don’t know everything, but know how to find out and learn more about things. “If you are going to get into the software business, you better be a person who likes to learn and wants to learn throughout their career,” said BlueMetal’s German.
“You are never done with learning and training in the software business.” With the Internet being so accessible today, developers can keep track of the evolution of technology by following blogs or joining user groups where they can ask questions and learn more about a technology from other people who have used it, according to Amadeus’ Basso. “You have to keep up with the change. One way is you have to self-educate, and then you have to experiment, try new things, and learn from them,” he said.

Types of developers

Not everything developers should care about is going to be of equal importance to every type of developer. According to German, the type of developer you are will determine what is important to you. Citizen developers are power users or people who are mainly business focused, but do a bit of development. “I love these people because they are the closest to the business and able to understand the dynamics of the actual usage of the software better than any other developer,” said German. Enterprise developers are building software for a company or enterprise. “They are likely to deal with the citizen developers, who are sort of their business power users,” said German. Independent software vendor developers are building packaged software to sell. “They have to pay attention to extra things because their software might be installed on hundreds of different kinds of devices, and they have to be careful not to do anything that will break it,” German said. Cloud developers are building services for the cloud, where they might have a million users hitting their software. “The level of scale they have to think about is beyond what any other developer has to think about, and security is more important because they are on the Internet and probably going to get more people trying to hack their software,” German said.
“As you go up that continuum, the amount of attention you have to pay to things like security and scalability gets even more important.” Basso believes that as developers progress in their careers, their area of concern expands as their skills advance. “Initially, you are solving a very specific business problem, say letting a user log in to a system. As you get more mature and your skills get better, your area of concern expands,” he said. “For example, instead of just logging in, you might say, ‘Well how are we going to detect fraud with this login? Should we be capturing the IP address or other information?’ And then you start saying, ‘Well it is not just a login, we have to put these other security measures in.’” Developers are really just solving the same problem, Basso said, but as they mature, they have to look wider and solve the larger problem. 2016-05-02 09:00 Christina Mulligan
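Phillips' suggestion earlier in the piece – integrating user behavior measurements to see clicks per page and the paths users take – can be sketched as a tiny in-app event log. This is an illustrative sketch only; the class, event names and fields are hypothetical and not tied to any specific analytics product:

```python
from collections import Counter
from datetime import datetime, timezone

class EventLog:
    """Minimal in-memory event log for click and path measurements."""

    def __init__(self):
        self.events = []

    def track(self, user_id, action, page):
        # Record one user interaction with a UTC timestamp.
        self.events.append({
            "user": user_id,
            "action": action,
            "page": page,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def clicks_per_page(self):
        # How many clicks each page receives -- Phillips' "how many clicks
        # a user might make on a certain page".
        return Counter(e["page"] for e in self.events if e["action"] == "click")

    def path(self, user_id):
        # The ordered sequence of pages one user moved through.
        return [e["page"] for e in self.events if e["user"] == user_id]

log = EventLog()
log.track("u1", "view", "/home")
log.track("u1", "click", "/home")
log.track("u1", "view", "/pricing")
print(log.clicks_per_page())  # clicks aggregated by page
print(log.path("u1"))         # this user's path through the app
```

A real system would batch events to a backend rather than hold them in memory, but even this much is enough to start answering "where do users get stuck" by looking for paths that stall on the same page.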

47 Bing iOS app update allows for searching via image

Microsoft’s search engine, Bing, has received an update to its iOS app which allows users to search via an image. iOS users who install the upgrade can search the web using photos taken with the app or images already on their device. The feature is not yet included in the Android or Windows Phone versions of the app. The addition enables images to be cropped, so that similar images can be found. The update also permits users to provide feedback on the results to improve the app’s functionality. Other additions in Version 6.5.1 include receiving a notification as soon as selected films become available on streaming platforms and the ability to access bus routes and schedules directly from the maps section of the app. Source: iTunes App Store via VentureBeat 2016-05-02 08:12 Matthew Sims

48 FBI Hacking Authority Expanded By Supreme Court

On Thursday, the US Supreme Court presented Congress with changes to the Rules of Criminal Procedure that will allow judges to issue warrants directed at electronic devices outside their jurisdiction. These changes vastly expand the government's surveillance and hacking power. Magistrate judges are mostly limited to authorizing searches and seizures in their jurisdiction. The changes to Rule 41, requested by the Department of Justice and endorsed by the Supreme Court justices, give judges the ability "to issue a warrant to use remote access to search electronic storage media and to seize or copy electronically stored information located within or outside that district" if the information sought has been "concealed through technological means" or the device has been damaged without authorization and is held in five or more districts. Compromised computers are considered "damaged" for the purpose of these rules. This definition allows investigators to infiltrate botnets anywhere in the world using a warrant. But the Center for Democracy and Technology (CDT) contends the rule is overly broad because about 30% of the world's computers could be considered "damaged" -- infected with malware -- and thus could be subject to Rule 41 searches. Among other objections to the changed rules, the CDT has argued that authorizing a search warrant for an unknown location violates the Fourth Amendment requirement that warrants should "[describe] the place to be searched, and the persons or things to be seized." David Bitkower, Principal Deputy Assistant Attorney General, has defended the constitutionality of the rule change and the utility of adopting the change as a way to deal with anonymization technology. In a Dec.
22, 2014 letter to Judge Reena Raggi, chair of the Advisory Committee on Criminal Rules, he wrote the issue is "whether [search warrants using certain remote search techniques] should as a practical matter be precluded in cases involving anonymizing technology due to lack of a clearly authorized venue to consider warrant applications." The government's use of Network Investigative Techniques, or hacking, as seen in the FBI's recent effort to break into an iPhone used by one of the San Bernardino shooters, has attracted the attention of lawmakers. In June last year, Senate Judiciary Committee chairman Chuck Grassley (R-IA) wrote a letter to FBI Director James Comey inquiring about government-authorized hacking. "Obviously, the use of such capabilities by the government can raise serious privacy concerns," he wrote, asking for details about FBI policies and procedures when the agency employs spyware. Among the questions he asked were which companies the FBI has impersonated when trying to install spyware through phishing, and whether the agency has informed those companies. [Read Email Privacy Act Wins Sweeping Approval in House.] US Sen. Ron Wyden (D-OR) issued a statement on Thursday asking members of Congress to reject the government's expanded hacking and surveillance powers. Such significant rule changes should be addressed by Congress, he said. "Under the proposed rules, the government would now be able to obtain a single warrant to access and search thousands or millions of computers at once; and the vast majority of the affected computers would belong to the victims, not the perpetrators, of a cybercrime," said Wyden. "These are complex issues involving privacy, digital security and our Fourth Amendment rights, which require thoughtful debate and public vetting." The rule change will take effect on Dec. 1, 2016 unless Congress takes action to alter the rules. Wyden said he plans to introduce legislation to reverse the rule amendments soon.
2016-05-02 08:06 Thomas Claburn

49 Windows Store Weekly: Facebook delivers on its promise for Windows 10

Windows Store Weekly is a weekly round-up of what's been going on in the world of Windows apps, from the most prominent and anticipated, to the bolted and patched, and the fresh and promising, while also scooping up leaks, both official and unofficial. Another week has passed, and – sure enough – there’s a lot to be excited about. From Facebook’s apps landing on Windows 10, to news of a potential way for Windows RT devices to run Windows 10 Mobile – and therefore Windows 10 apps – this edition of Windows Store Weekly is a blast, so let’s get started. If you are a Fast Ring Windows Insider, you have access to a new pre-release version of Groove Music that was made available a few days ago. The update brings a number of improvements to the user experience, from faster sign-in, to more reliable background playback, to accessibility improvements and better app telemetry. As is expected for anything in the Fast Ring, there are some issues that you should be aware of: you might have to right-click on a track in the Now Playing view in order to expose the usual controls, and the app might not recognize your subscription status (you have to restart the app until it does). Furthermore, if you’re running build 14332, you have to wait at least two minutes after every login on PC if you want Groove Music to work, and playing DRM-protected content won’t work if you previously accessed content from Groove Music, Movies & TV, Netflix, Amazon Instant Video or Hulu. If you’re using the OneNote app in Windows 10, you may have noticed that you can now sign in with your Office 365 organization ID. Not only that, but you can now record audio notes with OneNote on Windows phones, an overdue feature that many users will certainly appreciate.
For those of you who live in Russia and want to use the Yandex search engine on your Windows phone, the app has recently been updated with a number of tweaks that bring the UI more in line with the Windows 10 design guidelines, and the new home page will give you a quick overview of “the weather, the city’s current traffic status and today’s exchange rates”. Windows users may not have an official Reddit app in the Windows Store, but there are several third-party alternatives, and Readit is certainly one of the best available so far. An update (version 4.6.6.0) for it was released a few days ago, and it includes a number of improvements to the user experience. As always, there have been plenty of ‘placebo updates’ this week in the form of bug fixes and stability improvements for several apps. Perhaps the biggest news this week is that Windows 10 users were finally treated to official Facebook, Messenger, and Instagram apps. It’s worth noting that the company has used its own Osmeta tools to port the apps from iOS, and that Facebook and Messenger are only available on PC for now, with mobile versions slated to arrive later this year (Instagram is mobile-only). FOX Sports Go received the UWP treatment this week, and it has been well received so far – a good sign when you consider that the previous version of the app has an overwhelming number of negative reviews. The new app is ad-supported, features an improved UI and better video quality, and offers live coverage of NBA, UFC, NASCAR, and more – though NFL streaming is not available on phones. Some of you are probably familiar with Foobar2000, a free Win32 music player that hides advanced features under a minimalistic UI. The developer behind Foobar2000 is working on a Windows 10 app, and you can already access a “free preview version”.
It is pretty bare-bones at the moment, and the UI could certainly use some more work, but if you’re one of the backers of the foobar2000 Mobile project, you’ll be glad to know that one of the goals is finally becoming a reality. Another addition to the UWP app family is SendPro, an app that should make it easier for you to manage and monitor shipments from various providers such as the U.S. Postal Service, FedEx and UPS. Pitney Bowes (the company behind the service of the same name) calls it a “one-of-a-kind multi-carrier office shipping solution”, one that helps you choose the right carrier for your shipping needs. It also features Cortana integration, and is a Windows Store exclusive, which is good news for the UWP ecosystem. For those of you who want an app equivalent for Notepad, Nextpad is a new app in the Windows Store that aims to be a modern replacement. It does everything Notepad does, has support for “dark mode”, and uses speech recognition in case you want to dictate notes. If you’re interested in sidescrolling platformer games, you should know that Ori and the Blind Forest: The Definitive Edition is now available on Windows 10. This edition features “new areas, music, story elements, fast travel options, and new abilities”, as well as support for cross-save between the PC and Xbox One versions. The game costs $19.99, but if you already own the original game on Steam, you can upgrade to the Definitive Edition for only $4.99. It is a 3.6 GB download, so keep that in mind if you’re on a metered connection. Another game to land on the Windows Store this week is called Los Aliens, and is the latest release from Game Troopers, the makers of several successful games – such as Make it Rain: The Love of Money, Tiny Troopers, and Overkill. Kids will definitely enjoy this puzzle game, where they can take the Los Aliens crew on a deep space adventure. The game is free, but there are a number of in-app purchases that unlock additional features.
Lastly, the Out There: Ω Edition is now available for Windows PCs, tablets, and smartphones. Originally developed for iOS and Android by Mi-Clos Studio, the game has received critical acclaim for its blend of resource management and interactive storytelling, and Windows 10 users can now buy it for $4.99. There are several YouTube clients in the Windows Store, but a new arrival is quickly shaping up to be the best of them all. The app is called Explorer for YouTube, features a simple layout that follows the Windows 10 design guidelines, and offers pretty much everything you’d expect from a YouTube client. You can search for videos, playlists, and channels, sort by various filters, select your preferred video quality, cast to a bigger screen, or save videos for offline viewing – but there are no speed controls, and the app seems to crash on certain videos if you change quality without pausing the video first. The app is free for basic use, but “pro features” require a $2.80 in-app purchase – although there’s no indication of what those may be, and all the features mentioned above work uninterrupted in the free version. British Windows fans will be happy to know that UK mobile network operator giffgaff is currently working on a UWP app. The carrier spoke exclusively to Neowin, and revealed that Ian Morland – the developer behind the third-party “my giffgaff” app – was hired to work on the official app. You can check out some screenshots and read more about the new app here. Given the recent news of Microsoft updating Office Lens on iOS and Android, Windows users are eagerly waiting for an update to the Windows app – and it might not be long before we see one, as there are screenshots circulating on the web that show what appears to be a UWP Office Lens app. According to a report from WindowsBlogItalia, Microsoft might be working on bringing the Health app to PCs, along with a significant update to the mobile version.
Lastly, owners of the Surface RT might be in for a pleasant surprise. A developer over at XDA-Developers – who uses the nickname 'black_blob' – has revealed that he’s working on an unofficial Windows 10 Mobile ROM that could run on the Surface RT, thus offering a true upgrade to Windows 10 for the device, instead of the placebo offered by Microsoft in September last year. If this project proves successful, the developer wants to spread the love to other Windows RT devices, too. We’ll just have to wait and see, but this could mean that people who own a Windows RT device might soon be able to run UWP apps. We'd like to know: What apps do you need that are not yet available on Windows 10? Do you own a Windows RT device? Sound off in the comments section below. This isn't everything that happened in the world of tech this week, so if you're looking for the big picture, our 7 Days feature will paint it for you. There is also plenty of discussion brewing in the forums on a wide range of topics, so head over there and join the buzz. 2016-05-02 07:56 Adrian Potoroaca

Total 49 articles. Created at 2016-05-03 06:03