Weekly Wireless Report WEEK ENDING November 21, 2014
INSIDE THIS ISSUE:

THIS WEEK’S STORIES
Data Caps Don’t Work: Thoughts For The Future Of Cellular Network Congestion
Firefox Drops Google For Yahoo Search

PRODUCTS & SERVICES
This App Can Tell Which Brew Is Right For You
Snapchat Teams Up With Square To Offer Money-Transfer Feature

EMERGING TECHNOLOGY
Dropbox Carousel Comes To iPad And Web Today, Android Tablets Soon
IBM Verse: Can It Trump Google Inbox?

MERGERS & ACQUISITIONS
Former BlackBerry CEO Heins Named Chairman Of Startup Powermat

INDUSTRY REPORTS
Sprint Makes Dramatic Jump In LTE Coverage In Q3, With T-Mobile Not Far Behind
Will Apple Soon Be Worth $1 Trillion?

THIS WEEK’S STORIES

Data Caps Don’t Work: Thoughts For The Future Of Cellular Network Congestion
November 20, 2014

We are reading, posting, watching, and streaming more than ever before, but cellular network data has not kept pace—in fact, it’s gone backwards (as the 44 percent of AT&T customers clinging to their grandfathered unlimited plans can readily attest). While our need for and use of cellular network data has continued to increase, carriers have opted to manage usage and congestion through data caps, which can take a variety of forms depending on the carrier. For the most part, data caps as they stand now are not about alleviating network congestion as carriers claim; they’re about profit. Carriers know this, consumers know this... heck, even the head of the FCC knows this, judging from FCC Chairman Tom Wheeler’s July 2014 letter to Verizon CEO Dan Mead: “It is disturbing to me that Verizon Wireless would base its ‘network management’ on distinctions among its customers’ data plans, rather than on network architecture or technology.”

Yes, in some cases data caps can help prevent the most excessive overuse of a limited resource, such as the top 1 percent of mobile data subscribers who generate 10 percent of mobile data traffic. Network congestion itself is very real, and with estimates forecasting 15.9 exabytes per month of global data traffic in 2018 (that’s nearly 16 billion GB a month, or more than 6,000 GB used every second, for those playing along at home), it’s an issue that’s only going to grow more important. This alone is why crude approaches to network management like data caps need to change, and quickly. How we use, how we view, and how we’re delivered data are all rapidly changing; monitoring and measurement need to advance at the same pace.

In the wake of Sprint’s “double the high-speed data” promotion (and the subsequent responses of AT&T doubling its data, Verizon doubling its data, and Sprint doubling its data again), traditional views on the price of cellular data and the need for data caps are shifting. “Data” in the abstract seems more arbitrary than ever. If data caps were about managing congestion, how were they all able to increase at once? Did network capacity magically double overnight?

Leading rhetorical questions aside, it’s clear that data caps are the wrong tool for managing congestion, but what’s interesting is that carriers don’t even have the right tools to begin with. In the ideal scenario, network traffic would be evaluated in real time at individual cell sites, and only the users hogging bandwidth at that exact moment would be slowed down. But that is not how carriers handle it.
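To make that ideal concrete, here is a minimal sketch in Python of what a momentary, per-site policy could look like. Everything in it (the `UserFlow` record, the 90 percent congestion ratio, the guess that throttling halves a flow) is an illustrative assumption, not any carrier’s actual implementation.

```python
# Toy model of "only slow down users hogging bandwidth right now".
# All names and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class UserFlow:
    user_id: str
    current_mbps: float  # throughput measured over the last few seconds


def users_to_deprioritize(flows: list[UserFlow],
                          site_capacity_mbps: float,
                          congestion_ratio: float = 0.9) -> list[str]:
    """Pick users to slow down, but only while the site is congested."""
    demand = sum(f.current_mbps for f in flows)
    if demand < congestion_ratio * site_capacity_mbps:
        return []  # no congestion at this moment, so nobody is throttled

    # Under congestion, target the heaviest flows right now, heaviest first.
    excess = demand - congestion_ratio * site_capacity_mbps
    slowed = []
    for flow in sorted(flows, key=lambda f: f.current_mbps, reverse=True):
        if excess <= 0:
            break
        slowed.append(flow.user_id)
        excess -= flow.current_mbps / 2  # assume throttling halves the flow

    return slowed


# Example: one heavy streamer and two light users on a 50 Mbps site.
site = [UserFlow("a", 40.0), UserFlow("b", 5.0), UserFlow("c", 1.0)]
print(users_to_deprioritize(site, site_capacity_mbps=50.0))  # ['a']
```

Note that monthly usage never appears anywhere in this policy; the decision depends only on what each user is doing at this instant, which is exactly the distinction being drawn here.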
Verizon, AT&T, T-Mobile, and Sprint claim they only throttle the top 3–5 percent of users (as measured by monthly data consumption), and once you’ve gone over your monthly cap you’re at risk of being throttled if you enter a congested area. While this strategy is great for profits, it’s a fundamentally broken way to handle a network. A cellular network doesn’t care about customers’ past usage patterns or how much data they’ve already used that month; it cares about how many people are trying to access how much data from it right now.

Even the carriers’ seldom-used but seemingly more technical approaches of peak-time management, concurrent-user thresholds, and bandwidth thresholds aren’t up to the task of properly handling network congestion. The underlying problem in all three is that they attempt to handle congestion through educated guesses based on network proxies rather than by actually monitoring the network itself. Peak-time management guesses by time of day; concurrent-user thresholds count the number of people on the network, not how much they’re actually using; and bandwidth thresholds operate under the misguided assumption that link/resource capacity has a fixed maximum, which results in guessed congestion threshold levels (e.g., “apply management when traffic on this link exceeds 72 Mbps”) rather than dynamic adjustment to the link’s capacity in that moment (e.g., “apply management when this link exceeds 90 percent of its current capacity”).

The main problem with proper and effective cellular network congestion management is not awareness—carriers understand these methods aren’t the best tools for the job—it’s capability. Legacy solutions were prohibitively expensive or time-consuming, or didn’t work at scale, and so carriers fell back on the cheaper and easier methods outlined above. To a degree, those techniques worked back when there was less cellular network traffic and consumers didn’t know what to expect from carriers, but the volume of data and our demands on the network are changing. It’s time the solutions for managing that data changed as well.

Fortunately, monitoring technology has not stood still, and today’s cutting-edge solutions are better placed to support a more rational approach to congestion management. The key need is for granular monitoring of dynamic demand patterns from users, and of congestion conditions within the network itself. These requirements couldn’t be met in the days when operators had to rely on coarse-grained observations of total traffic load. But today’s technology enables real-time monitoring of data usage on a per-user basis, at timescales down to seconds or below. Solutions are also available for accurate real-time congestion measurement, for example by tracking user data as it moves across and between networks and detecting long transit times or inadvertent drops at overloaded bottlenecks. Monitoring systems can even detect particular patterns of usage that are more likely to contribute to congestion than others – a bit like spotting slow “platoons” of cars on the freeway that hold up other drivers. Put together, these techniques give operators more than sufficient visibility to dynamically detect congestion conditions and react intelligently in a manner that correctly accounts for actual user activity.
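The difference between the two quoted threshold rules above (“exceeds 72 Mbps” versus “exceeds 90 percent of its current capacity”) is easy to show in code. The sketch below, again in Python, uses a hypothetical `LinkSample` record and made-up limits for transit time and drop rate; none of it is a real vendor API.

```python
# Static vs. dynamic congestion checks for a single link.
# LinkSample and every threshold below are hypothetical, for illustration.
from dataclasses import dataclass

@dataclass
class LinkSample:
    traffic_mbps: float       # traffic currently carried by the link
    capacity_mbps: float      # what the link can deliver *right now*
    median_transit_ms: float  # observed transit time across the link
    drop_rate: float          # fraction of packets dropped


def congested_static(s: LinkSample, limit_mbps: float = 72.0) -> bool:
    # "Apply management when traffic on this link exceeds 72 Mbps":
    # a guessed fixed ceiling, blind to what the link can do today.
    return s.traffic_mbps > limit_mbps


def congested_dynamic(s: LinkSample,
                      utilization_limit: float = 0.90,
                      transit_limit_ms: float = 50.0,
                      drop_limit: float = 0.01) -> bool:
    # "Apply management when this link exceeds 90 percent of its current
    # capacity", confirmed by symptoms of real congestion: long transit
    # times or packet drops at the bottleneck.
    utilization = s.traffic_mbps / s.capacity_mbps
    suffering = (s.median_transit_ms > transit_limit_ms
                 or s.drop_rate > drop_limit)
    return utilization > utilization_limit and suffering


# A link carrying 60 Mbps whose radio conditions have cut capacity to 65 Mbps:
sample = LinkSample(60.0, 65.0, 80.0, 0.02)
print(congested_static(sample))   # False: under 72 Mbps, so "fine"
print(congested_dynamic(sample))  # True: ~92% utilized and visibly suffering
```

The fixed rule fails in the other direction too: a link upgraded to 150 Mbps of capacity would be “managed” at 73 Mbps of traffic even though it is barely half full.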
The days when lack of monitoring capability could be offered as an excuse for clumsy congestion management are drawing to a close. While cellular network congestion can currently be semi-contained with the sledgehammer approach of data caps, the number of people in the world with connected devices will only continue to grow (to the tune of 500 million more smartphones sold globally in 2018 than now), and with it the need for more refined methods of network monitoring. When the number of drivers on the road increased, we didn’t say that after driving 100 miles per month you’re capped at school-zone speeds or must pay three times the normal price for gas. Instead, we adjusted our transportation infrastructure, we incorporated technology like live-updating and dynamically priced toll roads, and we worked toward more opportunities and innovation in public transit. It’s time to take a similar approach with network congestion. The solution of tomorrow needs to really and truly monitor network congestion (rather than using byproducts of congestion as proxies, or estimates based on past data) and, more importantly, it needs to do this in real time.

wirelessweek.com

Firefox Drops Google For Yahoo Search
November 20, 2014

Mozilla takes up with new Firefox search partners that it believes are better aligned with its values of choice and independence.

Mozilla has bitten the hand that fed it. After a 10-year partnership, Google Search next year will no longer be the default search engine in Mozilla’s Firefox browser.

Mozilla, which develops open-source software for the benefit of the public, has struck a five-year deal with Yahoo to make Yahoo Search the default search engine in Firefox. The deal, for an undisclosed sum, comes as Mozilla’s three-year deal with Google is about to expire. That arrangement, which made Google the default search engine in Firefox globally, is said to have been worth $300 million annually. Mozilla’s deal with Yahoo, however, covers only Firefox in the US.

Mozilla CEO Chris Beard said the company is ending its practice of having a single global default search provider. “Our new search strategy doubles down on our commitment to make Firefox a browser for everyone,” said Beard in a blog post. “We believe it will empower more people, in more places with more choice and opportunity to innovate and ultimately put even more people in control over their lives online.”

Mozilla’s declaration of independence comes six years after Google began competing with Firefox through its Chrome browser, which this year displaced Firefox as the second most popular desktop browser globally, according to NetApplications.