Articles

35 articles, 2016-07-30 18:01

1 First Windows 10 Redstone 2 Build Shows Up Online, Public Release in August

The Anniversary Update is also referred to as Redstone 1, so the second Redstone update is what comes next, with sources claiming that the public debut will take place in the spring.

Microsoft, however, is ready to start work on it in August, and the first builds are likely to be released to insiders soon after the debut of the Anniversary Update.

And according to BuildFeed, the first Windows 10 Redstone 2 build, 14894, was already compiled on July 27, and Microsoft is testing it internally. For the moment, however, nobody can tell for sure whether the exact same build will be released to insiders too, but the chances are that it won't, as Microsoft compiles new builds daily, and a release candidate will only come later, when we get closer to the launch date.

What this build actually does is confirm that work on Redstone 2 has already started and that Insiders will soon receive new goodies to try out on their PCs and mobile devices.

Of course, it's premature to discuss features and improvements that will be part of the first Windows 10 Redstone 2 builds, but given that these are just the first versions, don't expect any breaking changes to be released so early. Usually, new features are released throughout the development process, and engineers are now just trying to lay the foundation to make this possible in the next builds.

In the meantime, the full focus is on the Anniversary Update rollout due on August 2, but it’s good to see that Redstone 2 builds are already there in the pipeline, and insiders will get them in just a few weeks. 2016-07-30 09:14 Bogdan Popa

2 On5 (2016) Passes Through the FCC

The Galaxy On5 (2016) was recently spotted at the US Federal Communications Commission (FCC), which means that it could be launched sooner rather than later. The device was also recently spotted on Zauba and on Geekbench, where some of its specs were revealed.

The data from Zauba shows that Samsung imported several units of the SM-G5700 into India for testing and evaluation. The Galaxy On5 has the model number SM-G550, which makes it very likely that the SM-G570 is the Galaxy On5 (2016).

The Galaxy On5 (2016) will most likely come with Android 6.0.1, as well as 2GB of RAM, and it sports an Exynos 7570 chipset. It could also run a 1.4GHz quad-core CPU, and it might come with a 5-inch screen.

The Galaxy On5 (2016) will most likely be more advanced than its predecessor, the Galaxy On5, released last year. That smartphone had a 5-inch display with 720p resolution and a 1.3GHz quad-core Exynos 3475 processor with 1.5GB of RAM.

In addition, it came with just 8GB of internal storage and the option to expand memory up to 128GB with the help of a microSD card. The smartphone also came with an 8MP rear camera with autofocus, LED flash and video recording. It had a 5MP front camera and ran Android 5.1.

The Galaxy On5 (2015) was unveiled together with the Galaxy On7 (2015), and it is believed that upgraded versions of the two will also be released together. 2016-07-30 12:58 Alexandra Vaidos

3 EU telecom policy could introduce Universal Service Obligation for broadband

In a change to universal service rules in the EU, broadband internet access could end up being legally guaranteed by national governments who would be forced to foot the bill.

The third section of a provisional proposal for the revision of the telecoms framework, obtained by EurActiv, specifically covers the concept of universal service.

Interestingly, the leaked proposal made no statements regarding minimum required connection speeds. However, in another leaked document obtained by EurActiv, the European Commission set numerous strategic objectives, including one targeted for delivery in 2025, which stated:

At present, only a handful of countries in the EU, such as Finland and Spain, have a universal service obligation in place for broadband internet services. However, while still an EU member for at least the next few years, the UK must continue to seriously assess any new proposals put forth.

While the UK may not be obligated to implement measures scheduled to come into effect after an anticipated exit from the EU, they would at least provide a benchmark against which domestic policy could be compared. In particular, the UK's Digital Economy Bill, which would legislate a 10Mbps USO for broadband, could end up looking paltry in comparison to measures ultimately introduced by the EU.

Source: EurActiv via ISPreview.co.uk 2016-07-30 12:40 Boyd Chan

4 Honor Note 8 Leaks in Images Just Days Before Its Unveiling

The latest information shows that press renders of the upcoming Honor Note 8 surfaced online over at Android Pure. The images were leaked on Chinese social network Weibo, and they show that the device will come with a fingerprint scanner and rounded edges. The display seems to be bezel-less and curved at the top. In addition, the leak confirmed that the device would come with a 6.6-inch 2K display with a density of 443 ppi. The display is Super AMOLED with a 105% NTSC color gamut and a 70000:1 contrast ratio, according to the post.

Recent rumors point to the fact that the phablet could run Huawei's Kirin 955, an octa-core processor, coupled with 4GB of RAM. In addition, it could come with 64GB of internal storage, possibly with the option to expand storage using a microSD card. The rear camera could reach 13MP, with an f/2.0 aperture, optical image stabilization and an RGBW sensor.

The phablet could also have an 8MP camera on the front, and it will run Android Marshmallow 6.0. The device is also said to come with a USB Type-C port, support for 4G LTE and Wi-Fi connectivity, and a full metal body. The Honor Note 8 will measure 178.8×90.9×7.18mm and weigh 219 grams.

The Honor Note 8 is also said to come with a 4,500mAh battery, and the company might make it available in three colors: white, black and gold. The GPU will be the Mali-T880. Rumors say that the phablet could be sold for a price starting at $300, and it is said to be unveiled on August 1. 2016-07-30 10:24 Alexandra Vaidos

5 Lenovo Vibe P2 With 5.5-Inch 1080p AMOLED Display Leaks in Images

Two live images of the Lenovo Vibe P2 have leaked online, showing the smartphone's design, according to GSMArena. The leak also comes with some details on the phone's specs, specifically that it will come with a 5.5-inch AMOLED display with 1080p resolution.

In addition, the size and resolution seem to be the same as for its predecessor, the Vibe P1, with the exception that the Vibe P2 might come with an AMOLED display instead of an LCD panel. The Vibe P2 will be made out of metal and will have 2.5D curved glass on top of the display.

The fingerprint sensor is located inside the physical home button and the smartphone could come with a 5,000mAh battery, same as the previous model in the series.

The phone's dimensions were also revealed: the Vibe P2 could be 8.5mm thin, 1.4mm thinner than its predecessor, even though battery capacity is the same.

Previous rumors point to the fact that Lenovo Vibe P2 could come with Qualcomm's Snapdragon 625 SoC with a 2GHz octa-core Cortex-A53 CPU and 4GB of RAM. It will run Android Marshmallow 6.0.1.

Lenovo Vibe P1 was announced a year ago and it came with a 64-bit octa-core Qualcomm Snapdragon 615 processor, coupled with 2GB of RAM and 32GB of internal memory, which could be expanded to 128GB using a microSD card.

The 13MP rear camera came with autofocus, LED flash and video recording, while the front camera was 5MP. In addition, the smartphone came with Android Lollipop 5.1 and featured a metal body. It also had Corning Gorilla Glass 3 coating on its display. 2016-07-30 09:40 Alexandra Vaidos

6 The mega 'phablet' reviewed

Xiaomi launched the Mi Max in its home country of China back in May, but it has only now started making the device available in other markets such as India and Taiwan. The large-screen 'phablet' was promoted heavily by the company on its Chinese social media accounts and there was plenty of hype, since the device wasn't part of the company's other product lineups such as Redmi, Redmi Note or Mi Note.

The device is priced from Rs. 14,999 (~$222) onwards in India, which is a very aggressive price for this feature-packed handset that is set to compete with the likes of the Moto G4, G4 Plus and the newly launched LeEco Le 2 in the country. Although those phones are much smaller than the Mi Max, they fall in the same price range. Now that the device is finally with us, let's find out whether it's worth the hype or whether it's just a large phone without any real advantage.

The Xiaomi Mi Max is a completely different beast, even in the 'phablet' space, as it features a mammoth-sized 6.44-inch screen. At that size, it is closer to a 7-inch tablet than to some of the larger smartphones available today from Motorola or Huawei, or the Nokia Lumia 1520 from the recent past. The screen size isn't for everyone, but there definitely seems to be a demand for such a form factor, as the company has reportedly shipped over 1.5 million units of the Mi Max since its launch.

Xiaomi isn't known to provide a lot inside the box. The company relies on minimalist white colored packaging with just the bare minimum set of accessories, and the Mi Max is no different. It comes with the standard AC adapter and a micro-USB cable for charging and data transfer.

The company hasn't included a screen protector or a silicone cover, which are two accessories that we've come to expect from Chinese manufacturers that ship out devices to foreign countries. It was a bit disappointing to have not received a set of earphones with the device either, since Xiaomi makes some good ones at a low price point and I was hoping to try out a pair.

The Mi Max features a metal unibody and it is almost seamless, except for the plastic construction around the camera module and at the base of the device on the rear side for repair purposes. The device feels very well engineered and I liked its industrial design a lot. Along the sides, the chamfered edges felt just right and Xiaomi hasn't gone overboard with them.

Although the device is very large, it fits in the hand comfortably owing to its slim profile and thin bezels. Xiaomi has done a great job in keeping the bezels to a minimum, making the smartphone much more pocketable than I had imagined.

Every bit of the device has a premium look and feel, right from the volume rocker and the power button to the loudspeaker grille at the bottom. The attention to detail and symmetry is also great for a smartphone at its price point. The front of the device is completely covered by the 2.5D curved glass, which houses the three capacitive keys at the bottom, the 5-megapixel camera at the top along with a notification light, and proximity and ambient light sensors.

On the rear side, the Mi Max features the 16-megapixel rear camera with dual-tone LED flash, a fingerprint sensor and the Mi branding. At the top, Xiaomi has included a 3.5mm headphone jack and an IR Blaster, which works with the company's Mi Remote app to control TVs, set-top boxes, ACs and many other supported appliances. Overall, the device has a very clean look and is pleasing to the eye.

The company hasn't used a USB Type-C port in the Mi Max, though its flagship smartphone, the Mi 5, features it. Since the standard isn't so widespread yet, it is probably not a bad decision by Xiaomi to stick to the older standard in this mid-range device. In the case of the Mi Max, the display is the most important aspect of the device for obvious reasons. The 6.44-inch IPS LCD screen with 1920x1080 resolution isn't the most pixel-dense display in comparison to other phones. However, at 342 pixels per inch, it is still a decent offering for its price, and the overall quality of the display is good. There isn't much to complain about regarding the color reproduction or the contrast ratio of the IPS panel used by Xiaomi.

The smartphone doesn't offer the punchy colors that most AMOLED screens or higher-end LCD screens do, but users can tweak the contrast settings to get the most out of the display and it seems like a good addition for those who wish to have brighter colors. Other manufacturers should consider making such an option standard in the case of entry-level devices, which often tend to have inconsistent display behaviour.

The Mi Max has good outdoor legibility even at a medium brightness setting, but in direct sunlight it needs to be bumped up to the full setting for complete visibility. Since the device comes with Xiaomi's MIUI ROM, it has plenty of customization options for almost every setting, and the display is no different. Users can use the "Reading mode" feature for a comfortable experience while reading for long hours. It increases the color warmth of the screen in user-designated apps, and automatically switches back to normal when the app is closed.

Unlike the Galaxy Note series or LG G3/G4 Stylus smartphones, the Mi Max doesn't have a pen or stylus for input, which I thought was a lost opportunity. However, there is no denying that note taking, replying to long emails or editing documents on the go can be carried out effortlessly on such a large screen, even without any additional input methods. As for media consumption, the Mi Max shines with its display.

Xiaomi has thrown in support for many 2G, 3G and 4G LTE radios in the Mi Max, but it isn't comparable to the likes of premium smartphones such as the iPhone or the Galaxy S7. The network reception in India was top notch pretty much everywhere I tested it. However, I could not try out 4G LTE due to its absence on Vodafone in my city. Users in the US and some European countries will need to make sure that the 3G/4G LTE bands for their networks are supported if they wish to purchase a Mi Max. The call quality of the Mi Max was decent, as with most modern smartphones; in addition, the good reception ensured that there was no crackling or interference. The noise cancelation also worked fine according to callers on the other side.

The loudspeaker's clarity for speech as well as music is quite good and complements the device's large screen while viewing movies or YouTube. It is one of the louder speakers I have heard and, to my surprise, it held up well even at the highest volume. There was no noticeable crackling or distortion, though added bass would have made it a perfect loudspeaker.

In addition to the audio hardware, MIUI provides software options such as an equalizer and the Mi Sound Enhancer for improving the audio output while using headphones or speakers. It also supports profiles for various headphone types and specific Mi models, which can be beneficial for those who own them. I didn't find the enhancement very helpful as it just seemed to boost the treble and introduce a bit of surround, but the equalizer is something that I liked to use.

Both the front and rear cameras on the Mi Max might seem decent on paper, but are they really any good? Let's find out with a bunch of samples. First off, there's the 16-megapixel rear camera with an f/2.0 five-element lens and dual-tone LED flash. The snapper is quite fast; it features phase-detect autofocus and can lock in on objects within seconds. Some of the other features of the camera include continuous autofocus, digital image stabilization, face detection, and a manual mode with exposure and ISO settings. The camera software has a lot to offer, which some users may enjoy, but I prefer having a good auto mode.

The image quality from the rear camera was generally good in daylight and other well-lit areas, but low light scenarios resulted in softer images with a bit of artifacting around the edges. In most cases the color reproduction was very good and plenty of details were preserved. Dynamic range performance probably could have been a bit better in auto mode, since most smartphone photos are taken without much tinkering with the settings.

Under lights at night, the image quality suffers the most, but images aren't completely useless, since the only issue is darker reproduction, not loss of detail. Video quality, however, isn't half as good in the same situations.

The camera supports 1080p video recording at 30 fps. As you can see from the sample, the video is much darker than the photos taken at the same place without any change in actual light. Also, the digital image stabilizer isn't that good, as even a little bit of movement shakes things up a lot.

The front camera features an 85-degree wide-angle lens and manages to capture a big part of the surroundings in the images. It also produces soft images as light decreases; even without a beauty filter, the software processing was found to be smoothing out faces. The color reproduction was still good enough, and once again daylight images were the camera's stronger point.

Overall, I felt that the camera hardware was a bit lacking in low lighting conditions, and the daylight performance could use a bit of improved software processing.

Xiaomi phones do not come with stock Android, and never have. The company uses a custom ROM called MIUI on its devices. Over the last six years, MIUI has evolved a lot and is a very capable layer built on top of Android. The Mi Max that was provided to me came with MIUI 7.3; however, newer units of the smartphone are said to be loaded with MIUI 8. If you dislike anything other than stock Android, this smartphone is definitely not for you, as there is absolutely no trace of stock Android likeness across the entire OS.

However, not having stock Android isn't a bad thing in the case of MIUI, which is an immensely polished shell and provides plenty of customization options compared to stock Android. MIUI has its own set of apps such as Calculator, Gallery, File Manager, Music and Video, which are better suited to MIUI than Google's apps in terms of features, look and feel, and overall integration with the UI layer.

The only Google app that our review unit came with was the Play Store, which also has a Mi Store alternative for Chinese users. Getting the device set up for first use isn't a lot different than other Android phones. The homescreen is where the first major noticeable changes from stock Android become visible. Like most Chinese smartphone OSes based on Android, MIUI does not feature an app drawer. It may look a bit like iOS at first, and some may argue that the shell is heavily inspired by Apple's offering, but there are plenty of differences that become noticeable once you get to use MIUI.

Xiaomi's design language for MIUI is very consistent across all of its apps, which I don't think Google itself has managed on stock Android. It felt rather fresh to use the UI, which is completely different from other Android layers such as TouchWiz or the Optimus UI, while retaining the openness and core capabilities of Android. In my time with the device, I did not have any issues getting used to the new user experience or the built-in apps. You can check out the gallery below to get an idea of MIUI.

Other than the UI, some of MIUI's features seem to be better than stock Android. One such example is the centrally located System apps setting, which enables configuring all the pre-loaded apps from a single place. It is a bit like iOS; however, third-party apps aren't included here. The OS also provides a built-in cache cleaner and task manager, which is quite easy to use. Given the device's large size, Xiaomi has also included a single-handed mode that can be enabled by swiping from the home button towards the left or right, depending on the hand you wish to use. I didn't particularly like this implementation, as the rest of the screen was left blank, but it certainly was useful.

The Xiaomi Mi Max comes in three variants based on CPU, RAM and storage combinations, which are as follows:

I tried out the lowest-spec version of these, which features a hexa-core Snapdragon 650 processor with two Cortex-A72 cores clocked at 1.8 GHz and four Cortex-A53 cores clocked at 1.2 GHz, and 3 GB of RAM. The mid-range chip proved more than capable of handling any task thrown at it, and even heavy multi-tasking wasn't an issue owing to the ample amount of RAM on the device.

In benchmarks, the device fared decently and scored more points than some of the flagship smartphones from last year. In day-to-day use, there was absolutely no slowing down in the Mi Max's performance and there was no lag or sluggishness while using any apps. Casual gaming was also a breeze on the smartphone.

The smartphone's fingerprint sensor is also quite fast, and I managed to unlock the device nine times out of ten within a fraction of a second. The sensor's performance also depends on the actual stored data; in the case of one fingerprint, I had only managed to cover a little bit of my finger, which resulted in failed unlock attempts. Users can store up to five fingerprints in the Mi Max, and the process is much like other implementations of the sensor.

With regards to performance, there's absolutely no complaint against this device and it would be hard to recommend an equivalent performer in its price range. In fact, the only other smartphone that might be comparable is the Redmi Note 3, which features the same hardware specifications in a smaller form factor.

Throughout my testing, I didn't find myself in a situation where I needed to get hold of a charger, even if the battery had dropped below 20 percent. I did have an issue with the battery reporting system of MIUI, as the graphical representation of the usage wasn't as detailed as stock Android. Xiaomi has built its own "Battery usage" report, which separates applications and hardware.

The OS provides a lot of customization options for battery savers and scheduling. Users can set up profiles based on their priorities, apps and required sensors, and schedule the time for enabling or disabling them. I managed to use the Mi Max for 4 hours on a 15% charge with the basic battery saver profile. Turning off more functions could give an additional hour of usage.

Charging the smartphone from zero to 100 percent with the bundled charger took up to 2.5 hours. However, since the device supports Qualcomm's Quick Charge 3.0, users can top off the battery in much less time with a compatible charger.

It's really hard to not recommend the Xiaomi Mi Max, as it brings excellent performance at a relatively low price point, while featuring a premium build and a clean design. The only major issue with the device is its video capture capability and low light imagery. In its price range offerings of between $200-$300, the only other options that can be called comparable are the Moto G4 duo, but the higher end variant of the Mi Max is definitely in a league of its own with more RAM and the octa-core chipset.

For someone looking for a large-screen smartphone, there currently aren't many options, as the Lenovo Phab 2 Pro and Huawei's upcoming Honor Note 8 have yet to launch and are expected to be costlier than the Mi Max. Sony's Xperia XA Ultra is one large-screened smartphone that is currently available, but it features lower specifications and is priced at Rs. 29,990 (~$440) in India. 2016-07-30 09:26 Shreyas Gandhe

7 Microsoft Releases Update for the Best Email Client on iPhones

Because so many people are using it, Microsoft keeps improving Outlook for iOS on a frequent basis, and recently the company rolled out version 2.4.2.

The new version of Microsoft Outlook comes with better event management, so it doesn't necessarily change the way you receive emails, but rather the calendar feature. Microsoft says that, after the update, you can create new events more easily than before because you can simply start typing the first letters of your desired location, and suggestions will be offered to complete the field quicker.

Furthermore, you'll be able to open the Maps app and see where you're supposed to head for a meeting, all automatically, without having to look these things up manually.

“On top of that, we’ll display the location in a map in your event's details so you have a better idea of exactly where you’re supposed to go. Not sure how to get there? Tap on the map to jump into your favourite Maps app and get directions,” Microsoft explains.

As usual, the new version is available in the App Store and can be downloaded on iPhones and iPads running iOS version 8 or later. 2016-07-30 09:24 Bogdan Popa

8 Learning Tools for OneNote enters general availability, supports more languages

Since its initial release back in 2003, OneNote has gained popularity with some of it attributable to Microsoft making it completely free on both PC and Mac in addition to the free apps for mobile devices. While OneNote is best known for storing and organizing notes, it can now also help everyone, including gifted learners and those with learning differences, improve their reading and writing skills.

After being launched as a preview six months ago, 'Learning Tools for OneNote' is now generally available in English. This version of the add-in includes a number of additions and refinements.

These improvements come in addition to the 'Immersive Reader' feature already present in the preview which could break words down into syllables and highlight nouns, verbs and adjectives. The add-in still includes a reading comprehension mode which highlights verbs and their dependent sub-clauses.

Learning Tools for OneNote is compatible with both OneNote 2013 and OneNote 2016 with the add-in available here. Otherwise, those who already have the preview of Learning Tools for OneNote installed can simply click the update button in the Learning Tools tab from within OneNote.

Source: Microsoft Office Blog 2016-07-30 09:10 Boyd Chan

9 Internal Build Rolled Out to Nexus 6P User

A post on Reddit (via 9to5Google) reveals that user brianmoyano removed his Nexus 6P smartphone from the Android Beta Program because of battery issues. This would have meant that the user was set to receive an update rolling the device back to Marshmallow.

Instead, he received a confidential internal OTA with an updated build of Android Nougat 7.0. The Android Nougat Developer Preview 5 had the version number NPD90G, and this internal version is NRD90M. The user took screenshots which reveal that the OTA was confidential and internal only. The images also show that New York Cheesecake was the working codename for Nougat. The internal build had a size of 49.5MB, and the user didn't see many major changes since it was a smaller update.

The Android N description that accompanies the OTA update was also slightly different compared to the beta program, which signals that Google might be releasing the public version soon.

Earlier today, we reported that Google will be rolling out Android Nougat 7.0 next month to supported Nexus devices, including the Nexus 6, Nexus 6P and Nexus 9, as well as General Mobile 4G devices. Two new Nexus smartphones, Sailfish and Marlin, are also expected to be released in the same period. 2016-07-30 09:07 Alexandra Vaidos

10 PlayStation VR brochure suggests you'll need lots of room to play

One of the major problems with gaming platforms that involve more interaction than just a controller is that they need plenty of room to use. Take the Nintendo Wii, for example: you need quite a lot of space for two people to compete against each other in many games. It turns out the PlayStation VR system will also need plenty of space to use, 3 metres by 1.9 metres to be exact.

An image was posted online recently showing the official space requirements for the PlayStation VR; it originated from one of Sony's advertising pamphlets. The image, which displays the measurements of the space needed, is accompanied by explanatory text.

The pamphlet also provides some more information about the PS VR. It mentions that the headset will be able to be worn over glasses and that, when you are using PS VR, others will be able to see your view as a 2D image on the TV screen via Social Screen.

The device was slated for an October 13 release back at E3, which was held in June. The device will retail for $399 and is expected to have 50 games before the end of the year. Source: WCCF Tech | Image via Imgur 2016-07-30 08:54 Paul Hill

11 Microsoft Continues Windows Phone Cuts, Lays Off 2,850 More Employees

Microsoft revealed in May that 1,850 people would be let go from its phone unit, but according to The Register, an additional 2,850 smartphone and sales workers are also being laid off.

The job cuts were announced by Microsoft in a 10-K report to the SEC, in which it confirmed that the new round of layoffs is expected to be completed by the end of June 2017.

“In addition to the elimination of 1,850 positions that were announced in May 2016, approximately 2,850 roles globally will be reduced during the year as an extension of the earlier plan, and these actions are expected to be completed by the end of fiscal year 2017.”

“As of June 30, 2016, we employed approximately 114,000 people on a full-time basis, 63,000 in the U.S. and 51,000 internationally. Of the total employed people, 38,000 were in operations, including manufacturing, distribution, product support, and consulting services; 37,000 in product research and development; 29,000 in sales and marketing; and 10,000 in general and administration.”

The aforementioned source goes on to reveal that Microsoft started the new round of layoffs only a few days after holding a dedicated event for employees in Orlando, Florida, where Justin Timberlake was the special guest to deliver an on-stage performance just for the Softies.

Leaving this little detail aside, it’s very clear that this new round of layoffs shows that Microsoft is still looking for a more effective way to continue work on Windows phones, especially because Redmond is believed not to be planning any new Lumia devices anymore. The only model that could be launched by Microsoft is the Surface Phone, which, according to sources, should see daylight in spring 2017. 2016-07-30 08:52 Bogdan Popa

12 Waze Beta Reminds Parents Not to Forget Children in the Car

The latest feature, however, is unusual, to say the least. Waze seems to think that its users might have trouble remembering things and might forget different objects, and even people, in the car; more specifically, kids.

These things do happen from time to time, and Waze believes that this problem could potentially affect millions of its users, since it has introduced a new feature that will provide notifications so that parents won't forget their kids in the car. The new feature in the beta version has been spotted by Geektime.

Still, the feature is quite useful as it can help parents avoid accidents and unfortunate events. The new Waze feature is available in the upcoming beta version of the app. It will allow users to set a custom message and receive it as a notification when they reach their destination.

The feature is available under app settings, underneath the options to prevent auto-lock and keep Waze on top. In addition, the reminder can be disabled at any time from the shortcut in the pop-up window.

Just last month, Waze received another update that would allow users from certain cities to avoid difficult intersections and thus streamline traffic and avoid traffic jams. The new feature won't affect routing or ETA, as it was designed to balance the latter and limit the number of difficult intersections. 2016-07-30 08:51 Alexandra Vaidos

13 United Kingdom: Now TV launches contract-free broadband offer

Now TV, run by the UK-based company Sky, has launched a new choice of broadband and home phone packages. The main selling point of the offerings is that you can pay a £40 setup fee for the option to go contract-free, meaning you can move away from Now TV to another provider any time you like. The offerings are reasonably priced and fairly competitive with other providers in the UK. With the Entertainment Pass – which offers 11 paid channels, plus catch-up and 250 on-demand box sets – and Brilliant Broadband (15.1 – 22.0Mbps), the combo costs a total of £27.98, including line rental. When you sign up, there is, of course, the one-off £40 setup fee if you choose to go contract-less.

The combo is topped off with a free Roku-powered Now TV set-top box, allowing you to watch the box sets you've purchased in the combo deal. Additionally, you'll be able to use the box to watch live Freeview (requires an aerial), BBC iPlayer, ITV Hub, All 4, and Demand 5. It also allows you to pause and rewind live TV by up to 30 minutes. The broadband options are Brilliant Broadband (17Mb), Fab Fibre (up to 38Mb), and Super Fibre (up to 76Mb). As for the TV passes, there is the Entertainment Pass, Sky Cinema Pass, Sky Sports Month Pass, and the Kids Pass.

Contract-less broadband offers in the UK are pretty hard to come by, making Now TV's offer fairly unique. Most contracts in the country typically last 12-18 months, but it's possible to find shorter and longer offers.

Source: Broadband Choices 2016-07-30 08:38 Paul Hill

14 Wireshark 2.0.5 Released as the World's Most Popular Network Protocol Analyzer

This is the fifth maintenance update to the Wireshark 2.0 series, which is currently the latest stable and most advanced branch of the open source project used by numerous security experts around the globe for analysis and troubleshooting of network issues, with the ultimate goal of hardening the security of their networks.

According to the release notes, Wireshark 2.0.5 is here to resolve over 20 issues reported by users since the previous maintenance update, version 2.0.4, as well as to update the protocol and capture file support. It's worth noting that Wireshark 2.0.5 promises to patch a total of nine security vulnerabilities. These security patches fix issues and crashes that occurred with various core components of Wireshark, including the CORBA IDL dissector, LDSS dissector, and RLC dissector, as well as PacketBB. Infinite loop issues with MMSE, WAP, WBXML, and WSP were addressed as well, along with long loop problems with OpenFlow and RLC.

The Wireshark 2.0.5 maintenance release updates the built-in protocol support for 802.11 Radiotap, BGP, CAN, CANopen, H.248 Q.1950, IPv4, IPv6, LANforge, LDSS, MPTCP, OSPF, PacketBB, PRP, RLC, RMT-FEC, RSVP, RTP MIDI, T.30, TDS, USB, WAP, WBXML, WiMax RNG-RSP, WSP, and the pcapng capture file.

If you want to know what exactly has been changed in the fifth bugfix release of the world's most popular network protocol analyzer software, we recommend you check out the full changelog attached below. In the meantime, you can download Wireshark 2.0.5 for GNU/Linux, Mac OS X, and Microsoft Windows right now via our website. 2016-07-30 01:38 Marius Nestor

15 Using Angular Typeahead

A program's requirement may be to allow the user to key in some characters and, based on the keyed-in value, display the matching results. This is where we use typeaheads to display results based on the entered text in a text box. In this article, we will populate the typeahead dynamically from a Web service, and the data displayed will be in a tabular format with headers. This is all implemented using AngularJS.

This article assumes that a Web service already exists and that it returns the data in a JSON format. For this example, we will use an API that returns the country name and the capital based on the characters keyed in by the user.

To implement typeahead, we use the Angular UI Bootstrap directive "uib-typeahead". The syntax is as follows:

Script 1
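A minimal sketch of such markup, assuming UI Bootstrap's uib-typeahead directive; the model name selectedCountry and the template id countryTemplate.html are illustrative:

```html
<input type="text" class="form-control" ng-model="selectedCountry"
       uib-typeahead="country as countryTypeAheadLabel(country) for country in getCountry($viewValue)"
       typeahead-popup-template-url="countryTemplate.html">
```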

There are two main components in the preceding syntax: the ng-model binding that holds the selected value, and the uib-typeahead expression itself.

Uib-typeahead again has two parts: the label expression (country as countryTypeAheadLabel(country)) and the data source (for country in getCountry($viewValue)).

Method getCountry($viewValue) retrieves the data from the Web API. The definition of the function is as shown next:

Script 2
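A sketch of such a function, assuming $http is injected into the controller and a hypothetical /api/countries endpoint that filters server-side on the keyed-in text:

```javascript
$scope.getCountry = function (viewValue) {
    // First return: hand the promise back to uib-typeahead.
    return $http.get('/api/countries', { params: { search: viewValue } })
        .then(function (response) {
            // Second return: resolve with the matches for the popup.
            return response.data;
        });
};
```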

$viewValue is a default parameter provided in Angular that contains the text that the user keys in. The 'getCountry' function takes the keyed-in value as a parameter and calls the REST API. The response returned from the Web API is the function's return value.

Because the REST API call is asynchronous, there are two return statements: one where the Web API is invoked ($http.get) and one within the success callback of the Web API method. This ensures that the data is passed on to the "typeahead-popup-template-url" for display.

Method countryTypeAheadLabel formats the data for the selected country. Because, in this example, country is an object, it cannot be directly bound. If multiple properties are to be displayed for the selected data, this function is used for formatting the data.

Script 3
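A sketch of the formatter, assuming each country object carries name and capital properties:

```javascript
$scope.countryTypeAheadLabel = function (country) {
    // Nothing selected yet: show an empty label in the input box.
    if (!country) {
        return '';
    }
    // Concatenate the country name and its capital.
    return country.name + ' (' + country.capital + ')';
};
```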

The countryTypeAheadLabel function verifies if any country is selected. If selected, it concatenates and returns the country name and the capital.

Next is the "typeahead-popup-template-url". This is an ng-template that displays the result in a tabular format.

Script 4
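A sketch of the template, again assuming name and capital properties; matches, isActive, and selectMatch are supplied by UI Bootstrap's typeahead popup scope:

```html
<script type="text/ng-template" id="countryTemplate.html">
  <table class="table table-condensed">
    <thead>
      <tr>
        <th>Country</th>
        <th>Capital</th>
      </tr>
    </thead>
    <tbody>
      <!-- Each match wraps the original object in match.model -->
      <tr ng-repeat="match in matches"
          ng-class="{active: isActive($index)}"
          ng-click="selectMatch($index)">
        <td>{{match.model.name}}</td>
        <td>{{match.model.capital}}</td>
      </tr>
    </tbody>
  </table>
</script>
```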

A template is defined in a script tag with the type 'text/ng-template'. The thead section defines the header columns for the tabular display, and the tbody rows display the data. The matching content is stored in a collection named 'matches', and the syntax to access the object properties is "match.model.<property>". In the preceding example, we are displaying the country name and the capital.

The output looks as shown in Figure 1 when the user is keying in the text.

Figure 1: The output from keying in the text

Once the user selects one of the countries, it is displayed as shown in Figure 2:

Figure 2: After a country has been selected

In this example, we are displaying only two columns: the country and the capital. The template can be modified to display multiple columns, and it can be styled with Bootstrap classes.

In this example, the data was retrieved asynchronously and bound to the typeahead. The Web API is written such that it can take the keyed-in value as an input parameter and return the data; in other words, the filtering for the data is done within the Web API. The data is then bound to the template to display the data in a tabular format.

The Web service is invoked for every letter keyed in. This can potentially be a costly operation if the Web API takes time to return the data. Optimization techniques such as server-side caching can be considered to improve the performance. https://angular-ui.github.io/bootstrap/ 2016-07-30 00:00 Uma Narayanan

16 Advanced Concepts of Java Object Serialization

Serialization literally refers to arranging something in a sequence. It is a process in Java where the state of an object is transformed into a stream of bits. The transformation maintains a sequence in accordance with the metadata supplied, such as a POJO. Perhaps it is due to this transformation from an abstraction to a raw sequence of bits that it is referred to as serialization by etymology. This article takes up serialization and its related concepts and tries to delineate some of its nooks and crannies, along with their implementation in the Java API. Serialization makes any POJO persistable by converting it into a byte stream. The byte stream then can be stored in a file, in memory, or in a database.

Figure 1: Converting to a byte stream

Therefore, the key idea behind serialization is the concept of a byte stream. A byte stream in Java is an atomic collection of 0s and 1s in a predefined sequence. Atomic means that they are not further derivable. Raw bits are quite flexible and can be transmuted into anything: a character, a number, a Java object, and so forth. Bits individually do not mean anything unless they are produced and consumed by the definition of some meaningful abstraction. In serialization, this meaning is derived from a predefined data structure called a class, and it is instantiated into an active entity called a Java object. The raw bit stream then is stored in a repository such as a file in the file system, an array of bytes in memory, or the database. At a later time, this bit stream can be restored back into its original Java object in a reverse procedure. This reverse process is called deserialization.

Figure 2: Serialization

The object serialization and deserialization processes are designed to work recursively. That means, when any object at the top of an inheritance hierarchy is serialized, the inherited objects get serialized. The reference objects are located recursively and serialized. During the restoration, a reverse process is applied and the object is deserialized in a bottom-up fashion. An object to be serialized must implement the java.io.Serializable interface. This interface contains no members and is used to designate a class as serializable. As mentioned earlier, all inherited subclasses are also serialized by default. All the member variables of the designated class are persisted except the members declared as transient and static; they are not persisted. In the following example, class A implements Serializable. Class B inherits class A; as a result, B is also serializable. Class B contains a reference to class C. Class C also must implement the Serializable interface; otherwise, java.io.NotSerializableException will be thrown at runtime.
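A minimal sketch of the hierarchy just described (the field names are illustrative):

```java
import java.io.Serializable;

class C implements Serializable {   // must be Serializable, or
    String data;                    // NotSerializableException results
}

class A implements Serializable {
    int id;
    transient String scratch;       // not persisted
    static String label;            // not persisted
}

class B extends A {                 // serializable via inheritance
    C ref = new C();                // serialized recursively
}
```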

In case you want to read or write a single unique object to a stream, use the readUnshared and writeUnshared methods instead of readObject and writeObject, respectively.

Observe that any changes in the static and transient variables are not stored in the process. There are a number of problems with the serialization process. As we have seen, if a superclass is declared serializable, all the subclasses also get serialized. This means, if A inherits B inherits C inherits D... all the objects would be serialized! One way to make fields of these classes non-serializable is to use the transient modifier. What if we have, say, 50 fields that we do not want to persist? We have to declare those 50 fields as transient! Similar problems can arise in the deserialization process. What if we want to deserialize only five fields rather than restore all 10 fields serialized and stored previously? There is a specific way to stop serialization in the case of inherited classes. The way out is to write your own readObject and writeObject methods, as follows.
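One common pattern, sketched here under the assumption that a subclass wants to veto the serializability it inherits, is to define the private hook methods and throw NotSerializableException:

```java
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

class Child extends A {   // A is serializable (see above)
    // Veto serialization of this subclass.
    private void writeObject(ObjectOutputStream out) throws IOException {
        throw new NotSerializableException("Child must not be serialized");
    }

    // Veto deserialization as well.
    private void readObject(ObjectInputStream in) throws IOException {
        throw new NotSerializableException("Child must not be deserialized");
    }
}
```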

It is recommended that a serializable class declare a unique variable, called serialVersionUID, to identify the data persisted. If this optional variable is not supplied, the JVM creates one through internal logic, which is time consuming.
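Declaring it is a one-line addition (the value 1L here is arbitrary):

```java
class Customer implements java.io.Serializable {
    // Explicit version ID; without it, the JVM derives one reflectively.
    private static final long serialVersionUID = 1L;
}
```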

Compile to create the class file:

The output would be like what's shown in Figure 3.

Figure 3: Results of the compiled class file

In a nutshell, the serialization interface needs some changes to offer better control over the serialization and deserialization process.

The Externalizable interface provides some improvement. But, bear in mind, the automatic implementation of the serialization process with the Serializable interface is fine in most cases. Externalizable is a complementary interface to allay many of its problems where better control over serialization/deserialization is sought.

The process of serialization and deserialization is pretty straightforward, and most of the intricacies of storing and restoring an object are handled automatically. Sometimes, it may happen that the programmer needs some control over the persistence process; say, the object to be stored needs to be compressed or encrypted before storing, and similarly, decompression and decryption need to happen during the restoration process. This is where you need to implement the Externalizable interface. The Externalizable interface extends the Serializable interface and provides two member functions to override by the implementing classes.
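A sketch of a class overriding both methods; the Point fields are illustrative:

```java
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class Point implements Externalizable {
    private int x;
    private int y;

    // Externalizable requires a public no-argument constructor.
    public Point() { }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        // Full control: we decide exactly what gets written, and how.
        out.writeInt(x);
        out.writeInt(y);
    }

    @Override
    public void readExternal(ObjectInput in)
            throws IOException, ClassNotFoundException {
        // Must read the fields back in the same order.
        x = in.readInt();
        y = in.readInt();
    }
}
```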

The readExternal method reads the byte stream from ObjectInput, and writeExternal writes the byte stream to ObjectOutput. ObjectInput and ObjectOutput are interfaces that extend the DataInput and DataOutput interfaces, respectively. The polymorphic read and write methods are called to serialize an object.

Externalization makes the serialization and deserialization processes much more flexible and gives you better control. But, there are a few points to remember when using the Externalizable interface; most notably, the implementing class must provide a public no-argument constructor.

According to the preceding properties, any non-static inner class is not externalizable. The reason is that the JVM modifies the constructor of inner classes by adding a reference to the parent class at the time of compilation. As a result, the idea of having a no-argument constructor is simply inapplicable in the case of non-static inner classes. Because we can control what fields to persist and what not to with the help of the readExternal and writeExternal methods, making a field non-persistable with a transient modifier is also irrelevant.

Serializable and Externalizable are tagging interfaces that designate a class for persistence. The instances of these classes may be transformed and stored in byte stream storage. The storage may be a file on disk or a database, or the stream may even be transmitted across a network. The serialization process and Java I/O streams are inseparable. They work together to bring out the essence of object persistence. 2016-07-30 00:00 Manoj Debnath

17 What Is Hazelcast?

Hazelcast is the Leading Open Source In-Memory Data Grid: Distributed Computing, Simplified.

The main areas where Hazelcast can really do wonders are: in-memory data grid, caching, in-memory NoSQL, messaging, application scaling, and clustering. To start, Hazelcast needs a set of configurations, such as the port number of the first node, whether multicast support is activated, and so forth. Besides the set of default configurations that are stored in the hazelcast-default.xml file that comes with the Hazelcast JAR, we can provide a custom set of configurations via XML or programmatically.

Hazelcast supports declarative and programmatic configurations. Further, we will examine each of these cases via the most common usage cases.

When Hazelcast starts, it first checks the value of the hazelcast.config system property. Basically, via this system property, you specify the path where Hazelcast should look for the hazelcast.xml file (the path can be a normal one or a classpath reference with the prefix CLASSPATH). The hazelcast.xml file contains the custom configurations for Hazelcast. By default, this file should be located in the current working directory or the classpath. A simple hazelcast.xml file can look like this:
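A sketch of such a file, using the Hazelcast 3.x schema; the port matches the 6005 used by the sample application described below:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
    <network>
        <!-- First node starts at 6005 instead of the default 5701 -->
        <port auto-increment="true">6005</port>
        <join>
            <multicast enabled="true"/>
        </join>
    </network>
</hazelcast>
```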

To start a Hazelcast instance with the default configurations, we can simply write two lines of code, as follows:
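A sketch of those two essential lines, with the imports they need:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

// An empty Config object carries only Hazelcast's default settings.
Config cfg = new Config();
HazelcastInstance instance = Hazelcast.newHazelcastInstance(cfg);
```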

Notice that the above approach will ignore a potentially existing hazelcast.xml! The Config object represents all the configurations to start a HazelcastInstance, and because we pass an "unused" cfg, it will contain the default configurations for Hazelcast. Most probably, you will use the preceding approach when you want to programmatically alter the default configurations. In some examples, you may also see the following approach:
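That is, the no-argument factory method, which does consult the XML configuration:

```java
// Checks the hazelcast.config property / hazelcast.xml first;
// the built-in defaults are only the fallback.
HazelcastInstance instance = Hazelcast.newHazelcastInstance();
```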

Notice that the above approach will start with default configurations ONLY if there is no hazelcast.xml found! You can see such an application in the code that is delivered with this article, under the name HazelcastXML. Please refer to the download link at the end of this article. This application uses the above hazelcast.xml. When you run this application once, notice that the first node is started on port 6005, instead of the default one, 5701:

Figure 1: The first node in the cluster started at custom port 6005

If you run this code once more, a second node starts and these two nodes will form a cluster; this is possible because Hazelcast allows more than one instance (node) to be created on the same JVM. Check out Figure 2:

Figure 2: Two nodes of a cluster

The source code necessary for this is embarrassingly simple: it is essentially the same one-line Hazelcast.newHazelcastInstance() call shown above.

Another approach to configuring Hazelcast consists of using the XmlConfigBuilder class. Via this class, we can load the configurations from an XML file that can be referenced via a URL, file path, or input stream. For example, we can point to a location on disk (file path) as shown below:
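A sketch of the file-path variant; the path is illustrative, and the constructor throws FileNotFoundException:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.XmlConfigBuilder;
import com.hazelcast.core.Hazelcast;
import java.io.FileNotFoundException;

public class XmlBuilderDemo {
    public static void main(String[] args) throws FileNotFoundException {
        // Point this at your own configuration file.
        Config cfg = new XmlConfigBuilder("/opt/configs/hazelcast.xml").build();
        Hazelcast.newHazelcastInstance(cfg);
    }
}
```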

The complete example is named HazelcastXmlConfigBuilder.

As was said earlier, the Config class can be used to provide a programmatic configuration approach. For example, we may want to programmatically provide the same configurations as we did above via hazelcast.xml. This can be accomplished as demonstrated below:
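A sketch of the programmatic equivalent of the XML shown earlier (custom port, multicast join):

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;

public class ProgrammaticConfig {
    public static void main(String[] args) {
        Config cfg = new Config();
        // Same settings as the XML above: port 6005, auto-increment on.
        cfg.getNetworkConfig().setPort(6005).setPortAutoIncrement(true);
        cfg.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(true);
        Hazelcast.newHazelcastInstance(cfg);
    }
}
```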

The complete example is named HazelcastConfig.

We can combine declarative and programmatic configurations. For example, let's suppose that we have an XML configuration file, myconfigs.xml, similar to the hazelcast.xml shown earlier.

We load this file via XmlConfigBuilder and add the join configuration programmatically. Here is the relevant code:
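A sketch of that mixed approach:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.XmlConfigBuilder;
import com.hazelcast.core.Hazelcast;
import java.io.FileNotFoundException;

public class MixedConfig {
    public static void main(String[] args) throws FileNotFoundException {
        // Declarative part: everything defined in myconfigs.xml.
        Config cfg = new XmlConfigBuilder("myconfigs.xml").build();
        // Programmatic part: add the join configuration on top.
        cfg.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(true);
        Hazelcast.newHazelcastInstance(cfg);
    }
}
```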

The complete example is named HazelcastMixt.

The Config instances can be shared between threads, but should not be modified after they are used to create HazelcastInstances. Hazelcast does not copy configurations to each node. So, if one wants to share a data structure, it needs to be defined in exactly the same way on every node.

Hazelcast comes with several data structures to store data, such as IList, IMap, MultiMap, and ISet. In this section, we will talk about IMap.

The concurrent, distributed, observable, and queryable map is the Hazelcast IMap. Here is a simple usage of it: we declare an IMap and pre-populate it with some random data. Moreover, we don't explicitly specify keys; we use the Hazelcast key generator (IdGenerator), which is capable of generating unique keys across nodes:
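A sketch of that flow; the map and generator names are illustrative:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.core.IdGenerator;

public class MapDemo {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        // Cluster-wide unique keys, with no coordination in our code.
        IdGenerator idGen = hz.getIdGenerator("product-ids");
        IMap<Long, String> products = hz.getMap("products");
        for (int i = 1; i <= 5; i++) {
            products.put(idGen.newId(), "Product-" + i);
        }
        System.out.println("products size: " + products.size());
    }
}
```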

If the map products exists (for example, it was configured and initialized in XML), Hazelcast will use that map. Otherwise, it will create it as an empty map (as above).

The data stored in this map will be available for all nodes in the cluster. For example, on a single node, you may see something like in Figure 3, left side, while on two nodes, you will see something like in Figure 3, right side:

Figure 3: Hazelcast distributed map

The complete example is named HazelcastIMap.

When we talk about Hazelcast messaging, we are talking about the mighty IQueue and ITopic. The IQueue is a concurrent, blocking, distributed, observable queue, and a simple example looks like the following:

First, we have the producer that puts entries in the queue:
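A producer sketch; the queue name and the "STOP" poison pill are illustrative:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

public class QueueProducer {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IQueue<String> queue = hz.getQueue("tasks");
        for (int i = 1; i <= 10; i++) {
            queue.put("task-" + i);   // blocks if the queue is full
        }
        queue.put("STOP");            // poison pill for the consumer
    }
}
```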

Second, we have a consumer for our queue:
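And a matching consumer sketch:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

public class QueueConsumer {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IQueue<String> queue = hz.getQueue("tasks");
        while (true) {
            String task = queue.take();   // blocks until an entry arrives
            if ("STOP".equals(task)) {
                break;                    // poison pill: stop consuming
            }
            System.out.println("Consumed: " + task);
        }
    }
}
```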

The complete example is named HazelcastBlockingQueue.

The ITopic is a publisher/subscriber distribution mechanism. Subscribers will process the messages in the order they are actually published. Check out a publisher implementation:
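A publisher sketch; the topic name is illustrative:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;

public class TopicPublisher {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ITopic<String> topic = hz.getTopic("news");
        for (int i = 1; i <= 5; i++) {
            topic.publish("message-" + i);  // fan-out to all subscribers
        }
    }
}
```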

And, the subscriber:
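Again as a sketch, registering a listener on the same topic:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;
import com.hazelcast.core.Message;
import com.hazelcast.core.MessageListener;

public class TopicSubscriber {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ITopic<String> topic = hz.getTopic("news");
        topic.addMessageListener(new MessageListener<String>() {
            @Override
            public void onMessage(Message<String> message) {
                System.out.println("Received: " + message.getMessageObject());
            }
        });
    }
}
```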

Figure 4 demonstrates an output with one publisher and three subscribers:

Figure 4: One publisher and three subscribers

The complete example is named HazelcastTopic. Well, if the preceding examples look impressive, you must know that this is just the beginning. Next, you should try to explore more about the amazing power of Hazelcast and read about Hazelcast locking, transactions, JCache support, interacting with a RDBMS, and so on.

You can download all code samples for this article here. 2016-07-30 00:00 Leonard Anghel

18 Introducing ASP.NET Core Dependency Injection

If you developed professional Web applications using ASP.NET MVC, you are probably familiar with Dependency Injection. Dependency Injection (DI) is a technique to develop loosely coupled software systems. ASP.NET MVC didn't include any inbuilt DI framework and developers had to resort to some external DI framework. Luckily, ASP.NET Core 1.0 introduces a DI container that can simplify your work. This article introduces you to the DI features of ASP.NET Core 1.0 so that you can quickly use them in your applications.

To understand how Dependency Injection works in ASP.NET Core 1.0, you will build a simple application. So, begin by creating a new ASP.NET Core 1.0 Web application by using an Empty project template.

Figure 1: Opening the new template

Then, open the Project.json file and add the required dependencies (you can get Project.json from this code download). Make sure to restore packages by right-clicking the References folder and selecting Restore Packages from the shortcut menu.

Then, create a DIClasses folder under the project root folder. Add an interface named IServiceType to the DIClasses folder. A type that is to be injected is called a service type. The IServiceType interface will be implemented by the service type you create later. The IServiceType interface is shown below:
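A sketch of the interface; the namespace and the string return type are assumptions consistent with how the GUID is displayed later:

```csharp
namespace DIDemo.DIClasses
{
    // The service contract that gets registered with the DI container.
    public interface IServiceType
    {
        string GetGuid();
    }
}
```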

The IServiceType interface contains a single method—GetGuid(). As the name suggests, an implementation of this method is supposed to return a GUID to the caller. In a realistic case, you can have any application-specific methods here.

Then, add a MyServiceType class to the DIClasses folder and implement IServiceType in it. The MyServiceType class is shown below:
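A sketch matching the description that follows:

```csharp
using System;

namespace DIDemo.DIClasses
{
    // Each instance gets its own GUID at construction time.
    public class MyServiceType : IServiceType
    {
        private readonly string guid;

        public MyServiceType()
        {
            guid = Guid.NewGuid().ToString();
        }

        public string GetGuid()
        {
            return guid;
        }
    }
}
```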

The MyServiceType class implements an IServiceType interface. The class declares a private variable—guid—that holds a GUID. The constructor generates a new GUID using the Guid structure and assigns it to the guid private variable. The GetGuid() method simply returns the GUID to the caller. So, every object instance of MyServiceType will have its own unique GUID. This GUID will be used to understand the working of the DI framework as you will see later.

Now, open the Startup.cs file and modify it as shown below:
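A minimal Startup sketch; only the AddScoped() registration is essential here, the MVC wiring is assumed boilerplate:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using DIDemo.DIClasses;

namespace DIDemo
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
            // Register the service type with a per-request lifetime.
            services.AddScoped<IServiceType, MyServiceType>();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseMvcWithDefaultRoute();
        }
    }
}
```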

Notice the line calling AddScoped(). This is how you register a service type with the ASP.NET Core DI container. The AddScoped() method is a generic method, and you mention the interface on which the service type is based (IServiceType) and a concrete type (MyServiceType) whose object instance is to be injected.

A type injected with AddScoped() has a lifetime of the current request. That means each request gets a new object of MyServiceType to work with. Let's test this by injecting MyServiceType into a controller.

Proceed by adding HomeController and Index view to the respective folders. Then, modify the HomeController as shown below:
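A sketch of the controller as described below:

```csharp
using Microsoft.AspNetCore.Mvc;
using DIDemo.DIClasses;

namespace DIDemo.Controllers
{
    public class HomeController : Controller
    {
        private readonly IServiceType obj;

        // The DI container injects the registered implementation here.
        public HomeController(IServiceType obj)
        {
            this.obj = obj;
        }

        public IActionResult Index()
        {
            ViewBag.Guid = obj.GetGuid();
            return View();
        }
    }
}
```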

The constructor of the HomeController accepts a parameter of IServiceType. This parameter will be injected by the DI framework for you. Remember that, for the DI to work as expected, a type must be registered with the DI container (as discussed earlier).

The IServiceType injected by the DI framework is stored in a private variable, obj, for later use. The Index() action calls the GetGuid() method on the MyServiceType object and stores the GUID in the ViewBag's Guid property. The Index view simply outputs this GUID as shown below:
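A sketch of the view:

```html
@* Index.cshtml: output the GUID captured by the action *@
<h2>@ViewBag.Guid</h2>
```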

Now, run the application and you should see something like this:

Figure 2: Viewing the GUID

Refresh the browser window a few times to simulate multiple requests. You will observe that a new GUID is displayed every time. This confirms the working of AddScoped() as discussed previously.

There are two more methods that can be used to control the lifetime of the injected object—AddTransient() and AddSingleton(). A service registered using AddTransient() behaves such that every request for an object gets a new object instance. So, if a single HTTP request requests a service type twice, two separate object instances will be injected. A service registered using AddSingleton() behaves such that all the requests to a service are served by a single object instance. Let's test these two methods, one by one.

Modify Startup.cs as shown below:
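Only the registration line changes from the earlier Startup sketch:

```csharp
// Per-resolution lifetime: every request for the service
// yields a new object instance.
services.AddTransient<IServiceType, MyServiceType>();
```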

In this case, you used the AddTransient() method to register the service type. Now, modify the HomeController like this:
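Continuing the earlier controller sketch (same usings and namespace):

```csharp
public class HomeController : Controller
{
    private readonly IServiceType obj1;
    private readonly IServiceType obj2;

    // Two parameters of the same service type simulate two
    // requests to the container within one HTTP request.
    public HomeController(IServiceType obj1, IServiceType obj2)
    {
        this.obj1 = obj1;
        this.obj2 = obj2;
    }

    public IActionResult Index()
    {
        ViewBag.Guid1 = obj1.GetGuid();
        ViewBag.Guid2 = obj2.GetGuid();
        return View();
    }
}
```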

This time, the HomeController has two parameters of IServiceType. This is done just to simulate two requests to the same service type. The GUIDs returned by both the object instances are stored in the ViewBag. If you output the GUIDs on the Index view, you will see this:

Figure 3: Viewing the GUIDs on the Index view

As you can see, the GUIDs are different within a single HTTP request, indicating that different object instances are getting injected into the controller. If you refresh the browser window, you will get different GUIDs each time. Now, modify Startup.cs and use AddScoped() again to register the type. Run the application again. Did you notice the difference? Now, both the constructor parameters point to the same object instance, as confirmed by the GUIDs.

Now, change Startup.cs to use the AddSingleton() method; the registration line is the same as before, with AddSingleton() in place of AddScoped(). Also, make corresponding changes to the HomeController (it will now have just one parameter) and the Index view. If you run the application and refresh the browser as before, you will observe that the same GUID is displayed for all requests, confirming the singleton mode. 2016-07-30 00:00 Bipin Joshi

19 Exploring Java BitSet

BitSet is a class defined in the java.util package. It creates an array of bits represented by boolean values. The size of the array is flexible and can grow to accommodate additional bits as needed. Because it is an array, the bit values can be accessed by non-negative integers as an index. The interesting aspect of BitSet is that it makes it easy to create and manipulate bit sets that basically represent a set of boolean flags. This article shall provide the necessary details on how to use this API, with appropriate examples in Java.

The BitSet class provides two constructors: a no-argument constructor to create an empty BitSet object, and a one-argument constructor taking an integer that represents the number of bits in the BitSet.

The default value of each bit in a BitSet is boolean false, with an underlying representation of 0 (off). A bit position in the BitSet can be set to 1 (on), that is true, by passing the index of the bit as an argument to the set method. The index is zero-based, similar to an array. Once you call the clear method, the bit values are set back to false. To access a specific value in the BitSet, the get method is used with an integer argument as an index.

The class also provides methods for common bit manipulation using bitwise logical AND, bitwise logical OR, and bitwise logical exclusive OR with the and, or, and xor methods, respectively. For example, assume that there are two BitSet instances, bit1 and bit2. Then the statement bit1.and(bit2) performs a bitwise logical AND operation; similarly, bit1.or(bit2) performs a bitwise logical OR operation, and bit1.xor(bit2) performs a bitwise logical XOR operation. In each case, the result is stored in bit1. If there are more bits in bit2 than in bit1, the additional bits of bit2 are ignored; as a result, the size of bit1 remains unchanged even after the result of the bitwise operation is stored. In effect, the bitwise operations are performed in a logical bit-by-bit fashion. The size method returns the number of bits of space actually in use by the BitSet. There is another method, called length, that returns the logical size of the BitSet; that is, the index of the highest set bit, plus 1. Two BitSets can be compared for equality with the equals method: they are equal if and only if they are the same bit by bit.
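
A short, self-contained illustration of these operations (the bit values are arbitrary):

import java.util.BitSet;

public class BitSetOps {
    public static void main(String[] args) {
        BitSet bit1 = new BitSet();
        BitSet bit2 = new BitSet();

        bit1.set(1); bit1.set(3);           // bit1 = {1, 3}
        bit2.set(3); bit2.set(5);           // bit2 = {3, 5}

        bit1.and(bit2);                     // bit1 = {3}
        bit1.or(bit2);                      // bit1 = {3, 5}
        bit1.xor(bit2);                     // bit1 = {} (identical bits cancel out)

        System.out.println(bit1.isEmpty()); // true
        System.out.println(bit2.length());  // 6: index of the highest set bit (5) + 1
    }
}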

Let's implement a simple algorithm called the Sieve of Sundaram, a variation that's more efficient than the Sieve of Eratosthenes, to find the list of prime numbers within a range using BitSet.

Apart from finding primes by brute-force techniques, the algorithm "Sieve of Eratosthenes" is quite intriguing. But here, we shall implement a variation of that algorithm discovered by the mathematician S. P. Sundaram in 1934; hence, it is called the "Sieve of Sundaram." The idea is to cross out, from a list of integers ranging from 1 to n, every number of the form i + j + 2ij, where i and j are integers with 1 <= i <= j and i + j + 2ij <= n. Each remaining number is then doubled and incremented by 1. Finally, we get a list containing all the odd prime numbers below 2n + 2 (all of them except 2). The main difference between Eratosthenes' method and Sundaram's method is that Sundaram removes the numbers of the form i + j + 2ij directly, instead of crossing out the multiples of each prime.

This is the key variation that makes the sieve more efficient than Eratosthenes' algorithm. I'm skipping the details, as they are out of scope here; interested readers may refer to the literature for more details on these algorithms.
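
A minimal BitSet-based sketch of the Sieve of Sundaram, following the definitions above (n = 50 is arbitrary), might look like this:

import java.util.BitSet;

public class SundaramSieve {
    // Prints all odd primes below 2n + 2 (2 itself must be added separately).
    public static void main(String[] args) {
        int n = 50;
        BitSet crossed = new BitSet(n + 1);

        // Cross out every number of the form i + j + 2ij with 1 <= i <= j.
        for (int i = 1; i + i + 2 * i * i <= n; i++) {
            for (int j = i; i + j + 2 * i * j <= n; j++) {
                crossed.set(i + j + 2 * i * j);
            }
        }

        // Each remaining k in 1..n yields the odd prime 2k + 1.
        for (int k = 1; k <= n; k++) {
            if (!crossed.get(k)) {
                System.out.print((2 * k + 1) + " ");
            }
        }
    }
}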

BitSet is a convenient class for bit manipulation. Individual bits are represented by boolean true and false values arranged in an array fashion. The set method sets the specified bit to the "on" state, and the clear method sets the specified bit to the "off" state. The get method returns true if the bit is on and false if it is off. The and, or, and xor methods of BitSet perform a bit-by-bit logical AND, OR, and XOR between BitSets, respectively. The result is stored in the BitSet instance that invoked the method. 2016-07-30 00:00 Manoj Debnath

20 Unit Tests for Apps Written in AngularJS Using Karma and Jasmine

By Andrey Zhilinsky

Any programmer using or starting to use AngularJS for unit testing knows there is an apparent lack of reliable step-by-step resources to help them on their daily journey. When initially working with the framework, many programmers have to start by consulting independent sources and looking for ways to solve issues using dispersed internet resources. Detailed instructions on how to do a certain task on a real project seemingly don't exist. Of course, there are open forums where developers talk about common problems and possible solutions, but they are basic guidelines at best. When you encounter issues that require in-depth analysis of available solutions, you are bound to waste time doing the research yourself.

This is why some of the conclusions made from direct experience are shared in the following article. This guide is different from others because it is based on a real project developed for a contemporary market and relies on the latest version of AngularJS. If you are using AngularJS, or plan to start in the near future, please read on.

The stack of technologies is the following: The Web application itself is written in AngularJS, the testing runner is Karma, and the testing framework is Jasmine. They were chosen for the project as the most widely used and popular frameworks, making them reliable and easier to work with. Another benefit is their constant evolution and development, making each a flexible solution for changing times.

The most practical way to mock a service method is by using spyOn. You need to inject the service to mock and define what the mock should do. It can call through to the real function or call a defined fake method.
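
A minimal sketch, assuming a hypothetical app module with a myService service exposing getData():

describe('myService', function () {
  var myService;

  beforeEach(module('app'));
  beforeEach(inject(function (_myService_) {
    myService = _myService_;
  }));

  it('can delegate to the real implementation', function () {
    spyOn(myService, 'getData').and.callThrough();
    myService.getData();
    expect(myService.getData).toHaveBeenCalled();
  });

  it('can substitute a fake implementation', function () {
    spyOn(myService, 'getData').and.callFake(function () {
      return 'fake data';
    });
    expect(myService.getData()).toBe('fake data');
  });
});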

The spy can then be verified with matchers such as toHaveBeenCalled() or toHaveBeenCalledWith(), as in the sketch above.

This method is very helpful for mocking properties of the navigator, window, and document objects, and also DOM elements. Working with a real $document is not a good idea because there are Karma scripts in the document's body, as well as other essential runner content, which you can damage during the test. You would have to be very careful and clean up after each test. A better way is to create an empty document and perform testing on it.

Angular's $window object is simply a reference to the global window object. The goal is easy mocking of the window object.

You need to use jasmine.createSpy() to mock a standalone function. The callThrough() and callFake() helpers can be used in the same way as with spyOn().
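
For example, a free-standing spy for a callback (the doubling behavior is arbitrary):

it('tracks calls to a standalone callback', function () {
  var callback = jasmine.createSpy('callback').and.callFake(function (value) {
    return value * 2; // arbitrary fake behavior
  });

  expect(callback(21)).toBe(42);
  expect(callback).toHaveBeenCalledWith(21);
});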

The filter could be mocked in the same way as method factories. Note that you need to add 'Filter' to the name. We use btfModal in this project for modal dialogs. The following code shows how to mock and test it.

Sometimes, to test a parent directive you need to mock the child one. You can replace the directive definition in the module.

Nested describe() methods are very useful. For instance, you can use them to test specific parameters.
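
A sketch with a hypothetical priceFilter, grouping cases by input:

describe('priceFilter', function () {
  describe('when the input is null', function () {
    it('returns an empty string', function () {
      expect(priceFilter(null)).toBe('');
    });
  });

  describe('when the input is numeric', function () {
    it('formats the amount', function () {
      expect(priceFilter(10)).toBe('$10.00');
    });
  });
});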

First, you need to mock the async method call. Pay attention to the fact that you can simulate a promise rejection.

Second, the done argument should be included in the method signature. The done() function is passed to it(), beforeEach(), and afterEach().

Call it after all processing is complete. Note that you need to call $timeout.flush() to flush all pending tasks.
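
Putting those pieces together, a sketch (myService.load is hypothetical; $q and $timeout are assumed to be injected in a beforeEach):

it('handles a rejected promise', function (done) {
  // Simulate a rejection from the mocked async method.
  spyOn(myService, 'load').and.callFake(function () {
    return $q.reject('error');
  });

  myService.load().catch(function (reason) {
    expect(reason).toBe('error');
    done(); // tell Jasmine the async expectations have run
  });

  $timeout.flush(); // flush pending tasks so the promise settles
});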

You need to inject the $compile service to render the directive. Also, you need $rootScope to create the directive's scope.

Please remember that you need to initiate a digest cycle after rendering the directive. Also, you need to do that after the directive's scope changes.
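
A sketch, assuming a hypothetical my-directive whose template renders scope.value:

var element, scope;

beforeEach(inject(function ($compile, $rootScope) {
  scope = $rootScope.$new();
  element = $compile('<my-directive></my-directive>')(scope);
  scope.$digest(); // initiate a digest cycle to render the template
}));

it('re-renders when the scope changes', function () {
  scope.value = 42;
  scope.$digest(); // digest again after the scope change
  expect(element.text()).toContain('42');
});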

Use the triggerHandler method to simulate an event.

Simulating an event and checking how it was processed is a regular task.
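
For example (the clicked flag is a hypothetical value set by the directive's click handler):

it('reacts to a click', function () {
  element.triggerHandler('click');
  scope.$digest();
  expect(scope.clicked).toBe(true);
});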

Another common test is checking whether a handler is bound to an event. You can simply use $broadcast or $emit to fire the event. The only peculiarity is that you need to start the $digest cycle by calling $apply() on the element's scope or on $rootScope.

To test form validation, you need to set a control value. You can do that by using $setViewValue.
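
A sketch, assuming the template exposes the form and its control through name attributes (form and email are hypothetical names):

it('flags an invalid email', function () {
  scope.form.email.$setViewValue('not-an-email');
  scope.$digest();
  expect(scope.form.email.$valid).toBe(false);
});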

Sometimes, you just need to skip loading the resources. To do that, you can use the following code:

As for static content such as images, add the following to the karma.conf.js:

Also, you need to add the proxies option. Note that you need to prefix the static files path with Karma's base path, "/base/src".
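
A sketch of both settings, assuming the images live under src/images:

// karma.conf.js (fragment)
files: [
  // serve static images without loading them as test scripts
  { pattern: 'src/images/**/*', watched: false, included: false, served: true }
],
proxies: {
  // map the app's image URLs onto Karma's base path
  '/images/': '/base/src/images/'
}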

AngularJS was created with easy testing features in mind; therefore, apps written in the framework are intrinsically simple to test. Adding Karma and Jasmine to the mix allows you to effortlessly reach 100 percent unit test coverage of a frontend app of any complexity. Hopefully, this short guide helps you achieve that in your day-to-day programming job.

Andrey Zhilinsky is a senior developer at Itransition, an international software development and systems integration company, where he is responsible for projects focused on automation in small and midsized enterprises, mobile in automation, and generation of dynamic content and Web forms using ASP.NET MVC, Entity Framework, and AngularJS. He graduated from Belarusian State University with a degree in economic cybernetics and worked in a division of the National Bank before programming for clients in Germany, the U.K., the U.S.A., and Canada. For more information, please visit http://www.itransition.com/.

This article was contributed for exclusive use on Developer.com. 2016-07-30 00:00 www.developer

21 The Key to AI Automation: Human-Machine Interfaces

The 4th industrial revolution is undoubtedly being driven by artificial intelligence systems, and the future is definitely here, even though it doesn't look like an episode of "The Jetsons" or "The Terminator" just yet. The current generation of artificial intelligence technology is most effective in the capacity of augmenting human intelligence. This augmentation requires new thinking about the interaction between machine intelligence and the humans who work with it.

Virtually every operation a bank does takes a set of data as an input; some sort of judgment is performed, and then the execution of the action is done digitally. If a customer makes an address change request, a risk judgment is made; then, the address change is done. A loan origination is a series of provided data sets, judgments, requests for more data, then the composition of digital documents to be executed at closing.

Machine learning algorithms such as AzureML or Amazon Machine Learning can take in data sets, observe the outcome scores or judgments that were made, and produce a model that can predict the outcome. The outcomes these models learn can be objectively defined, such as whether a loan performed, or the models can observe the judgments made by people and reproduce the same judgments on future data sets.

Other artificial intelligence products— like Microsoft's Cortana, Google DeepMind, or IBM Watson—can work on more free-form problems. These types of systems are well adapted to interfacing with people and translating a chaotic world into a series of more solvable problems.

The state of the current technology isn't perfect, as Microsoft recently demonstrated when its Tay AI went on a racist genocidal rant on Twitter. Since this incident, movies like Terminator can be interpreted in a new way. What if the first AI does become self-aware and learns about humanity from reading YouTube comments? Microsoft learned their lesson with Tay and made adjustments to more closely monitor and adjust how the machine is learning to avoid instances like this happening in the future.

Machine learning models are more advanced than previous models because they can evolve: the model can be re-tested and adjusted as the data changes over time. The challenge this can present to a bank is if the model starts to evolve in an unintended direction. Car salesmen could learn how you underwrite automotive loans and stretch their clients' applications. Fraudsters could observe how you're detecting fraud and adjust what they are doing. With the current state of AI and machine learning, human supervision is absolutely necessary. Is your evolving AI bot about to go on a racist rampage? Has your adaptive machine learning algorithm become better at underwriting the risk of normal borrowers but more vulnerable to fraudsters?

20 years from now, it is highly likely there will be cars on the road that don't have a steering wheel and will be able to dutifully get you to your destination more safely than any human driver could deliver. That day isn't today… but what is available right now is Tesla's autopilot feature.

Using the autopilot feature is a terrifying experience at first, especially if you start using it on non-interstate highway roads. As a seasoned driver, the idea of giving up control of the steering, acceleration, and braking is nerve wracking.

For a driver to successfully operate this highly sophisticated computer controlled system, the driver needs to understand what the machine knows and what it doesn't. Fortunately, the designers at Tesla understand this and provide a helpful heads-up display that shows what lines it sees on the road, what other cars it is aware of that are around you, and highlights the elements that it is looking at to decide where to go. Sometimes, it highlights the lines on the road to show that it is following the lines; other times, it highlights the car it is following.

This feedback between the human driving the car and the autopilot system is absolutely essential for the hybrid human/artificial intelligence state we are now in. The software is showing you what it is thinking and giving you clues about what it is going to do next before it does it.

Tesla's autopilot builds driver confidence and makes the interaction natural by letting the human operator know what it is doing and why. Would you trust a robot to hold all of your money and just trust it knew what it was doing?

Virtually every wealth management institution has either built or licensed a robo-advisor platform. The core of these platforms typically employs the same basic strategy: rotating ETFs to manage exposure to different classes of investments for diversification purposes and employing a tax loss harvesting strategy. Some robo-advisors are opaque and come across looking like a single account, leaving you to trust that the platform knows what it's doing while it shows you how it is performing relative to its benchmarks. This would be like a self-driving car with a single green light that says, "Trust me, we're not about to dive into oncoming traffic." Even if the technology is perfect, are you going to blindly trust it without some kind of reassurance that it knows what it is doing?

The better robo-advisors provide detailed interfaces that describe the trades they are performing and why they are performing those trades. Is this ETF being sold solely to re-balance domestic versus foreign equities? Is this ETF being sold with the intent to buy another ETF to employ tax loss harvesting? The explanation and visibility into what the robo-advisor is doing is key to building client confidence in the software and helping them understand the value they are receiving from using it.

Over time, clients will learn to trust the software and understand the value it brings them. With this trust, they won't need to check it as often but anytime they need an explanation it is there.

Banks of the digital era can no longer afford to have humans be the first line of defense in identifying risks and fraud. In earlier eras, banks could have a trained professional review each transfer and make a decision as to whether or not they believe the transaction is risky.

As transaction volume went up, having a human review every transaction became impossible, which led to the rise of intelligent transaction analysis systems that could identify, in real time, transactions not matching the typical behavior of the account holder.

Many banks utilize statistical models to perform real-time underwriting for loans of various types. These systems utilize sophisticated models to look at previously approved loans and determine which kinds of loans will perform and which won't. They typically yield a single number that describes the risk of the loan and, based on risk tolerance, approve the loan or send it to underwriting as a "soft decline" until a human underwriter can review the loan application and decide whether or not to approve it.

There is an entirely new generation of artificial intelligence tools that can be used to tackle problems of this nature, including Azure Machine Learning, Amazon Machine Learning, or IBM's Watson. Similar to previous generation statistical models, these systems typically yield some kind of number that, in the credit risk scenario, would equate to the probability of the loan becoming a non-performing loan.

This is where the challenge comes in. If declined loans are sent to a human underwriter for additional review, those humans need a clear explanation as to what factors concerned the machine learning model. With an explanation, they can quickly zero in on what might require further clarification. Without an explanation, they are left with reviewing every detail by hand.

Machine learning models can be deployed in a continuous learning state where the model can be re-trained on new data. As the model is re-trained, it can result in new behavior. Although this adaptability to changing conditions is a major benefit of the technology, it needs to be monitored by people who can identify emergent bad behavior.

Artificial Intelligence and Machine Learning technology are able to automate virtually every operation a bank can perform today. This power doesn't come free and it will require technology resources from your organization to integrate it into existing systems and replace the work that people are performing today. As this technology becomes central to your organization, it is absolutely critical that you are able to understand what the automation systems are doing and that those automation systems are clearly articulating to the humans that interface with them how they are making their decisions.

Excellent human interfaces are key to unlocking the power of the 4th industrial revolution in your company!

David Talbot is the director of Architecture & Strategy at a leading digital bank. He has almost two decades of innovation across many startups and has written extensively on technology. 2016-07-30 00:00 David Talbot

22 The Value of Doing Right: A Look at the SiriKit API Demoware

When Siri was first introduced, people thought it was much smarter than it actually is. I heard kids giggling for hours, asking it silly questions. In effect, Siri was good for executing Web searches by voice and giving sassy answers to questions about itself. Neat trick, but not very sophisticated. After a few months, most people quit using Siri because, honestly, it just wasn't that practically useful.

The Amazon Echo was widely mocked when it was introduced. Who is going to pay $200 for a speaker? It became a surprise smash hit, not because people needed another speaker but because it had an extensible API that allowed 3rd party developers to code new capabilities for it. It quickly found multiple unserved niches, particularly in home automation. "Alexa, turn off the lights." People who own Echos almost universally say they use it every single day and find it has become an integral part of their experience at home.

The core difference between these two experiences is the existence of an API. The Echo has thousands of 3rd party developers thinking up new ideas for the platform and teaching it new skills, and Siri has Apple. A 3rd party developer who wants to make their app work with Siri has no option other than to index their app and hope it comes up as a search result on a Siri voice search.

There was a brief glimmer of hope recently when Apple introduced SiriKit. Finally, Apple was going to make it possible for 3rd party developers to integrate their apps with Siri! Not so fast, enterprising developers… SiriKit only supports about a dozen canned interactions. They support ride booking (for example, book an Uber), person-to-person payments (send $20 to a friend on Venmo), starting and stopping a workout, and some basic CarPlay commands. Although this is some progress, this canned set of actions merely opens up a handful of possibilities for Siri. Apple is still a first-class citizen when it comes to integrating its own apps with Siri, and the 3rd party marketplace is relegated to 3rd class citizens in steerage.

Many of the limitations on integration with virtual assistants boil down to privacy concerns. Google Now reads all of my messages to provide me with helpful information; I don't want every app I install on my phone to start reading my email, too.

As a result of these privacy concerns, the better virtual assistant APIs are currently limited to letting you register your app for action commands. Actions, Cortana, and Amazon all allow you to define phrases that your application can execute on. This is a good start, and it allows for a reasonable level of integration with these virtual assistant platforms.

Being able to register for context is half of the battle. The platforms with action APIs will allow you to register for a command like, "Send flowers to Mom," and activate your flower ordering app. The problem is that the app doesn't know who your Mom is even though Google does. The user's intent in this case is clearly to share your mother's name and address with the flower ordering app.

To make virtual assistants truly useful for end-users, these platforms need a way to integrate with 3rd party applications that includes context without putting people's data at risk. I would propose that this could be done by allowing apps a richer method of registering not only the action commands they can respond to but also the context they need to deliver on the user's action.

For example, you could register your car insurance company as subscribed to topics about insurance, cars, and household budgeting. Within each of these topics, you would define moments in natural-language terms; a phrase like "if the user is in a car accident" would define the broad topic areas that are relevant to your application. If these topic areas are triggered, the virtual assistant platform could pass a pre-defined set of context information that is relevant to this experience, such as the type of car being considered for purchase. Within these topics, your application could define the more specific actions that it can handle using that general context.

If the air bags deploy, the insurance assistant could proactively pipe in and ask if you'd like a claims agent to meet you or, in the case of Google Now, put a card at the top of your list with a button to summon an insurance agent.

Real magic can happen if virtual assistants start allowing 3rd parties to collaborate to deliver more value to the customer. In a household budgeting scenario, for example, multiple apps could collaborate to provide more information than any one company could do by itself: your bank, credit card company, wealth advisor, insurance, cable, telephone, and so forth, all have a piece of your household's budgetary picture. The problem then arises with making all of these companies behave more in the interest of the user than themselves.

Each company is incented to push themselves to the forefront. The insurance company wants to sell car insurance, the wealth management company wants you to put more money under their management, and the cable company wants you to expand your channel line-up. If you asked your assistant to help you understand your budget, each of these providers screaming at you to sign up for more services would hardly be helpful.

As a result of the need to drive this collaboration, virtual assistant platforms will need to evolve to allow 3rd party applications to describe the services they can perform in a situation like this. The virtual assistant can provide the appropriate context, and the 3rd party application can describe what it can do for that context. The virtual assistant will then need to decide which of the various 3rd party applications has the most relevant input for the current need.

To create a true virtual assistant platform that can unlock the power of the entire marketplace, 3rd party applications need:

This could potentially require more abstract reasoning than simpler assistants like Siri can currently muster. The more advanced recognition systems like Watson would have no trouble assembling these pieces. It's past time to open up virtual assistant APIs. New entrants like Viv are going to eat the lunch of these closed platforms. Truly open APIs allow a marketplace of innovation that is broader than a dozen canned possibilities to create amazing, surprising, and memorable experiences. 2016-07-30 00:00 David Talbot

23 10 Open Source Tools for Developers

According to the 2016 Future of Open Source Survey from Black Duck Software , 65 percent of organizations use open source software, and development tools are the third most common type of open source software used by businesses (after operating systems and ). As Lou Shipley, president and CEO of Black Duck notes in the report, "Simply put, open source is the way applications are developed today. " This slideshow features ten noteworthy open source development tools. It includes version control systems, integrated development environments (IDEs), text editors, and Web and mobile development frameworks. All are regularly used by developers to create new applications.

2016-07-30 00:00 Cynthia Harvey

24 Bodhi 4.0.0 Distro Enters Development, Alpha Out Now Based on Ubuntu 16.04 LTS

Bodhi 4.0.0 Alpha is right on schedule, according to Mr. Hoogland, and it marks the start of the development cycle of the upcoming GNU/Linux distribution built around the lightweight and modern Moksha desktop environment, a continuation of the Enlightenment 17 window manager.

The new build is based on the latest technologies and software updates from the Ubuntu 16.04 LTS (Xenial Xerus) operating system, but it's in no way stable enough to be used as your daily driver. Bodhi 4.0.0 Alpha is, as its name suggests, an Alpha-quality release, and it should be treated as such by those attempting to download it.

"If all goes according to plan we will have something stamped as stable before September hits," says Jeff Hoogland in the release announcement. "I would encourage anyone wanting to write a review to wait to do so until our stable release. If you are not someone who is interested in helping find issues, please wait as well. "

The final Bodhi 4.0.0 release might hit the shelves in September, but, until then, you are urged to test drive the Alpha release, as well as the next development milestone, which might very well be a Beta version, and report any issues you might find. However, please note that Bodhi 4.0.0 Alpha is only available for 64-bit computers.

Jeff Hoogland promises that the September stable release of Bodhi 4.0.0 will include support for 32-bit PAE and non-PAE platforms as well. Until the Bodhi 4.0.0 Beta is made available for public testing next month, we recommend that you download the Bodhi 4.0.0 Alpha Live ISO right now via our website. 2016-07-29 23:59 Marius Nestor

25 X.Org Server 1.18.4 Brings over 60 Improvements to GNU/Linux Operating Systems

As usual, Adam Jackson was the one to make the announcement, and it looks like X.Org Server 1.18.4 comes approximately three and a half months after the release of the previous maintenance version, X.Org Server 1.18.3, promising to add lots of backports from the devel branch, primarily in XWayland, Glamor, and Kernel Mode Setting (KMS).

However, looking at the internal changelog, we can notice that X.Org Server 1.18.4 introduces improvements for several other drivers and components, including, but not limited to, XQuartz, RandR, x86emu, XFree86, KDrive, xf86Crtc, EXA, GLX, DIX/PTraccel, XKB, as well as Xi.

Of course, we always recommend that you keep your GNU/Linux operating system up to date with the latest software releases we announce here on this website and elsewhere, so the smartest move for you right now is to update your distribution to the X.Org Server 1.18.4 release as soon as possible.

X.Org Server 1.18.4 has already landed in the main software repositories of various popular OSes, including Arch Linux and Solus, but you can also install it manually by downloading its sources right now via our website. If you're curious to know exactly what has been changed, we recommend that you check out the full changelog.

In the meantime, we recommend checking out our Linux news section for other software releases you may have missed lately, especially Linux kernel releases, since the kernel is the most important component of a GNU/Linux operating system. Download X.Org Server 1.18.4. 2016-07-29 23:10 Marius Nestor

26 AT&T users can finally upgrade their Nokia Lumia 830 to Windows 10 Mobile

Today is the last day that you can upgrade your PC to Windows 10 for free, and unsupported devices can no longer use the Insider Preview to upgrade to Windows 10 Mobile, but some devices are still being approved for the upgrade. AT&T has now approved its variant of the Nokia Lumia 830. Of course, the Nokia Lumia 830 has always been an officially supported device - users of the unlocked model have been enjoying Windows 10 Mobile since March. For carrier-locked models, however, the upgrade must be approved.

This will be the third of AT&T's four devices that are on the list of officially supported Windows phones. The Lumia 640 was offered an upgrade at the beginning of June. Later in the month, Lumia 1520 users would have the opportunity. The only phone left for AT&T to upgrade now is the Microsoft Lumia 640 XL.

If you want in on Windows 10 Mobile, you'll need to opt-in through the Upgrade Advisor app. Once you do, check for updates through Settings as you would normally do. Source: AT&T via Windows Central 2016-07-29 21:54 Rich Woods

27 LulzSec Member Reveals More Details About GCHQ Covert Operations

Mustafa Al-Bassam's report is based on first-hand experience as part of the LulzSec crew attacked by JTRIG, on documents leaked by Edward Snowden in 2014, and on his own research on the subject; Al-Bassam made the jump from hacker to security consultant this past March.

JTRIG's operations came to light in 2014, when The Intercept published a set of documents from the massive Edward Snowden leak.

These documents revealed the existence of JTRIG as a special unit inside GCHQ , tasked with carrying out social engineering attacks meant to infiltrate and gather intelligence on online hacktivism crews.

These documents showed that JTRIG and the GCHQ launched DDoS attacks on various IRC servers that groups like LulzSec and Anonymous used to plan operations.

Al-Bassam today revealed that JTRIG used a URL shortener for many different operations, including one to compromise a fellow hacker named P0ke.

He says the agency set up the Lurl.me URL shortening service, and used it to mask links to various sites. They used Lurl.me to hide malicious sites that delivered code that helped the GCHQ deanonymize P0ke's location and identity.

Al-Bassam says this happened in 2010. Based on his research, the Lurl.me service existed online between 2009 and 2013.

His research uncovered a plethora of tweets containing Lurl.me links. There were two main sets of Twitter campaigns that involved this URL shortening service. The first took place in 2009, during the Iran elections, something confirmed by the leaked Snowden documents, which read:

“ [T]he Iran team currently aims to achieve counter-proliferation by: (1) discrediting the Iranian leadership and its nuclear programme; (2) delaying and disrupting access to materials used in the nuclear programme; (3) conducting online HUMINT; and (4) counter-censorship. ”

Al-Bassam says that JTRIG employees managed several social media accounts that were tweeting out dissident material to discredit Iran's leadership, all using the Lurl.me service.

The @2009iranfree account was the most active. The account still exists today but ceased all activity in 2009, with Lurl.me links still visible on its timeline at the time of writing.

After trying to influence public opinion in Iran using Twitter, JTRIG seems to have stopped using lurl.me for an entire year, with no lurl.me links being spotted in 2010 at all. Al-Bassam says that new links popped up all of a sudden in 2011, just in time for the Arab Spring demonstrations in Syria. This time, most of the tweets came from @access4syria , a very active account.

"The account was only active between May and June 2011, and only tweeted between 9 AM and 5 PM UK time on Monday to Friday," Al-Bassam notices. This time around, agents also used Blogspot to run as well, JTRIG taking a position against the Assad regime.

In both campaigns, the tweets containing Lurl.me links were advertised as a way to read materials blocked by the regime. In both campaigns, users engaged with the accounts, and many quoted or retweeted the links. The tweets on this account kept promoting a Blogspot article that advertised two proxies for Syrians to use and access the Internet in case it was blocked.

Al-Bassam makes the connection between these proxies and the GCHQ MOLTEN-MAGMA hacking tool, a CGI HTTP proxy with the ability to log all traffic and perform HTTPS MitM attacks, snooping on encrypted traffic.

Curiously, the Lurl.me URL shortener shut down in 2013 after Snowden started leaking all sorts of NSA and GCHQ documents. 2016-07-29 21:00 Catalin Cimpanu

28 Microsoft reiterates its response to Tim Sweeney's attack on UWP

Earlier this week, an interview with Tim Sweeney in Epic magazine was published, in which the Gears of War developer once again expressed his skepticism of the Universal Windows Platform.

Sweeney went a bit further than he did back in March when he made his original accusations. Back then, he said that UWP is a walled garden and that it forces users to get their apps from the Windows Store. Of course, this wasn't true, and Microsoft responded saying as much. This time, the Epic Games co-founder said that Microsoft will gradually phase out Win32, and once it does, it's a small step to force everyone to get their apps from the Windows Store. He also stated that over the next five years, Microsoft will release a series of updates that will make Steam "worse and more broken", until "people are so fed up that Steam is buggy that the Windows Store seem like an ideal alternative. "

Microsoft responded with the following statement (via GameInformer ):

The company clearly took the high road, sticking to facts instead of commenting on the conspiracy theory that it will basically break Win32. It's a similar statement to the one that was released back in March, which was tweeted by Xbox head Phil Spencer.

Source: GameInformer via WinBeta | Image via Gameranx 2016-07-29 20:10 Rich Woods

29 The future of Node.js: Stable, secure, everywhere

Server-side JavaScript platform Node.js remains on the rise in enterprise IT, as its usage has been doubling every year for four years now, according to the Node.js Foundation. Now, developers overseeing Node's future are mapping out future priorities like stability and security, and they're exploring threaded workloads and engagement with the JavaScript language itself.

Looking to spread Node "everywhere," proponents are pushing for increased adoption across servers and the desktop as well as the internet of things, according to a presentation by Rod Vagg, member of the Node Technical Steering Committee, at this week's Node Summit conference in San Francisco.

Node developers endeavor to improve stability of Node releases, Vagg said. "Unfortunately, this branch that we have called 'current' -- we used to call it stable -- is not quite as stable as we'd like to be," he said. Too many breakages and regressions have been slipping into releases, he said, and while the situation is not "terrible," it needs improvement.

Planned language enhancements include Zones, which make it easier to write asynchronous code. Threaded workloads, meanwhile, could be implemented in Node akin to browsers supporting Web Workers, which run web content in scripts in background threads.

Proponents also want to improve the relationship between Node and the ECMA TC39 committee, which develops the ECMAScript specifications underlying JavaScript. For example, Node could accommodate ECMAScript's promises capability, which helps asynchronous communications. "There are some ways that promises work that work against the way that some people use Node," Vagg noted. Node could also implement low-level JavaScript features like tail calls , which could impact debugging. Node also may see a shifting philosophy related to its use of HTTP, including adherence to HTTP/2. New APIs may be needed for HTTP/2, though, and Node's loose approach to HTTP has been a source of security issues.

To further address security, Node's developers want to clarify Node's security policies and adhere to them strictly, while supporting a growing ecosystem of security service providers, including Lift, Node Security, and Snyk. Node has recently faced security issues like denial-of-service and out-of-bounds vulnerabilities.

Plans also call for more rapid upgrades of the V8 engine underlying Node while maintaining ABI stability, Vagg said. Also, to provide add-on stability and multivirtual machine experimentation, Node proponents are exploring development of a new C++ API compatibility layer. Google's V8 JavaScript engine currently is the VM of choice for Node, but Microsoft hopes to change that with a planned standard interface.

2016-07-29 20:00 www.computerworld

30 Microsoft adds an API gallery in the Windows App Studio July update

The July 2016 update for the Windows App Studio is now available, and it includes a number of new features. You might recall that last month, Microsoft introduced REST API support for the service. Some of today's features build on top of that.

Here's a summary of what's new today:

If the REST API data source was a bit too confusing for you (after all, App Studio is made for people with minimal developer skills), that's where the API gallery comes in. The gallery will aggregate different implementations of the REST API source, which are submitted by users of the service.

For example, the REST API data source is completely open. If you find one that you want to use, implement it, and it works, you can share it with the rest (pun intended) of the community.

If you've ever thought about making a Windows app but don't have the developer skills - or even the time - you might want to consider giving Windows App Studio a try. They continue to add new features every month or two.

Source: Windows Blog 2016-07-29 19:40 Rich Woods

31 Alleged Render of Google Nexus Smartwatches Angelfish and Swordfish Leaks

Rumors say that Google is working on creating two Nexus smartwatches, and renders of the two devices have leaked online. The image surfaced at Kitguru, but there's no confirmation that these are indeed the two Nexus smartwatches that were rumored a while back.

The two smartwatches reportedly bear the code names Angelfish and Swordfish, which would make sense since code names for Nexus devices tend to be fish names. The dimensions of the two smartwatches were revealed recently.

The Nexus Angelfish smartwatch is said to be 0.55 in (14mm) thick and measure 1.71 inches (43.5mm) in diameter. Angelfish will be the high-end model, as it will have a larger display and a heart rate monitor. Rumors say that it will also come with LTE and GPS integrated. The smartwatch could have the potential to be used as a standalone device, with apps integrated in Android Wear 2.0. In May, Google announced standalone Wear apps for Wear 2.0 at Google I/O.

Next in line is Swordfish, which will measure 1.65 in (42mm) in diameter and 0.41 in (10.6mm) in thickness. It will also come with basic notification and communication features. The smartwatch appears to have a single button centered on the right side of the body, and the center of the button seems to be made of polished metal.

Rumors point to the fact that the Nexus Swordfish smartwatch could be made available in silver, titanium and rose gold colors. The smartwatch will be compatible with Google MODE bands, but Angelfish won't have this flexibility.

Nexus Angelfish and Swordfish could be released this fall and the two smartwatches are expected to come with Android Wear 2.0. 2016-07-29 19:22 Alexandra Vaidos

32 Forget the deadline, you'll still be able to get Windows 10 for free, and this is how

Today, July 29, is the last day for users of Windows 7 and 8.1 to upgrade free of charge to Windows 10. If you don’t take up the offer in time, and you decide you do want to upgrade after all, the only option will be to buy a copy of the OS.

But hold on. That’s not entirely true. If you want Windows 10 after the deadline has expired, you’ll still be able to get it for free, legally, and doing so couldn’t be easier.

Three months ago, Microsoft announced that customers with accessibility needs who use assistive technologies would be able to continue to get hold of Windows 10 for free, and today the company launches an upgrade site to make that possible.

The new site explains that:

To upgrade for free, all you have to do is click the Upgrade Now button. By doing so you are confirming that you use assistive technologies, but Microsoft isn’t asking for proof.

Now, let's address the elephant in the room here. By doing this you will, essentially, be lying about having a handicap in order to blag a free copy of Windows 10. If you're fine with that, then click away. However, if doing so will make you feel bad, or go against your personal moral code, then you should purchase a copy instead. We're not recommending you lie, we're just pointing out that this site exists, and can be used to get Windows 10 for free. Are we clear? Good.

Once you click the button the Windows 10 Update Assistant will download and you can start the upgrade process.

The upgrade offer extension isn’t indefinite, and there’s no word -- at the moment -- when the extension will end. Microsoft says "We have not announced an end date of the free upgrade offer for customers using assistive technology. We will make a public announcement prior to ending the offer".

As no doubt a lot of people who don’t use assistive technologies will take advantage of this offer, it’s possible Microsoft may make the decision to end the extension relatively early. That said, the software giant is still keen to build up usage numbers for its operating system, and customers getting Windows 10 by a sneaky means after the deadline has passed are still customers after all. With that in mind, I wouldn't be surprised if Microsoft allows the offer to run for some time yet...

Photo Credit: murielbuzz / Shutterstock 2016-07-29 19:03 By Wayne

33 Kyocera DuraForce Pro Might Arrive at T-Mobile in Q4

The image leaked over at TechnoBuffalo shows that the smartphone will follow design guidelines in the series and come with a rugged case to protect it from scratches and potential drops. Kyocera DuraForce Pro might also come with a dual-camera setup on the back, as the leaked image would indicate.

The upcoming Kyocera DuraForce Pro might come with IP68 certification, which would provide the smartphone with protection from water, dust, drops, solar radiation, thermal shock, salt, and humidity, so that it would function in extreme conditions without its components being affected. In fact, Kyocera's Digno rafre smartphone had a back cover with the capacity to self-heal from micro scratches.

The post also reveals that one of the rear-facing cameras on the smartphone was created for underwater photography, allowing wide-angle images and videos to be taken. Moreover, the second camera module is said to come with a 13MP sensor. In addition, the smartphone will have a front-facing camera with a resolution of 5MP.

Kyocera DuraForce Pro could draw power from a 3,420mAh battery. The smartphone appears to run an Android version with a UI that's very similar to stock Android, which means that the company didn't make many changes to the phone's software. The post mentions that the smartphone could come with a fingerprint sensor, but it isn't visible in the image.

There's no information on the smartphone's price, but the previously launched smartphone in the series, the DuraForce XD came with a price of $449.99 off contract at T-Mobile. The new phone could have a similar price, but that also depends on its other specs. 2016-07-29 18:54 Alexandra Vaidos

34 Famous American blogger strikes back against China

A few weeks ago I published a column here about online journalism. You may remember it from the picture of Jerry Seinfeld which I am using again here. While I have many readers in China, my work isn’t normally distributed there so I was surprised when a reader told me that column had been translated almost in its entirety and republished on a Chinese web site. How should I feel about this?

I might be flattered or I might be angry. Certainly the translation was not authorized by me and I received no payment for it. It goes far beyond the 250-word excerpt that is the day-to-day definition of Fair Use, so it is a copyright violation. But the worst part, if the translation is to be believed, is that it doesn't represent very well the ideas I was trying to present. Yet, having used my name and attributed the work to me, they are claiming this is what I wrote.

Is it? Not willing to accept Google Translate, I’d like one or more of my regular Chinese readers to have a look at the Chinese column and let us know your opinion of the translation. Please share your comments below.

Did I really write this? " Online news is rubbish", saying it is easy, but there will always be some "junk news", before we call the "fast-food news". Previous "news fast food" and are now "junk news" The only difference is that the latter now joined analyzed".

I know my writing is not easy to translate. I used to write a column for ASCII Magazine in Japan and my translator there said I gave her the most difficulty. My only other experience of being published in China (other than the Mandarin and Cantonese translations of my book Accidental Empires) was actually writing English, though in China. I spent the summer of 1982 working in Beijing as an editor at China Daily, back then China's only English language newspaper.

If you have visited Beijing this century you wouldn't recognize the Beijing of 1982. I was convinced for a while that I was the tallest man in China.

The reporters who worked for me had been my journalism students the year before at Stanford University. China was wary of allowing western publications into the country yet felt the need to supply news to tourists (there weren’t many) and business travelers, so China Daily was born. The founding editor, who was a Chinese native, had graduated in the same class as my mother from the University of Missouri in 1944.

Most of my Chinese students had been imprisoned during the Cultural Revolution. Some of them had to first build their prisons in the countryside then stayed in them for up to nine years. After that period ended with the fall of the Gang of Four, my students reentered society, eventually becoming teachers of English. Since there was no journalistic tradition in Communist China comparable to the West, the Chinese government decided to turn English teachers into journalists.

They were all very smart and journalism was easy for my Chinese students once I got them to overcome their fear of offending. Since we were writing for an English-speaking audience I also had to fight against a Chinese style they likened to a spiral that kind of wandered into the story leaving almost all details for the very end -- the exact opposite of western news writing.

What my Chinese students found to be a real challenge was learning to drive. Despite their Stanford education, what they all valued far more was returning to China with a California driver’s license. At that time nearly all Chinese drivers were in the military so to be a civilian who could drive a car, well that was a big deal -- far bigger than being a reporter.

Somehow I doubt that news.163.com (the Chinese web site) will translate this column. 2016-07-29 18:52 By Robert

35 Security pros find it hard to measure ROI on spending

The majority of IT security experts actually struggle to measure the return on investment in security measures, Tenable Network Security says.

Based on a survey of 250 IT security professionals, conducted during the Infosecurity Europe 2016 summit, it says that the majority can only measure the return on less than 25 percent of their security spend.

What’s more, just 17 percent were confident their investments were being distributed properly.

"It’s undisputed that security is one of the top priorities for organizations across the globe", says Gavin Millard, EMEA technical director, Tenable Network Security. "However, our research revealed that many organizations struggle to accurately measure the return on IT investment and have little confidence that the money is being used effectively. This lack of accountability creates a gap between the security team and the c- suite, leaving the organization vulnerable".

"The security team needs to understand the business needs of the organization, define and map security requirements based on those needs, collect relevant metrics and measure their success", says Millard. "This is one of the best ways to not only demonstrate the value of IT, but also ensure security across the entire IT environment".

Tenable also asked 33 security experts how they justify their security programs to business executives and the boardroom. Collected recommendations, as well as best practices, can be found in the Using Security Metrics to Drive Action ebook.

Published under license from ITProPortal.com, a Net Communities Ltd Publication. All rights reserved.

Photo Credit: xavier gallego morell / Shutterstock 2016-07-29 18:51 By Sead Total 35 articles. Generated at 2016-07-30 18:01