News & Events

Items filtered by date: June 2014

Google is using machine learning and artificial intelligence to wring even more efficiency out of its mighty data centers.

In a presentation today at Data Centers Europe 2014, Google’s Joe Kava said the company has begun using a neural network to analyze the oceans of data it collects about its server farms and to recommend ways to improve them. Kava is the Internet giant’s vice president of data centers.

In effect, Google has built a computer that knows more about its data centers than even the company’s engineers. The humans remain in charge, but Kava said the use of neural networks will allow Google to reach new frontiers in efficiency in its server farms, moving beyond what its engineers can see and analyze.

Google already operates some of the most efficient data centers on earth. Using artificial intelligence will allow Google to peer into the future and model how its data centers will perform in thousands of scenarios.

In early usage, the neural network has been able to predict Google’s Power Usage Effectiveness with 99.6 percent accuracy. Its recommendations have led to efficiency gains that appear small, but can lead to major cost savings when applied across a data center housing tens of thousands of servers.
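
For context, Power Usage Effectiveness (PUE) is the industry's standard efficiency metric: the ratio of total facility energy to the energy actually delivered to IT equipment, so a value of 1.0 would mean every watt goes to computing. A quick illustration (the figures here are made up for the example):

```latex
% PUE: total facility energy divided by IT-equipment energy.
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}},
\qquad \text{e.g.}\quad \frac{1.12\ \text{MW}}{1.00\ \text{MW}} = 1.12
```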

Why turn to machine learning and neural networks? The primary reason is the growing complexity of data centers, a challenge for Google, which uses sensors to collect hundreds of millions of data points about its infrastructure and its energy use.

“In a dynamic environment like a data center, it can be difficult for humans to see how all of the variables interact with each other,” said Kava. “We’ve been at this (data center optimization) for a long time. All of the obvious best practices have already been implemented, and you really have to look beyond that.”

Enter Google’s ‘Boy Genius’

Google’s neural network was created by Jim Gao, an engineer whose colleagues have given him the nickname “Boy Genius” for his prowess analyzing large datasets. Gao had been doing cooling analysis using computational fluid dynamics, which uses monitoring data to create a 3D model of airflow within a server room.

Gao thought it was possible to create a model that tracks a broader set of variables, including IT load, weather conditions, and the operations of the cooling towers, water pumps and heat exchangers that keep Google’s servers cool.

“One thing computers are good at is seeing the underlying story in the data, so Jim took the information we gather in the course of our daily operations and ran it through a model to help make sense of complex interactions that his team – being mere mortals – may not otherwise have noticed,” Kava said in a blog post. “After some trial and error, Jim’s models are now 99.6 percent accurate in predicting PUE. This means he can use the models to come up with new ways to squeeze more efficiency out of our operations.”

How it Works

Gao began working on the machine learning initiative as a “20 percent project,” a Google tradition of allowing employees to spend a chunk of their work time exploring innovations beyond their specific work duties. Gao wasn’t yet an expert in artificial intelligence. To learn the fine points of machine learning, he took a course from Stanford University Professor Andrew Ng.

Neural networks mimic how the human brain works, allowing computers to adapt and “learn” tasks without being explicitly programmed for them. Google’s search engine is often cited as an example of this type of machine learning, which is also a key research focus at the company.

“The model is nothing more than a series of differential calculus equations,” Kava explained. “But you need to understand the math. The model begins to learn about the interactions between these variables.”

Gao’s first task was crunching the numbers to identify the factors that had the largest impact on the energy efficiency of Google’s data centers, as measured by PUE. He narrowed the list down to 19 variables and then designed the neural network, a machine learning system that can analyze large datasets to recognize patterns.
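
To make the setup concrete, here is a minimal sketch of the same idea in Python: a small feedforward network trained on historical sensor snapshots to predict PUE. The 19 feature slots, the synthetic data, and the scikit-learn architecture are illustrative assumptions, not Google's actual variables, dataset, or model.

```python
# Minimal sketch: a small feedforward neural network regressor trained on
# historical sensor snapshots to predict PUE. The 19 features, synthetic
# data, and architecture are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
N_FEATURES = 19                      # e.g. IT load, outdoor temp, setpoints...
X = rng.normal(size=(5000, N_FEATURES))
# Synthetic target: PUE near 1.1 with a mild nonlinear dependence plus noise.
y = (1.1 + 0.02 * np.tanh(X[:, 0]) + 0.01 * X[:, 1] * X[:, 2]
     + 0.005 * rng.normal(size=len(X)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),                          # sensor channels use mixed units
    MLPRegressor(hidden_layer_sizes=(50, 50),  # two small hidden layers
                 max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```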

“The sheer number of possible equipment combinations and their setpoint values makes it difficult to determine where the optimal efficiency lies,” Gao writes in the white paper on his initiative. “In a live DC, it is possible to meet the target setpoints through many possible combinations of hardware (mechanical and electrical equipment) and software (control strategies and setpoints). Testing each and every feature combination to maximize efficiency would be unfeasible given time constraints, frequent fluctuations in the IT load and weather conditions, as well as the need to maintain a stable DC environment.”
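
This is exactly where a fast, accurate surrogate model earns its keep: candidate setpoint combinations can be screened against the model's predicted PUE instead of being tried on live equipment. A hedged continuation of the sketch above, in which treating two feature slots as "controllable setpoints" is purely an assumption for illustration:

```python
# Continuing the sketch above: screen candidate setpoint combinations against
# the trained surrogate instead of testing them on live equipment. Treating
# feature slots 0 and 1 as "controllable setpoints" is an assumption.
import itertools

snapshot = X_test[0].copy()                 # one snapshot of current conditions
cw_temps = np.linspace(-1.0, 1.0, 11)       # candidate condenser-water temps
tower_stages = np.linspace(-1.0, 1.0, 5)    # candidate cooling-tower staging

best_pue, best_setting = float("inf"), None
for cw, stage in itertools.product(cw_temps, tower_stages):
    candidate = snapshot.copy()
    candidate[0], candidate[1] = cw, stage   # hypothetical setpoint slots
    pred = model.predict(candidate.reshape(1, -1))[0]
    if pred < best_pue:
        best_pue, best_setting = pred, (cw, stage)

print(f"Best predicted PUE {best_pue:.4f} at setpoints {best_setting}")
```

In practice any promising candidate would still be checked against operating constraints and, as Kava notes below, reviewed by engineers before anyone acts on it.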

Runs On a Single Server

As for hardware, the machine learning doesn’t require unusual computing horsepower, according to Kava, who says it runs on a single server and could even work on a high-end desktop.

The system was put to work inside several Google data centers. The machine learning tool suggested several changes that yielded incremental improvements in PUE, including refinements in how data center loads are migrated during power infrastructure upgrades, and small changes in water temperature across several components of the chiller system.

“Actual testing on Google (data centers) indicates that machine learning is an effective method of using existing sensor data to model DC energy efficiency and can yield significant cost savings,” Gao writes.

The Machines aren’t Taking Over

Kava said that the tool may help Google run simulations and refine future designs. But not to worry — Google’s data centers won’t become self-aware anytime soon. While the company is keen on automation, and has recently been acquiring robotics firms, the new machine learning tools won’t be taking over the management of any of its data centers.

“You still need humans to make good judgments about these things,” said Kava. “I still want our engineers to review the recommendations.”

The neural networks’ biggest benefits may be seen in the way Google builds its server farms in years to come. “I can envision using this during the data center design cycle,” said Kava. “You can use it as a forward-looking tool to test design changes and innovations. I know that we’re going to find more use cases.”

Google is sharing its approach to machine learning in Gao’s white paper, in the hope that other hyperscale data center operators may be able to develop similar tools.

“This isn’t something that only Google or only Jim Gao can do,” said Kava. “I would love to see this type of analysis tool used more widely. I think the industry can benefit from it. It’s a great tool for being as efficient as possible.”

 
Published in Technology

Adobe is continuing its full-court press to convince photographers to move to its Creative Cloud subscription-based licensing model. Today’s announcement of Creative Cloud 2014 marks its biggest effort yet. New features in Photoshop, lots of new mobile goodies, and a permanent discounted subscription for photographers were highlighted by Adobe as it rolled out its newly branded 2014 edition of its Creative Suite for the Cloud.

Photoshop 2014: Path-based blurs and focus-based selection are headline features

While Adobe plans to continue to roll out incremental improvements to its Suite as they are ready, it has decided to provide annual milestone releases to make it easier for plug-in developers to have known release numbers for testing. Today’s CC 2014 launch features updates to all 14 Creative Suite applications, but two new features in Photoshop CC will be of the most interest to photographers — path-based blurs and focus-based selections.

Adobe has previously provided a variety of tools for creative blurring of an image, including simulated motion, but Photoshop 2014 takes the capability to a new level. Motion blurs can be made along a line, a radius, or just about any path that can be constructed with Photoshop’s curve tools. So in addition to simple motion, like a vehicle moving in a straight line, it is possible to mimic spinning wheels or even a vehicle on a swerving path.
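
Adobe has not published how its path blurs are implemented, but one crude way to approximate the effect is to average copies of the image shifted along sample points of the desired path. A minimal sketch under that assumption, using SciPy and a made-up S-shaped path:

```python
# Crude path-based motion blur: average copies of the image shifted along
# sample points of the desired path. A rough approximation for illustration,
# not Adobe's algorithm.
import numpy as np
from scipy.ndimage import shift as nd_shift

def path_blur(gray, offsets):
    """Blur a 2-D grayscale image along `offsets`, an (N, 2) array of (dy, dx)."""
    acc = np.zeros_like(gray, dtype=float)
    for dy, dx in offsets:
        acc += nd_shift(gray, shift=(dy, dx), order=1, mode="nearest")
    return acc / len(offsets)

# Offsets sampled from a gentle S-curve, mimicking a swerving motion path.
t = np.linspace(0.0, 1.0, 25)
swerve = np.column_stack([4 * np.sin(2 * np.pi * t), 20 * t])

image = np.random.rand(128, 128)     # stand-in for a grayscale photo
blurred = path_blur(image, swerve)
```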

Creating selections based on focus is also new in Photoshop 2014. You can tell Photoshop to select only the areas that are in focus, and use that selection to create a mask for other commands. The magic is far from perfect, so you can of course refine the selection further with the usual set of Adobe tools. It worked quite well on the demo images Adobe chose, which featured an in-focus subject in the foreground against a distant, out-of-focus background. We’ll see how accurate it is on real-world images now that the production version has been released.
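
Adobe has not detailed its focus-detection method either. A common generic stand-in is to treat local contrast, such as the magnitude of the Laplacian, as a focus measure and threshold it into a selection mask. A rough sketch of that textbook approach (the window size and quantile are arbitrary choices):

```python
# Generic focus-based selection: estimate local sharpness with the Laplacian
# (a common focus measure), smooth it, and threshold it into a mask. This is
# a textbook stand-in, not Adobe's implementation.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_mask(gray, window=15, quantile=0.8):
    """Boolean mask marking the locally sharpest regions of a grayscale image."""
    sharpness = uniform_filter(np.abs(laplace(gray.astype(float))), size=window)
    return sharpness > np.quantile(sharpness, quantile)

image = np.random.rand(128, 128)     # stand-in for a grayscale photo
mask = focus_mask(image)             # True where the image is sharpest
```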

Photoshop 2014 now has “experimental features,” including touch and high-dpi support for Windows

With this release of Photoshop CC, Adobe is also providing an experimental features capability. Users will be able to selectively activate features that otherwise would not have made it into the product. The most exciting of these for Windows users are support for high-dpi displays and for touch gestures. The high-dpi support scales user interface elements by 200%, which will make Photoshop a lot less painful to use on high-resolution laptops and tablets. Touch gesture support includes standard Windows 8 gestures like pinch to zoom, and the new version offers improved stylus support.

Creative Cloud subscribers with an iPhone or iPad will also benefit from a new capability to manage their Adobe assets from their mobile device, using Adobe’s Creative Cloud app for iOS. All these goodies are available for immediate download from Adobe, or by using the integrated Update capability in your Creative Cloud applications.

Adobe is pushing hard into mobile — as long as you own an iPad

Adobe continued to expand its family of Apple-centric mobile products. On the heels of its Lightroom mobile announcement, today brought two new drawing applications for recent iPads — Line and Sketch — along with supporting hardware: the Ink stylus and the Slide digital ruler. The new Photoshop Mix application provides a more casual interface to image editing than Adobe’s existing Photoshop Touch. It, too, is iPad-only, and for Creative Cloud subscribers it can tie into Adobe’s servers to apply sophisticated effects like content-aware fill. Mix can work with any image (except Raw files) from your iPad’s Camera Roll or from your Adobe cloud storage, and it can even open individual layers from PSD files if needed. Meanwhile, Lightroom mobile adds support for iPhones from the 4S onward, along with star ratings for images.
 

Photographer subscriptions are here to stay

Adobe has finally settled on a permanent subscription plan for photographers. For $10 per month you get Photoshop CC, plus the current version of Lightroom, and a measly 2GB of cloud storage. The requirement of owning a previous Adobe product has been dropped, but so has the capability of creating a Behance website. All in all, $120 a year for both Photoshop and Lightroom is well worth it for anyone who uses either application heavily. Frankly, you may not have a choice if you shoot Raw, as it will only be a matter of time before you need the latest version to get support for a new camera.

This plan addresses the cost issue quite well, but it doesn’t solve the problem of having images locked up in a proprietary Adobe format that requires paying Adobe indefinitely to retain access. Adobe still needs a more coherent strategy for how photographers can gracefully “withdraw” from the Creative Treadmill if and when they become less active but still want access to their images.

Adobe pushing to become a cloud-based platform

Under the covers of Adobe’s new mobile tools is a yet-to-be-released Creative SDK. Adobe claims it will let developers harness the power of the Creative Suite from mobile devices in their own applications (assuming customers have a subscription). Photoshop Mix is designed to be a showcase for what is possible using the Creative SDK — like allowing iPad users to run sophisticated image filters on Adobe’s servers while seeing the results right on their tablet.

Putting together all of Adobe’s initiatives, it is clear that the company wants to extend its dominance in creative tools to the cloud, and to become as successful a platform company as it has been a tools company. Its mobile apps are a great start, and if the Creative SDK is well received it could gain traction with the broader community of content creators. Adobe still needs to figure out the storage side of the equation: the ridiculously small amount of storage provided with its subscriptions is nearly useless for serious work, and pales in comparison to the sizable offerings from other cloud vendors. It will also have a hard time getting developer attention if only paying Creative Cloud subscribers can get the full benefit of the Creative SDK. Most mobile app developers aim at the broadest audience possible, so Adobe will likely need to do more tuning on how, and by whom, its cloud can be accessed.

 

 
Published in Technology
 
SAN FRANCISCO: For decades, medical technology firms have searched for ways to let diabetics check blood sugar easily, with scant success. Now, the world’s largest mobile technology firms are getting in on the act.

Apple, Samsung Electronics and Google, searching for applications that could turn nascent wearable technology like smartwatches and bracelets from curiosities into must-have items, have all set their sights on monitoring blood sugar, several people familiar with the plans say.

These firms are variously hiring medical scientists and engineers, asking US regulators about oversight and developing glucose-measuring features in future wearable devices, the sources said. 

The first round of technology may be limited, but eventually the companies could compete in a global blood-sugar tracking market expected to be worth over $12 billion by 2017, according to research firm GlobalData.

Diabetes afflicts 29 million Americans and cost the economy some $245 billion in 2012, a 41% rise in five years. Many diabetics prick their fingers as many as 10 times daily to check levels of a type of sugar called glucose.

Non-invasive technology could take many forms. Electricity or ultrasound could pull glucose through the skin for measurement, for instance, or light could be shone through the skin so that a spectroscope could detect indications of glucose.
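
The spectroscopic route rests, at least in principle, on the Beer–Lambert law, which ties the amount of light absorbed at a given wavelength to the concentration of the absorbing substance. The symbols below are the standard ones, not anything from the companies' plans:

```latex
% Beer–Lambert law: absorbance A relates incident (I_0) and transmitted (I)
% light intensity to the absorptivity, path length l, and concentration c.
A = \log_{10}\frac{I_0}{I} = \varepsilon\, l\, c
\qquad\Longrightarrow\qquad
c = \frac{A}{\varepsilon\, l}
```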

“All the biggies want glucose on their phone,” said John Smith, former chief scientific officer of Johnson & Johnson’s LifeScan, which makes blood glucose monitoring supplies. “Get it right, and there’s an enormous payoff.”

Apple, Google and Samsung declined to comment, but Courtney Lias, director of the US Food and Drug Administration’s chemistry and toxicology devices division, told Reuters that a marriage between mobile devices and glucose-sensing is “made in heaven.”

In a December meeting with Apple executives, the FDA described how it may regulate a glucometer that measures blood sugar, according to an FDA summary of the discussion. 

Such a device could avoid regulation if used for nutrition, but if marketed to diabetics, it likely would be regulated as a medical device, according to the summary, first reported by the Apple Toolbox blog.

The tech companies are likely to start off focusing on non-medical applications, such as fitness and education. 

Even an educational device would need a breakthrough from current technology, though, and some in the medical industry say the tech firms, new to the medical world, don’t understand the core challenges.

“There is a cemetery full of efforts to measure glucose in a non-invasive way,” said DexCom chief executive Terrance Gregg, whose firm is known for minimally invasive techniques. To succeed would require “several hundred million dollars or even a billion dollars,” he said.

Poaching

Silicon Valley is already opening its vast wallet.

Medtronic senior vice president of medicine and technology Stephen Oesterle recently said he now considers Google to be the medical device firm’s next great rival, thanks to its funding for research and development, or R&D.

“We spend $1.5 billion a year on R&D at Medtronic — and it’s mostly D,” he told the audience at a recent conference. “Google is spending $8 billion a year on R&D and, as far as I can tell, it’s mostly R.”

Google has been public about some of its plans: it has developed a smart contact lens that measures glucose. In a blog post detailing plans for its smart contact lens, Google described an LED system that could warn of high or low blood sugar by flashing tiny lights. It has recently said it is looking for partners to bring the lens to market.

The device, which uses tiny chips and sensors that resemble bits of glitter to measure glucose levels in tears, is expected to be years away from commercial development, and skeptics wonder if it will ever be ready. 

Previous attempts at accurate non-invasive measurement have been foiled by body movement, and fluctuations in hydration and temperature. Tears also have lower concentrations of glucose, which are harder to track. 

But the Life Sciences team in charge of the lens and other related research is housed at the Google X facility, home to breakthrough projects such as the self-driving car, according to a former employee who requested anonymity.

Apple’s efforts center on its iWatch, which is on track to ship in October, three sources at leading supply chain firms told Reuters. It is not clear whether the initial release will incorporate glucose-tracking sensors.

Still, Apple has poached executives and bio-sensor engineers from such medical technology firms as Masimo, Vital Connect, and the now-defunct glucose monitoring startup C8 Medisensors. 

“It has scooped up many of the most talented people with glucose-sensing expertise,” said George Palikaras, CEO of Mediwise, a startup that hopes to measure blood sugar levels beneath the skin’s surface by transmitting radio waves through a section of the human body.

The tech companies are also drawing mainstream interest to the field, he said. “When Google announced its smart contact lens, that was one of the best days of my career. We started getting a ton of emails,” Palikaras said.

Samsung was among the first tech companies to produce a smartwatch, which failed to catch on widely. It has since introduced a platform for mobile health, called Simband, which could be used on smart wristbands and other mobile devices.

Samsung is looking for partners and will allow developers to try out different sensors and software. One Samsung employee, who declined to be named, said the company expects to foster noninvasive glucose monitoring. 

Sources said Samsung is working with startups to implement a traffic light system in future Galaxy Gear smartwatches that flashes blood-sugar warnings. 
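
Neither company has published details, but a traffic-light warning of this kind ultimately reduces to thresholding a glucose reading against clinical ranges. A toy sketch in Python; the cutoffs are commonly cited clinical figures, and the function name is made up:

```python
# Toy "traffic light" glucose warning: threshold a reading (in mg/dL) against
# commonly cited clinical cutoffs. The cutoffs and names here are
# illustrative, not details of Samsung's or Google's designs.
def glucose_light(mg_dl: float) -> str:
    if mg_dl < 70:       # commonly cited hypoglycemia threshold
        return "red: low blood sugar"
    if mg_dl > 180:      # commonly cited hyperglycemia threshold
        return "red: high blood sugar"
    if mg_dl > 140:      # elevated, worth a caution
        return "yellow: elevated"
    return "green: in range"

for reading in (55, 95, 150, 210):
    print(reading, "->", glucose_light(reading))
```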

Samsung Ventures has made a number of investments in the field, including in Glooko, a startup that helps physicians access their patients’ glucose readings, and in an Israeli glucose monitoring startup through its $50 million Digital Health Fund.

Ted Driscoll, a health investor with Claremont Creek Ventures, told Reuters he’s heard pitches from potentially promising glucose monitoring startups, over a dozen in recent memory.

Software developers say they hope to incorporate blood glucose data into health apps, which is of particular interest to athletes and health-conscious users. 

“We’re paying close attention to research around how sugar impacts weight loss,” said Mike Lee, cofounder of MyFitnessPal.

After decades of false starts, many medical scientists are confident about a breakthrough on glucose monitoring. Processing power allows quick testing of complex ideas, and the miniaturization of sensors, the low cost of electronics, and the rapid proliferation of mobile devices have given rise to new opportunities. 

One optimist is Jay Subhash, a recently departed senior product manager for Samsung Electronics. “I wouldn’t be at all surprised to see it one of these days,” he said.

 

Published in Technology
Monday, 23 June 2014 07:19

July 2014 Intake

Published in Blog

What is A+ certification?

A+ certification is an industry-wide, vendor-neutral competency test for PC repair technicians.

What types of jobs are available with A+?

A partial list includes: PC repair technician, help desk analyst, desktop support specialist, and computer specialist. Additionally, many non-IT companies require A+ certification for roles such as cable installer, postal equipment installation and repair technician, and telecommunications installer.

Published in Blog

Contact us

SNIT Business School,

Old Moka Road,

Bell Village,

Mauritius

P: (+230) 211 10 92

P: (+230) 213 01 95

Email: Info@snitedu.com
