Authored by Bharti Amlani
Google is ready to take a big leap forward in the speed of its machine learning systems: the company has developed its own custom chip and has been using it in production for over a year.
There were rumors in the industry that Google's engineering team was working under wraps on a chip of its own design, and job ads posted by the company only fueled them. Now, Google has lifted the curtain and revealed the secret to the tech world.
Known as the Tensor Processing Unit, or TPU, the chip is named after TensorFlow, the software Google uses for its machine learning programs. Norm Jouppi, a Google engineer, has referred to it in a blog post as an accelerator chip (a chip designed to speed up specific tasks).
Google CEO Sundar Pichai says, “The TPU provides an order of magnitude better performance per watt than existing chips for machine learning tasks. It’s not going to replace CPUs and GPUs, but it can speed up machine learning processes without consuming a lot more energy.”
Today, machine learning systems are in demand for all types of applications, such as language translation, voice recognition and data analytics. For a company that wants to maintain its pace of innovation, a chip that speeds up these kinds of workloads is essential. Google is optimistic about the chip's efficiency and expects it to serve for three generations, or seven years.
The TPU is in production use across Google’s cloud, powering the RankBrain search result sorting system and Google’s voice recognition services, among other things. When developers pay to use the Google Voice Recognition Service, they’re using its TPUs. Urs Hölzle, Google’s senior vice president for technical infrastructure, said during a press conference at I/O that the TPU can augment machine learning processes, but that there are still functions that require CPUs and GPUs.
He further mentioned that Google started developing the TPU about two years ago. At present, the company has millions of the chips in use. They fit into the same slots used for hard drives in Google’s data center racks, so the company can easily deploy more of them if necessary. However, Hölzle says they don’t need a TPU in every rack just yet. Asked about the possibility of selling the TPU as standalone hardware, Diane Greene, Google’s enterprise chief, said the company isn’t planning to sell it to other companies.
Part of that has to do with the way application development is heading: developers are building more and more applications exclusively in the cloud, and don’t want to worry about managing hardware configurations, maintenance and updates. Another likely reason is that Google is unwilling to give rivals access to a chip it has invested considerable time, money and effort in developing.
Exactly what the TPU is best used for is still a matter of discussion. Market analysts expect the chip to be used for inference, the part of a machine learning workload that doesn’t require as much flexibility as training.
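To make the training-versus-inference distinction concrete, here is a toy sketch in plain Python (not Google's implementation; the weights and inputs are made up). At inference time a model's weights are frozen, so the forward pass can often run at reduced precision, such as 8-bit integers, while staying close to the full-precision result.

```python
def quantize(values, scale=127.0):
    """Map floats in [-1, 1] to small integers (a common inference-time trick)."""
    return [round(v * scale) for v in values]

# Hypothetical trained weights for one neuron, frozen at inference time.
weights = [0.5, -0.25, 0.75]
inputs = [0.2, 0.4, 0.6]

# Full-precision forward pass: a simple weighted sum.
full = sum(w * x for w, x in zip(weights, inputs))

# Reduced-precision forward pass: multiply-accumulate in integers,
# then rescale once at the end.
qw, qx = quantize(weights), quantize(inputs)
acc = sum(a * b for a, b in zip(qw, qx))
low = acc / (127.0 * 127.0)

print(round(full, 3), round(low, 3))  # the two results agree closely
```

Because nothing in this loop updates the weights, the hardware running it can trade flexibility for speed and energy efficiency, which is the trade-off analysts suspect the TPU makes.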
Right now, that’s all Google is saying. We still don’t know which chip manufacturer is fabricating the silicon for Google. Hölzle said the company will reveal more about the chip in a forthcoming paper.