
What are leading tech companies' strategies for next-gen AI hardware?

Attaining technical excellence and developing advanced AI systems are now central to the game plans of almost all major IT players. Artificial intelligence and machine learning are seen as a way to reinvent enterprise IT mainstays and to secure a leading position in the market.

IBM, Intel, Google and Microsoft are all gearing up for the challenges ahead. In this article we look at how these four tech giants are working on hardware to complement and accelerate the software side of machine learning and artificial intelligence. All four are familiar, reputable names in technology, and each is investing heavily in ML and AI, with hardware designs intended to greatly accelerate the next generation of applications. Here's how their plans stack up.

Intel
The ongoing plans: The world's largest chip maker recently launched Knights Mill, a new line of CPUs aimed specifically at ML applications. The company has also occasionally mentioned plans to meld its CPUs with re-programmable FPGAs, a smart and efficient yet relatively under-exploited technology for Intel.

The reasons behind: With the PC market in steady decline, Intel has been searching for ways to make up the difference. Servers alone cannot do it, so Intel has widened its quest to include both main processors and co-processors designed to accelerate ML workloads.

Intel is unlikely to offer its own GPU for ML work, however. It has not yet matched other processor makers on that front, but it is betting that its CPU-specific improvements can hold their own against GPUs. Intel would clearly prefer a future powered by its CPUs alone, without any need to pair them with another company's GPUs.

Microsoft
The ongoing plans: After outfitting the Microsoft Azure cloud with specially designed FPGAs to add machine-learning acceleration to its clusters, Microsoft is now talking about letting customers program those devices directly, enabling more powerful machine learning tools in its cloud.

The reasons behind: Microsoft already provides ML/AI tools both inside and outside Azure, and it is worth noting that OpenAI, the nonprofit AI research group, uses Azure as its cloud provider. Now the company is weighing a new approach to giving cloud customers access to machine learning hardware. Because FPGAs are complex to program and not yet as well understood for ML as GPUs, the hardest part is still to come.

Google
The ongoing plans: Google has long invested in machine learning on the software side with frameworks like TensorFlow. The company now provides a hardware complement, the Tensor Processing Unit (TPU), which accelerates specific machine learning functions.

The reason behind: Following in Microsoft's footsteps, Google is developing its cloud into a premier destination for ML applications. Google has made clear that it wants to stand out on ease of use, which is why it is unlikely to offer the kind of low-level access to ML hardware that Microsoft is contemplating. For those who want direct access to machine learning hardware in a familiar context, there are always Google Cloud's new GPU instances. Odds are the two hardware offerings will work in conjunction.
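
As a rough illustration of how TensorFlow code can target whichever accelerator a cloud instance exposes, here is a minimal sketch, assuming a TensorFlow 2.x environment, that pins a toy computation to a GPU when one is present (as on a Google Cloud GPU instance) and falls back to the CPU otherwise. The device names and fallback logic are illustrative assumptions; TPUs are normally reached through higher-level TensorFlow APIs such as a distribution strategy rather than by direct device placement.

import tensorflow as tf

# See which accelerators TensorFlow can detect on this machine.
gpus = tf.config.list_physical_devices('GPU')
device = '/GPU:0' if gpus else '/CPU:0'

with tf.device(device):
    # A toy matrix multiply standing in for a real model's forward pass.
    a = tf.random.normal([1024, 1024])
    b = tf.random.normal([1024, 1024])
    c = tf.matmul(a, b)

print(f"Ran matmul on {device}; result shape: {c.shape}")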

IBM
The ongoing plans: PowerAI is IBM's latest machine learning toolset. It runs on a combination of IBM's Power processors and Nvidia GPUs, wired together with new proprietary hardware designed to tie CPUs and GPUs as closely as possible.

The reason behind: Machine learning is nothing new for the company; Watson is its well-known ML/AI project. But Watson was developed and delivered essentially as a set of black-box services. PowerAI, by contrast, is not a specific processor or GPU but a hardware suite aimed at high-end customers who want the capabilities for themselves and total control over how to use them. It fits with IBM's plans for the Power processor line, which revolve around the big data and cloud applications that machine learning workloads are applied to.