Kaby Lake may be Intel’s newest architecture, but the company is far from done with the previous generation, Skylake. By mid-2017, Intel plans to launch new Xeon server chips based on Skylake, and according to Barry Davis, general manager for the accelerated workload group at Intel, they will boast a big performance increase. The Skylake Xeon chips will go into mainstream servers and could spark a big round of hardware upgrades, Davis said.
Xeon chips are extremely popular, though not as visible as Intel’s PC chips. Companies like Google, Facebook, and Amazon buy thousands of servers loaded with Xeon chips to power their search, social networking, and artificial intelligence workloads. The chips are also used for high-end engineering and virtual reality applications in leading workstations such as HP’s Z840 and Apple’s Mac Pro.
The upcoming chips will succeed the Broadwell-based Xeon E5 v4 chips launched earlier this year. Those chips had up to 22 cores; the Skylake Xeons may have more. The launch is arriving slightly behind schedule. Critics believe Intel has been focused on its entry into the machine learning market, which is forcing the company to rethink how it analyzes data and solves problems. The move toward machine learning is even driving changes in server configurations, with more customers buying servers equipped with graphics processors.
Most of the recent Xeon E5s have shipped to cloud customers, which buy a wide variety of Xeon chips and have driven chip prices up, said Dean McCarron, principal analyst at Mercury Research. Intel’s focus on the Skylake Xeon is largely driven by AI, which demands a different kind of workload than serial cloud tasks, where you get a record, process it, and compile the result. AI workloads are parallel, which is why GPUs, which excel at parallel processing, have become popular.
“Given this new workload development, it’s likely Intel is tuning Xeon to that. That might mean some changes to software tools and tweaks in the instruction set,” McCarron said.
The new chips will boast advanced processing features that will bring big performance gains to AI tasks, Davis said. One such feature is support for AVX-512, which adds floating-point performance and security features to the chip. AVX-512 is being adapted from Intel’s latest supercomputing chip, the 72-core Xeon Phi, code-named Knights Landing.
The next highlight is on-chip support for Intel OmniPath, a proprietary high-speed interconnect that links servers, storage, networking, and other data center hardware. Intel sees this feature as key to new data center designs in which memory and storage are pulled out of servers and placed in discrete, interconnected boxes. That approach lets operators cram more memory, storage, and processing into servers while improving energy efficiency at the same time.