Nvidia has released a set of software routines for accelerating machine-learning algorithms on its parallel graphics processors. Over the weekend, the GPU maker uploaded cuDNN, the CUDA Deep Neural Network library: a collection of primitives for building software that trains neural networks.
The library is optimised for Nvidia’s processors and is expected to save programmers time: developers who use it need not reinvent the wheel when tuning parallelised machine-learning algorithms for GPUs, since the mathematical work is offloaded from the host application’s CPU. Nvidia also pointed to examples of machine learning and neural networks in use at financial companies, web firms and research bodies, for tasks such as fraud detection and gaming.
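To make the offloading point concrete, here is a minimal sketch in plain Python of the kind of dense-layer arithmetic (a matrix multiply followed by an activation function) that libraries like cuDNN accelerate on the GPU. This is an illustration of the workload, not cuDNN itself; a real program would call cuDNN's C API, and all names and values below are made up for the example.

```python
def dense_forward(inputs, weights, biases):
    """One fully connected layer: output = relu(inputs @ weights + biases).

    A neural network's forward pass is dominated by loops like these;
    cuDNN's primitives run the same arithmetic in parallel on the GPU
    instead of serially on the host CPU.
    """
    outputs = []
    for row in inputs:                        # one sample per row
        activations = []
        for j in range(len(biases)):          # one output unit per column
            total = biases[j]
            for i, x in enumerate(row):
                total += x * weights[i][j]
            activations.append(max(0.0, total))  # ReLU activation
        outputs.append(activations)
    return outputs

# Tiny example: 2 samples, 3 input features, 2 output units.
x = [[1.0, 2.0, 3.0],
     [0.5, -1.0, 2.0]]
w = [[0.1, -0.2],
     [0.3, 0.4],
     [-0.5, 0.6]]
b = [0.0, 0.1]
print(dense_forward(x, w, b))
```

Training a deep network repeats this computation (and its gradient) millions of times over large matrices, which is why moving it onto a massively parallel GPU pays off so dramatically.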
Nvidia also highlighted image-processing applications of these systems, such as handwriting and facial recognition. The company’s solutions architect Larry Brown wrote in a blog post, “The success of DNNs has been greatly accelerated by using GPUs, which have become the platform of choice for training large, complex, DNN-based ML systems.” Brown added that Nvidia introduced the library because of the importance of DNNs and the key role GPUs play in training them.