Robin Chacko

Lecture Description:

Recent research in the Artificial Intelligence domain has shown a significant advantage of machine learning over traditional algorithms based on handcrafted features and models. With the availability of credible data and computational resources, researchers are going deeper with neural networks (deep learning) to solve various image, speech, and video recognition tasks. However, the high computation and storage complexity of neural network inference poses great difficulty for its deployment. Besides CPUs and GPUs, FPGAs are becoming a candidate platform for energy-efficient neural network processing. In this lecture, we will cover Xilinx strategies to accelerate these algorithms for real-time inference on their hardware.