Fast approximate matrix multiplication – theory and practice in AI hardware

Abstract: Multiplying large, dense matrices is a key ingredient of deep networks, and state-of-the-art architectures may spend more than 99% of their time and energy on matrix multiplication. Although fast matrix multiplication algorithms running in time o(n^3) are known in the theoretical literature, they are impractical and are not used in today's AI hardware and software ecosystem. In this talk I will survey algorithms and heuristics for faster approximate matrix multiplication, and show the possible impact, as well as the limitations, of such approaches on the future of deep learning. The results I will present cover both academic research and my more recent experience in the cutting-edge AI industry at deci.ai, where I have discovered a huge gap between academic research and practical AI.
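To make "approximate matrix multiplication" concrete, the following is a minimal illustrative sketch, not the specific method of the talk: the classical randomized column/row sampling estimator (in the style of Drineas, Kannan and Mahoney) written in NumPy. The function name sampled_matmul and its parameters are assumptions made for this example.

import numpy as np

def sampled_matmul(A, B, s, seed=None):
    """Approximate A @ B by sampling s outer products (a column of A times the
    matching row of B) with probabilities proportional to the product of their
    norms, rescaled so the estimate is unbiased."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    # Sampling probabilities proportional to ||A[:, k]|| * ||B[k, :]||.
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(n, size=s, replace=True, p=p)
    # Rescale each sampled outer product by 1 / (s * p_k) to keep the estimator unbiased.
    scale = 1.0 / (s * p[idx])
    return (A[:, idx] * scale) @ B[idx, :]

# Quick check: the relative error shrinks as the number of samples s grows.
rng = np.random.default_rng(0)
A = rng.standard_normal((256, 512))
B = rng.standard_normal((512, 128))
exact = A @ B
for s in (32, 128, 512):
    approx = sampled_matmul(A, B, s, seed=1)
    rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
    print(f"s={s:4d}  relative Frobenius error = {rel_err:.3f}")

The sketch trades accuracy for work: instead of all n rank-one terms of the product, only s of them are computed, which is the basic idea behind this family of approximation schemes.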

Short bio: Nir Ailon is a Computer Science professor at the Technion in Haifa, Israel. He completed his PhD in 2006 at Princeton University and then continued as a postdoc at the Institute for Advanced Study in Princeton. He has served as faculty at the Technion since 2011, and has also spent time at Google Research, Yahoo! Research, and other industrial research labs. He is a recipient of an ERC Consolidator Grant. His research spans the theory of algorithms and the mathematical foundations of big data and machine learning. He is currently involved in research on AI acceleration at deci.ai.

Presenter: Assoc. Prof. Nir Ailon, Computer Science, Technion Israel Institute of Technology (Haifa, Israel)

Date: 2022-04-07 11:30 (CEST)

Location: Salon de Actos Politecnica IV, Carretera San Vicente del Raspeig s/n, San Vicente del Raspeig 03690, Alicante ES