Talks

INV-ASKIT: A Parallel Fast Direct Solver for Kernel Matrices


Chenhan Yu
2016-01-19  15:00 - 16:00
Room 430, Astronomy and Mathematics Building



We present a parallel algorithm for computing the approximate factorization of an N-by-N kernel matrix. Once this factorization has been constructed (with N log^2(N) work), we can solve linear systems with this matrix in N log(N) work. Kernel matrices represent pairwise interactions of points in metric spaces. They appear in machine learning, approximation theory, and computational physics. Kernel matrices are typically dense (matrix multiplication scales quadratically with N) and ill-conditioned (solves can require hundreds of Krylov iterations). Thus, fast algorithms for matrix multiplication and factorization are critical for scalability. Recently we introduced ASKIT, a new method for approximating a kernel matrix that resembles N-body methods. Here we introduce INV-ASKIT, a factorization scheme. We describe the new method, derive complexity estimates, and conduct an empirical study of its accuracy and scalability. We report results on real-world datasets including "COVTYPE" (0.5M points in 54 dimensions), "SUSY" (4.5M points in 8 dimensions), and "MNIST" (2M points in 784 dimensions) using shared- and distributed-memory parallelism. In our largest run we approximately factorize a dense matrix of size 32M-by-32M (generated from points in 64 dimensions) on 4,096 Sandy Bridge cores. To our knowledge, these results improve the state of the art by several orders of magnitude.
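The sketch below is not the speaker's code; it is a minimal illustration of why kernel matrices are challenging, assuming a Gaussian kernel with an arbitrarily chosen bandwidth and toy problem sizes. The function name gaussian_kernel_matrix and the parameters are purely illustrative.

```python
import numpy as np

def gaussian_kernel_matrix(points, bandwidth=1.0):
    """Dense kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 h^2))."""
    sq_dists = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

rng = np.random.default_rng(0)
N, d = 1000, 8                      # toy sizes; the talk's largest run uses N = 32M
X = rng.standard_normal((N, d))
K = gaussian_kernel_matrix(X)

# Storage (and matrix-vector products) grow quadratically with N.
print("dense storage: %.1f MB for N = %d" % (K.nbytes / 1e6, N))

# The condition number is typically very large, which is why iterative
# (Krylov) solves can need hundreds of iterations to converge.
print("condition number: %.2e" % np.linalg.cond(K))

# A direct factorization of K here would cost O(N^3); INV-ASKIT's point is an
# approximate factorization built in roughly N log^2(N) work, after which each
# solve costs roughly N log(N).
```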