Matrix functions play a central role in many areas of scientific computing, ranging from differential equations and network analysis to quantum mechanics and data science. Common examples include the matrix exponential, logarithm, square root, and sign function. In many applications, one is interested not in the full matrix function f(A), but only in its action on a given vector, i.e., the product f(A)b. Krylov subspace methods offer an efficient way to perform this task by relying only on matrix–vector products or linear system solves, rather than forming f(A) explicitly. By projecting the problem onto a low-dimensional polynomial or rational Krylov subspace built from A and b, these methods compute accurate approximations whose convergence is closely related to the quality of polynomial and rational approximations of the scalar function f.
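As a concrete illustration of the projection idea, the following is a minimal sketch of the polynomial Krylov (Arnoldi) approximation f(A)b ≈ ||b|| V_m f(H_m) e_1, here for f = exp. The function names are hypothetical, and the sketch omits refinements (restarting, reorthogonalization, stopping criteria) that a practical code would need:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi(A, b, m):
    """Build an orthonormal basis V of the Krylov space K_m(A, b)
    and the corresponding upper Hessenberg matrix H = V^T A V."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]                      # only matrix-vector products with A
        for i in range(j + 1):               # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # "lucky breakdown": subspace is invariant
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

def krylov_expm(A, b, m):
    """Approximate exp(A) b from the m-dimensional Krylov subspace:
    exp(A) b ~ ||b|| * V * exp(H) * e_1, with exp(H) on a small m x m matrix."""
    V, H = arnoldi(A, b, m)
    e1 = np.zeros(H.shape[0])
    e1[0] = 1.0
    return np.linalg.norm(b) * (V @ (expm(H) @ e1))
```

The expensive operation on the large matrix A is the matrix–vector product inside the Arnoldi loop; f is only ever evaluated on the small m-by-m Hessenberg matrix H.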
The goal of this talk is to provide a broad and accessible introduction to Krylov subspace methods for matrix functions.
We begin by recalling the fundamental ideas of polynomial and rational approximation for scalar functions, which form the theoretical foundation of these methods. Building on this intuition, we will explore how Krylov subspaces can be used to approximate f(A)b efficiently, and highlight how the underlying polynomial and rational approximations lead to algorithms with distinct computational and convergence properties. Throughout the talk, we will demonstrate these concepts with a series of illustrative numerical examples.
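To hint at how the rational variant differs computationally, here is a hedged sketch of a shift-and-invert Krylov approximation, again for f = exp. It trades matrix–vector products for linear solves with A − σI (one factorization, reused for every basis vector); the single illustrative pole σ, the helper names, and the Rayleigh–Ritz projection V^T A V are one common set of choices, not necessarily those presented in the talk:

```python
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

def shift_invert_krylov_expm(A, b, m, sigma=1.0):
    """Approximate exp(A) b from the rational Krylov space
    K_m((A - sigma I)^{-1}, b), using the projection f(V^T A V)."""
    n = b.size
    lu = lu_factor(A - sigma * np.eye(n))    # one LU factorization, m linear solves
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = lu_solve(lu, V[:, j - 1])        # linear solve replaces the matvec
        for i in range(j):                   # modified Gram-Schmidt (no breakdown check
            w -= (V[:, i] @ w) * V[:, i]     # in this sketch)
        V[:, j] = w / np.linalg.norm(w)
    Am = V.T @ (A @ V)                       # Rayleigh-Ritz projection of A
    e1 = np.zeros(m)
    e1[0] = 1.0
    return np.linalg.norm(b) * (V @ (expm(Am) @ e1))
```

Each iteration is more expensive than a matrix–vector product, but the convergence behavior is governed by rational rather than polynomial approximation of f, which is the trade-off between the two algorithm families.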