Date of Award
2020
Open Access Dissertation
Doctor of Philosophy in Mathematical Sciences (PhD)
Administrative Home Department
Department of Mathematical Sciences
Matrix approximations are widely used to accelerate many numerical algorithms. Current methods sample row (or column) spaces to reduce their computational footprint and approximate a matrix A with an appropriate embedding of the sampled data. This work introduces a novel family of randomized iterative algorithms that use significantly less data per iteration than current methods by sampling input and output spaces simultaneously. The data footprint of the algorithms can be tuned, independently of the underlying matrix dimension, to the available hardware. Convergence is proven for these algorithms, referred to as sub-sampled, and the resulting error bounds are tested numerically. A heuristic accelerated scheme is developed and compared to current algorithms on a substantial test suite of matrices. The sub-sampled algorithms also provide a lightweight framework for constructing inverse and low-rank matrix approximations: modifying them yields families of methods that iteratively approximate the inverse of a matrix, with accelerated variants comparable to current state-of-the-art methods, while inserting a compression step yields low-rank approximations whose accelerated variants have fixed computational and storage footprints.
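The idea of sampling input and output spaces simultaneously can be illustrated with a generic two-sided sketch. The snippet below is not the dissertation's sub-sampled algorithm; it is a standard generalized-Nyström-style low-rank approximation in which random test matrices probe the column (input) and row (output) spaces at once, and all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_sided_sketch_lowrank(A, k, p=10):
    """Rank-k approximation of A from a two-sided random sketch.

    Samples the column space via A @ X and the row space via Y.T @ A,
    then combines them as A ~ (A X) (Y^T A X)^+ (Y^T A).
    Illustrative generalized-Nystrom sketch, not the thesis algorithm.
    """
    m, n = A.shape
    X = rng.standard_normal((n, k))        # input-space (column) sample
    Y = rng.standard_normal((m, k + p))    # output-space (row) sample, oversampled
    AX = A @ X                             # m x k
    YA = Y.T @ A                           # (k+p) x n
    core = Y.T @ AX                        # (k+p) x k small core matrix
    # Least-squares solve replaces the pseudoinverse of the core.
    W = np.linalg.lstsq(core, YA, rcond=None)[0]
    return AX @ W                          # m x n approximation

# Exactly rank-5 test matrix: the sketch should recover it closely.
U = rng.standard_normal((100, 5))
V = rng.standard_normal((5, 80))
A = U @ V
Ahat = two_sided_sketch_lowrank(A, 5)
rel_err = np.linalg.norm(A - Ahat) / np.linalg.norm(A)
```

Because only the small sketches `AX`, `YA`, and the (k+p) × k core are formed, the per-step data footprint depends on the sketch sizes rather than on the full matrix dimensions, which is the kind of tunability the abstract describes.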
Azzam, Joy, "Sub-Sampled Matrix Approximations", Open Access Dissertation, Michigan Technological University, 2020.