Intro

Mahout has a distributed implementation of QR decomposition for tall, thin matrices.

Algorithm

For the classic QR decomposition of the form \(\mathbf{A}=\mathbf{QR}\), a distributed version is fairly easily achieved if \(\mathbf{A}\) is tall and thin such that \(\mathbf{A}^\top\mathbf{A}\) fits in memory, i.e. \(m\) is large but \(n <\) ~5000. Under such circumstances, only \(\mathbf{A}\) and \(\mathbf{Q}\) are distributed matrices, and \(\mathbf{A}^\top\mathbf{A}\) and \(\mathbf{R}\) are in-core products. We just compute the in-core version of the Cholesky decomposition in the form of \(\mathbf{A}^\top\mathbf{A}=\mathbf{L}\mathbf{L}^\top\). After that we take \(\mathbf{R}=\mathbf{L}^\top\) and \(\mathbf{Q}=\mathbf{A}\left(\mathbf{L}^\top\right)^{-1}=\mathbf{A}\mathbf{R}^{-1}\). The latter is easily achieved by multiplying each vertical block of \(\mathbf{A}\) by \(\mathbf{R}^{-1}\). (There is no actual matrix inversion happening; each block is computed with a triangular solve.)
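Why this yields a valid QR factorization is a two-line check. The derivation below is standard linear algebra, not specific to Mahout, and assumes \(\mathbf{A}\) has full column rank so that \(\mathbf{R}\) is invertible:

    % Cholesky of the Gramian gives the R factor directly:
    \mathbf{A}^\top\mathbf{A} = \mathbf{L}\mathbf{L}^\top = \mathbf{R}^\top\mathbf{R},
    \qquad \mathbf{R} := \mathbf{L}^\top
    % Setting Q = A R^{-1} gives orthonormal columns and reproduces A:
    \mathbf{Q}^\top\mathbf{Q}
      = \mathbf{R}^{-\top}\left(\mathbf{A}^\top\mathbf{A}\right)\mathbf{R}^{-1}
      = \mathbf{R}^{-\top}\mathbf{R}^\top\mathbf{R}\,\mathbf{R}^{-1}
      = \mathbf{I},
    \qquad \mathbf{Q}\mathbf{R} = \mathbf{A}\mathbf{R}^{-1}\mathbf{R} = \mathbf{A}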

Implementation

Mahout's dqrThin(...) is implemented in the Mahout math-scala algebraic optimizer, which translates Mahout's R-like linear algebra operators into a physical plan for both the Spark and H2O distributed engines:

def dqrThin[K: ClassTag](drmA: DrmLike[K], checkRankDeficiency: Boolean = true): (DrmLike[K], Matrix) = {
    if (drmA.ncol > 5000)
        log.warn("A is too fat. A'A must fit in memory and easily broadcasted.")
    implicit val ctx = drmA.context
    // Compute A'A as a distributed product and bring it in-core (n x n, small by assumption).
    val AtA = (drmA.t %*% drmA).checkpoint()
    val inCoreAtA = AtA.collect
    // In-core Cholesky: A'A = LL', hence R = L'.
    val ch = chol(inCoreAtA)
    val inCoreR = ch.getL.cloned.t
    if (checkRankDeficiency && !ch.isPositiveDefinite)
        throw new IllegalArgumentException("R is rank-deficient.")
    // The Cholesky decomposition itself is not serializable to the backend,
    // so broadcast A'A and re-decompose it per block.
    val bcastAtA = drmBroadcast(inCoreAtA)
    // Compute Q = A * inv(L') blockwise via a triangular solve -- no explicit inverse.
    val Q = drmA.mapBlock() {
        case (keys, block) => keys -> chol(bcastAtA).solveRight(block)
    }
    Q -> inCoreR
}
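The same Cholesky construction can be sanity-checked with Mahout's in-core scalabindings before running it distributed. A minimal sketch, in which the small 3x2 matrix is purely illustrative:

import org.apache.mahout.math.scalabindings._
import RLikeOps._

// A small tall-and-thin matrix (m = 3, n = 2), for illustration only.
val a = dense((1, 2), (3, 4), (5, 6))

// In-core analogue of the distributed steps: A'A, Cholesky, R = L', Q = A * inv(L').
val ata = a.t %*% a
val ch = chol(ata)
val r = ch.getL.cloned.t
val q = ch.solveRight(a)

// Q'Q should be approximately the 2x2 identity, and Q %*% R should reproduce A.
println(q.t %*% q)
println(q %*% r)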

Usage

The Scala dqrThin(...) method can easily be called in any Spark or H2O application built with the math-scala library and the corresponding Spark or H2O engine module, as follows:

import org.apache.mahout.math._
import decompositions._
import drm._

val (drmQ, inCoreR) = dqrThin(drmA)
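
For a runnable end-to-end example on the Spark engine, a distributed context and a distributed matrix are needed first. A sketch, assuming the Spark bindings are on the classpath; the master URL, app name, and test matrix are illustrative:

import org.apache.mahout.math.scalabindings._
import org.apache.mahout.math.drm._
import org.apache.mahout.math.decompositions._
import org.apache.mahout.sparkbindings._

// A local Spark-backed distributed context, for illustration.
implicit val ctx = mahoutSparkContext(masterUrl = "local[2]", appName = "dqrThin-example")

// Distribute a small in-core matrix; in practice drmA would come from real data.
val inCoreA = dense((1, 2), (3, 4), (5, 6), (7, 8))
val drmA = drmParallelize(inCoreA, numPartitions = 2)

val (drmQ, inCoreR) = dqrThin(drmA)

// Materialize Q to inspect it; Q'Q should be approximately the identity.
val inCoreQ = drmQ.collect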
