linalg: Matrix Computations in Apache Spark

Reza Bosagh Zadeh, Xiangrui Meng, Burak Yavuz, Aaron Staple, Li Pu, Shivaram Venkataraman, Evan Sparks, Alexander Ulanov, Matei Zaharia
Stanford and Databricks, 475 Via Ortega, Stanford, CA 94305
arXiv:1509.02256 [cs.DC], (8 Sep 2015)

@article{zadeh2015linalg,
   title={linalg: Matrix Computations in Apache Spark},
   author={Zadeh, Reza Bosagh and Meng, Xiangrui and Yavuz, Burak and Staple, Aaron and Pu, Li and Venkataraman, Shivaram and Sparks, Evan and Ulanov, Alexander and Zaharia, Matei},
   year={2015},
   month={sep},
   eprint={1509.02256},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

We describe the matrix computations available in the cluster programming framework Apache Spark. Out of the box, Spark ships with the mllib.linalg library, which provides abstractions and implementations for distributed matrices. Using these abstractions, we highlight the computations that were most challenging to distribute. When translating single-node algorithms to run on a distributed cluster, we observe that a simple idea is often enough: separate the matrix operations from the vector operations, ship the matrix operations to the cluster, and keep the vector operations local to the driver. In the case of the Singular Value Decomposition, by taking this idea to an extreme, we are able to exploit the computational power of a cluster while running code written decades ago for a single core. We conclude with a comprehensive set of benchmarks for hardware-accelerated matrix computations from the JVM, which is interesting in its own right, as many cluster programming frameworks use the JVM.
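The split the abstract describes — matrix operations shipped to the cluster, vector operations kept on the driver — can be illustrated with a small sketch. This is a hypothetical single-process illustration, not Spark's actual code: each row of A plays the role of a partition on a worker, contributing its share of the Gram-matrix product A^T A v (the core operation an iterative SVD solver repeatedly requests), while the dense vector v stays "local to the driver".

```python
from functools import reduce

def multiply_gramian(rows, v):
    """Compute A^T A v from the rows of A, one row's contribution at a time.

    Each row a_i contributes a_i * (a_i . v); in a distributed setting these
    per-row terms would be computed on workers and summed with a reduce,
    while v itself never leaves the driver.
    """
    def contribution(a):
        dot = sum(x * y for x, y in zip(a, v))   # a_i . v  (computed "on a worker")
        return [x * dot for x in a]              # a_i * (a_i . v)
    return reduce(lambda x, y: [p + q for p, q in zip(x, y)],
                  (contribution(a) for a in rows))

rows = [[1.0, 0.0], [0.0, 2.0]]   # A = diag(1, 2), so A^T A = diag(1, 4)
print(multiply_gramian(rows, [1.0, 1.0]))   # -> [1.0, 4.0]
```

Because only the matrix-times-vector product is distributed, a single-core eigensolver (such as the decades-old ARPACK code the paper mentions) can drive the computation unchanged: it simply calls this multiply and receives a small vector back.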

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors