Faster and Cheaper: Parallelizing Large-Scale Matrix Factorization on GPUs

Wei Tan, Liangliang Cao, Liana Fong
IBM T. J. Watson Research Center, Yorktown Heights, NY, USA
arXiv:1603.03820 [cs.DC], (11 Mar 2016)
@article{tan2016faster,
   title={Faster and Cheaper: Parallelizing Large-Scale Matrix Factorization on GPUs},
   author={Tan, Wei and Cao, Liangliang and Fong, Liana},
   year={2016},
   month={mar},
   eprint={1603.03820},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

Matrix factorization (MF) is employed by many popular algorithms, e.g., collaborative filtering. The emerging GPU technology, with massively many cores and high intra-chip memory bandwidth but limited memory capacity, presents an opportunity to accelerate MF much further when the GPU's architectural characteristics are appropriately exploited. This paper presents cuMF, a CUDA-based matrix factorization library that implements a memory-optimized alternating least squares (ALS) method to solve very large-scale MF. CuMF uses a variety of techniques to maximize performance on single or multiple GPUs. These techniques include smart access of sparse data leveraging the GPU memory hierarchy, using data parallelism in conjunction with model parallelism, minimizing the communication overhead between computing units, and utilizing a novel topology-aware parallel reduction scheme. With a single machine with four Nvidia GPU cards, cuMF can be 6-10 times as fast, and 33-100 times as cost-efficient, compared with state-of-the-art distributed CPU solutions. Moreover, cuMF can solve the largest matrix factorization problem yet reported in the literature, while maintaining good performance.
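To make the ALS idea concrete, here is a minimal NumPy sketch of alternating least squares for MF. It is an illustrative CPU version only, not cuMF's CUDA implementation; the function name `als` and all parameters are our own. The key property ALS exploits, and which makes it GPU-friendly, is that with one factor fixed, each row of the other factor is the solution of an independent k×k regularized least-squares system, so all rows can be solved in parallel.

```python
import numpy as np

def als(R, mask, k=8, reg=0.1, iters=10, seed=0):
    """Alternating least squares: factor R ~ X @ Y.T using only
    observed entries (mask == 1), with L2 regularization `reg`."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    X = rng.standard_normal((m, k)) * 0.1
    Y = rng.standard_normal((n, k)) * 0.1
    I = reg * np.eye(k)
    for _ in range(iters):
        # Fix Y; each row X[u] solves an independent k x k system,
        # so this loop is embarrassingly parallel (the GPU-friendly part).
        for u in range(m):
            obs = mask[u] == 1
            A = Y[obs].T @ Y[obs] + I
            b = Y[obs].T @ R[u, obs]
            X[u] = np.linalg.solve(A, b)
        # Fix X; symmetrically update each row of Y.
        for v in range(n):
            obs = mask[:, v] == 1
            A = X[obs].T @ X[obs] + I
            b = X[obs].T @ R[obs, v]
            Y[v] = np.linalg.solve(A, b)
    return X, Y
```

A GPU library in this style would batch the per-row k×k solves across thousands of threads and stage the sparse ratings through shared memory, which is where the paper's memory-hierarchy optimizations come in.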

* * *


HGPU group © 2010-2016 hgpu.org

All rights belong to the respective authors
