
A Framework for Dense Triangular Matrix Kernels on Various Manycore Architectures

Ali Charara, David Keyes, Hatem Ltaief
Extreme Computing Research Center, King Abdullah University of Science and Technology, Thuwal, Jeddah 23955, Saudi Arabia
King Abdullah University of Science and Technology, 2016

@article{charara2016framework,
   title={A Framework for Dense Triangular Matrix Kernels on Various Manycore Architectures},
   author={Charara, Ali and Keyes, David E. and Ltaief, Hatem},
   journal={Submitted to Concurrency and Computation: Practice and Experience},
   year={2016}
}

We present a new high-performance framework for dense triangular BLAS kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This work extends the authors' previous single-GPU implementation (Charara et al., Euro-Par, 2016). In this paper, the performance of the triangular BLAS kernels on a single GPU is further enhanced by customized CUDA kernels for TRMM and TRSM, called at the bottom of the recursion tree. In addition, we propose a multi-GPU implementation of TRMM and TRSM that scales almost linearly with the number of GPUs. Finally, because the recursive algorithmic formulation of these triangular BLAS kernels is oblivious to the targeted hardware architecture, we port the recursive kernels to homogeneous x86 architectures by relying on vendor-optimized BLAS implementations. Results reported on various hardware architectures highlight a significant performance improvement over state-of-the-art implementations. These new kernels are freely available in the KAUST BLAS (KBLAS) open-source library.
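To illustrate the recursive formulation the abstract refers to, here is a minimal sketch (not the KBLAS implementation) of a recursive lower-triangular solve: the triangular matrix is split into quadrants, most of the work is recast as large matrix-matrix products (GEMM), and a plain triangular solve is only performed at the bottom of the recursion tree. The function name `rec_trsm` and the base-case size `nb` are hypothetical choices for this sketch; NumPy stands in for a vendor BLAS.

```python
import numpy as np

def rec_trsm(L, B, nb=2):
    """Solve L @ X = B for lower-triangular L, recursively.

    Illustrative sketch only: split L into quadrants
        [[L11,   0],
         [L21, L22]]
    so that most flops land in the GEMM update B2 -= L21 @ X1.
    `nb` is a hypothetical base-case size; in the paper's framework the
    base case is where a custom kernel or vendor TRSM would be called.
    """
    n = B.shape[0]
    if n <= nb:
        # base case: small direct triangular solve
        return np.linalg.solve(L, B)
    m = n // 2
    L11, L21, L22 = L[:m, :m], L[m:, :m], L[m:, m:]
    X1 = rec_trsm(L11, B[:m], nb)   # solve top block
    B2 = B[m:] - L21 @ X1           # GEMM update (bulk of the work)
    X2 = rec_trsm(L22, B2, nb)      # solve bottom block
    return np.vstack([X1, X2])
```

Because the recursion only assumes the availability of GEMM and a small base-case solve, the same structure can sit on top of CUDA kernels on GPUs or vendor BLAS on x86, which is what makes the formulation hardware-oblivious.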
