
To Co-Run, or Not To Co-Run: A Performance Study on Integrated Architectures

Feng Zhang, Jidong Zhai, Wenguang Chen, Bingsheng He, Shuhao Zhang
Department of Computer Science and Technology, Tsinghua University, Beijing, China
IEEE 23rd International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), 2015
@inproceedings{zhang2015co,
  title={To Co-Run, or Not To Co-Run: A Performance Study on Integrated Architectures},
  author={Zhang, Feng and Zhai, Jidong and Chen, Wenguang and He, Bingsheng and Zhang, Shuhao},
  booktitle={2015 IEEE 23rd International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS)},
  pages={89--92},
  year={2015},
  organization={IEEE}
}

Architecture designers increasingly integrate the CPU and GPU on the same chip to deliver energy-efficient designs. To effectively leverage the power of both devices on integrated architectures, researchers have recently put substantial effort into co-running a single application on both the CPU and the GPU of such architectures. However, few studies have analyzed a wide range of parallel computation patterns on these architectures. In this paper, we port all programs in the Rodinia benchmark suite and co-run them on an integrated architecture. We find that co-running is not always better than running an application on the CPU only or the GPU only. Among the 20 programs, 3 benefit from co-running, 12 achieve their best performance on the GPU only, and 2 on the CPU only. The remaining 3 programs show no performance preference among the devices. We also characterize the workloads and summarize patterns that offer system insights into co-running on integrated architectures.
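As background on the co-running technique the paper evaluates: a common approach is to statically partition a kernel's iteration space between the CPU and the GPU by some ratio, then execute both shares concurrently and combine the results. The sketch below is a hypothetical illustration, not the paper's code; the `alpha` ratio, the function names, and the use of a second thread as a stand-in for the GPU share are all assumptions made for demonstration.

```python
from concurrent.futures import ThreadPoolExecutor

def partition(n, alpha):
    """Statically split n iterations: the first alpha*n go to the CPU,
    the rest to the GPU (hypothetical split ratio)."""
    split = int(n * alpha)
    return (0, split), (split, n)

def run_share(data, lo, hi):
    # Placeholder for the real per-device kernel
    # (a CPU loop or a GPU kernel launch in an actual co-run).
    return sum(data[lo:hi])

def co_run(data, alpha=0.3):
    (c0, c1), (g0, g1) = partition(len(data), alpha)
    with ThreadPoolExecutor(max_workers=2) as ex:
        cpu = ex.submit(run_share, data, c0, c1)  # CPU share
        gpu = ex.submit(run_share, data, g0, g1)  # stand-in for the GPU share
        return cpu.result() + gpu.result()

print(co_run(list(range(100))))  # sums 0..99 -> 4950
```

Whether such a split pays off depends on the workload pattern, which is exactly the question the paper studies: for many Rodinia programs, one device alone outperforms any split.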
