Dopia: Online Parallelism Management for Integrated CPU/GPU Architectures

Younghyun Cho, Jiyeon Park, Florian Negele, Changyeon Jo, Thomas R. Gross, Bernhard Egger
University of California, Berkeley
27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP ’22), 2022

@inproceedings{cho2022dopia,
   title={Dopia: Online Parallelism Management for Integrated CPU/GPU Architectures},
   author={Cho, Younghyun and Park, Jiyeon and Negele, Florian and Jo, Changyeon and Gross, Thomas R. and Egger, Bernhard},
   booktitle={Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '22)},
   year={2022}
}

Recent desktop and mobile processors often integrate the CPU and GPU onto the same die. The limited memory bandwidth of these integrated architectures can negatively affect the performance of data-parallel workloads when all computational resources are active. The combination of active CPU and GPU cores that achieves the maximum performance depends on a workload's characteristics, making manual tuning a time-consuming task. Dopia is a fully automated framework that improves the performance of data-parallel workloads by adjusting the Degree Of Parallelism on Integrated Architectures. Dopia transparently analyzes and rewrites OpenCL kernels before executing them with the number of CPU and GPU cores expected to yield the best performance. Evaluated on AMD and Intel integrated processors, Dopia achieves 84% of the maximum performance attainable by an oracle.
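The sketch below is not Dopia's implementation; it only illustrates, under simplifying assumptions, the two standard OpenCL host-side knobs such a framework can turn: restricting the number of active CPU cores via device fission (clCreateSubDevices) and splitting a one-dimensional NDRange between the CPU and the integrated GPU (clEnqueueNDRangeKernel with a global work offset). The helper names limit_cpu_cores and run_split, the core count, and the CPU/GPU split ratio are hypothetical inputs; Dopia derives the actual configuration automatically and online from its kernel analysis.

/* Minimal sketch (not Dopia's actual code): limit the active CPU cores with
 * OpenCL device fission and split a 1-D NDRange between CPU and GPU.       */
#include <CL/cl.h>
#include <stddef.h>

/* Carve a sub-device with num_cores compute units out of the CPU device
 * (hypothetical helper; the core count would come from the framework).     */
static cl_device_id limit_cpu_cores(cl_device_id cpu, cl_uint num_cores)
{
    const cl_device_partition_property props[] = {
        CL_DEVICE_PARTITION_BY_COUNTS,
        (cl_device_partition_property)num_cores,
        CL_DEVICE_PARTITION_BY_COUNTS_LIST_END, 0
    };
    cl_device_id sub = NULL;
    cl_uint n = 0;
    if (clCreateSubDevices(cpu, props, 1, &sub, &n) != CL_SUCCESS || n == 0)
        return cpu;                    /* fission unsupported: use full CPU */
    return sub;
}

/* Enqueue the same kernel on both devices: gpu_share (0..1) of the global
 * range goes to the GPU queue, the rest to the CPU queue. Work items index
 * their data through get_global_id(0), so the offset partitions the data.  */
static cl_int run_split(cl_kernel kernel, cl_command_queue cpu_q,
                        cl_command_queue gpu_q, size_t global_size,
                        double gpu_share)
{
    size_t gpu_items  = (size_t)((double)global_size * gpu_share);
    size_t cpu_items  = global_size - gpu_items;
    size_t gpu_offset = cpu_items;         /* GPU processes the upper part  */
    cl_int err = CL_SUCCESS;

    if (cpu_items > 0)
        err = clEnqueueNDRangeKernel(cpu_q, kernel, 1, NULL,
                                     &cpu_items, NULL, 0, NULL, NULL);
    if (err == CL_SUCCESS && gpu_items > 0)
        err = clEnqueueNDRangeKernel(gpu_q, kernel, 1, &gpu_offset,
                                     &gpu_items, NULL, 0, NULL, NULL);
    if (err == CL_SUCCESS) {
        clFinish(cpu_q);                   /* wait for both partitions      */
        clFinish(gpu_q);
    }
    return err;
}

The sketch assumes both command queues belong to the same OpenCL context and that the program is built for both the CPU and GPU devices; on integrated architectures with a shared physical memory, the two partitions can then operate on the same buffers.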
