
A Study of Performance Programming of CPU, GPU accelerated Computers and SIMD Architecture

Xinyao Yi
Department of Computer Science, University of North Carolina at Charlotte, Charlotte, North Carolina, USA
arXiv:2409.10661 [cs.DC] (16 Sep 2024)

@misc{yi2024studyperformanceprogrammingcpu,
   title={A Study of Performance Programming of CPU, GPU accelerated Computers and SIMD Architecture},
   author={Xinyao Yi},
   year={2024},
   eprint={2409.10661},
   archivePrefix={arXiv},
   primaryClass={cs.DC},
   url={https://arxiv.org/abs/2409.10661}
}


Parallel computing is a standard approach to achieving high-performance computing (HPC). Three commonly used methods for implementing it are: 1) applying multithreading on single-core or multi-core CPUs; 2) incorporating powerful parallel computing devices such as GPUs, FPGAs, and other accelerators; and 3) exploiting special parallel architectures such as Single Instruction, Multiple Data (SIMD). Researchers have applied these parallel technologies in many ways, including developing applications, conducting performance analyses, identifying performance bottlenecks, and proposing feasible solutions. However, balancing and optimizing parallel programs remains challenging because of the complexity of parallel algorithms and hardware architectures, and issues such as data transfer between hosts and devices in heterogeneous systems remain bottlenecks that limit performance. This work summarizes a broad body of information on various parallel programming techniques, aiming to present the current state and future development trends of parallel programming, its performance issues, and their solutions, and to give readers an overall picture and the background knowledge needed to support subsequent research.
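The abstract groups parallel programming into three approaches (CPU multithreading, accelerator offload, and SIMD) and names host-device data transfer as a key bottleneck. As a rough illustration only, and not code taken from the paper, the C/OpenMP sketch below shows how each approach commonly appears in source code; the array names, sizes, and loop bodies are invented for this example, and the offload region requires a compiler built with OpenMP target (device) support.

/* Illustrative sketch (not from the paper). Build with an OpenMP-capable
 * compiler, e.g. gcc -fopenmp example.c */
#include <stdio.h>

#define N 1000000

static float a[N], b[N], c[N];

int main(void) {
    /* 1) CPU multithreading: distribute loop iterations across threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = (float)i * 0.5f;

    /* 3) SIMD: ask the compiler to vectorize the loop so each instruction
     * operates on multiple data elements at once. */
    #pragma omp simd
    for (int i = 0; i < N; i++)
        b[i] = a[i] + 1.0f;

    /* 2) Accelerator offload (needs OpenMP target support): the map()
     * clauses make the host-device data transfer explicit -- exactly the
     * transfer cost the abstract identifies as a performance bottleneck. */
    #pragma omp target teams distribute parallel for map(to: a, b) map(from: c)
    for (int i = 0; i < N; i++)
        c[i] = a[i] * b[i];

    printf("c[42] = %f\n", c[42]);
    return 0;
}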

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
