
Processing OLTP Workloads on Hybrid CPU/GPU Systems

Mudit Bhatnagar
University of Magdeburg
University of Magdeburg, 2016

@article{bhatnagar2016processing,
   title={Processing OLTP Workloads on Hybrid CPU/GPU Systems},
   author={Bhatnagar, Mudit},
   year={2016}
}

In recent times, a great deal of research has been devoted to the use of co-processors such as GPUs and FPGAs in database management systems (DBMS). The reason for this trend is that modern processors have reached a performance threshold, driven largely by two factors: the memory wall and the power wall. This has forced hardware vendors to develop specialized processors that focus on speeding up computation in particular domains. Hence, we are moving towards an age of heterogeneous computing, in which an efficient co-processor is used alongside the traditional CPU to meet performance requirements. Several recent studies have shown that database systems can effectively use specialized processors, especially GPUs, to speed up query processing. This use of GPUs to accelerate traditional computing systems is called GPGPU (general-purpose GPU) computing. To this end, GPUs have already been used as effective co-processors in OLAP scenarios with increased performance. Support for OLTP scenarios is still under research, with systems such as GPUTx executing bulk OLTP transactions as a single task on a GPU-only system. In this work, we study the processing capabilities of heterogeneous (CPU/GPU) systems in an OLTP scenario. For this, we implement the TPC-C benchmark in a bulk query execution model. Our work also compares row-store and column-store storage models on the TPC-C database to determine the most efficient storage mechanism for query execution in a CPU/GPU-based heterogeneous system.
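
As a rough illustration of the storage-model comparison mentioned above, the sketch below (plain C++, with hypothetical table and field names, not taken from the thesis) contrasts a row store, where each tuple is stored contiguously, with a column store, where each attribute is stored contiguously, for a simplified TPC-C-style order-line table. The access-pattern difference is what makes the column layout attractive for scan-heavy work and coalesced memory access on a GPU, while the row layout favours whole-row OLTP reads and writes.

// Minimal sketch (hypothetical names) of the two storage models compared
// in the thesis, for a simplified TPC-C-style ORDER-LINE table.
#include <cstddef>
#include <cstdint>
#include <vector>

// Row store: each tuple stored contiguously (array of structs);
// suits point lookups and updates that touch whole rows (typical OLTP).
struct OrderLineRow {
    int32_t ol_o_id;      // order id
    int32_t ol_w_id;      // warehouse id
    int32_t ol_quantity;  // quantity ordered
    double  ol_amount;    // line amount
};
using RowStore = std::vector<OrderLineRow>;

// Column store: each attribute stored contiguously (struct of arrays);
// suits scans/aggregations over few columns and GPU-friendly memory access.
struct OrderLineColumns {
    std::vector<int32_t> ol_o_id;
    std::vector<int32_t> ol_w_id;
    std::vector<int32_t> ol_quantity;
    std::vector<double>  ol_amount;
};

// Example of the access-pattern difference: summing amounts for one warehouse.
double sumAmountsRowStore(const RowStore& t, int32_t w_id) {
    double sum = 0.0;
    for (const auto& r : t)                 // reads every attribute of every row
        if (r.ol_w_id == w_id) sum += r.ol_amount;
    return sum;
}

double sumAmountsColumnStore(const OrderLineColumns& t, int32_t w_id) {
    double sum = 0.0;
    for (std::size_t i = 0; i < t.ol_w_id.size(); ++i)  // reads only two columns
        if (t.ol_w_id[i] == w_id) sum += t.ol_amount[i];
    return sum;
}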