
Hybrid CPU-GPU Distributed Framework for Large Scale Mobile Networks Simulation

Ben Romdhanne Bilel, Nikaein Navid, Mohamed Said Mosli Bouksiaa, Christian Bonnet
EURECOM, Department of Mobile Communications, Campus Sophia Tech, 450 route des Chappes, B.P. 193, 06410 Biot, France
Research Report RR-12-268, 2012
@techreport{bonnet2012hybrid,
   title={Hybrid CPU-GPU Distributed Framework for Large Scale Mobile Networks Simulation},
   author={Ben Romdhanne, Bilel and Nikaein, Navid and Mosli Bouksiaa, Mohamed Said and Bonnet, Christian},
   institution={EURECOM},
   number={RR-12-268},
   year={2012}
}



Most existing packet-level simulation tools are designed to perform experiments modeling small to medium scale networks. The main reason for this limitation is the amount of computation power and memory available in a quasi mono-process simulation environment. To enable efficient packet-level simulation of large-scale scenarios, we introduce a new CPU-GPU co-simulation framework in which synchronization and experiment design are performed on the CPU while the nodes' processes are executed in parallel on the GPU according to the master/worker model [13]. The framework is developed using the Compute Unified Device Architecture (CUDA) and denoted Cunetsim [18], the CUDA network simulator. To study the performance gain when the GPU is used, we also introduce a CPU-legacy version of Cunetsim optimized for multi-core architectures. In this work, we present the Cunetsim architecture, design concept, and features. We evaluate the performance of both versions of Cunetsim against Sinalgo and NS-3 using benchmark scenarios [20]. The results show that Cunetsim's execution time remains stable and that it achieves significantly lower computation time than the CPU-based simulators for both static and mobile networks, with no degradation in the accuracy of the results. We also study the impact of the hardware configuration on the performance gain and on simulation correctness. Cunetsim serves as a proof of concept, demonstrating the feasibility of fully GPU-based simulation, rather than GPU offloading or partial acceleration, through an adequate architecture.
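The master/worker split described in the abstract can be illustrated with a minimal CUDA sketch. This is not the Cunetsim source code: the node state layout (NodeState), the per-round step rule, and all identifiers below are illustrative assumptions. The host (master) drives synchronized simulation rounds on the CPU, while one GPU thread per simulated node (worker) advances that node's process for the current round.

// Minimal sketch of a master/worker CPU-GPU co-simulation loop in plain CUDA C.
// Hypothetical example: NodeState and node_step are placeholders, not Cunetsim APIs.
#include <cstdio>
#include <cuda_runtime.h>

struct NodeState {
    float x, y;      // node position (arbitrary units)
    int   rx_count;  // toy counter of per-round events
};

// Worker: each GPU thread advances exactly one node by one simulation round.
__global__ void node_step(NodeState *nodes, int num_nodes)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= num_nodes) return;

    // Toy mobility: drift along x; a real simulator would run the scenario's
    // mobility, channel, and protocol modules here.
    nodes[i].x += 0.1f;
    nodes[i].rx_count += 1;
}

int main()
{
    const int num_nodes = 1 << 16;   // 65,536 simulated nodes
    const int rounds    = 100;       // number of synchronized simulation rounds

    NodeState *d_nodes = nullptr;
    cudaMalloc(&d_nodes, num_nodes * sizeof(NodeState));
    cudaMemset(d_nodes, 0, num_nodes * sizeof(NodeState));

    const int block = 256;
    const int grid  = (num_nodes + block - 1) / block;

    // Master loop on the CPU: launch one round for all nodes, then wait at a
    // round barrier so every node observes a consistent global step.
    for (int r = 0; r < rounds; ++r) {
        node_step<<<grid, block>>>(d_nodes, num_nodes);
        cudaDeviceSynchronize();     // synchronization handled on the CPU side
    }

    // Copy one node back just to show the rounds actually ran.
    NodeState sample;
    cudaMemcpy(&sample, d_nodes, sizeof(NodeState), cudaMemcpyDeviceToHost);
    printf("node 0 after %d rounds: x=%.1f, rx_count=%d\n",
           rounds, sample.x, sample.rx_count);

    cudaFree(d_nodes);
    return 0;
}

Keeping the round barrier on the host mirrors the split described above: experiment control and synchronization stay on the CPU, while all per-node computation runs in parallel across GPU threads.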


