
A massively parallel framework using P systems and GPUs

José M. Cecilia, Ginés D. Guerrero, José M. García, Miguel A. Martínez-del-Amor, Ignacio Pérez-Hurtado, Mario J. Pérez-Jiménez
Grupo de Arquitectura y Computación Paralela, Dpto. de Ingeniería y Tecnología de Computadores, Universidad de Murcia, Campus de Espinardo, 30100 Murcia, Spain
Symposium on Application Accelerators in High Performance Computing, 2009 (SAAHPC’09)

@inproceedings{cecilia2009massively,
   title={A massively parallel framework using P systems and GPUs},
   author={Cecilia, J.M. and Guerrero, G.D. and Garc{\'i}a, J.M. and Mart{\'i}nez-del-Amor, M.A. and P{\'e}rez-Hurtado, I. and P{\'e}rez-Jim{\'e}nez, M.J.},
   booktitle={Application Accelerators in High Performance Computing, 2009 Symposium (SAAHPC'09)},
   year={2009}
}


Since the CUDA programming model opened GPUs (Graphics Processing Units) to general-purpose computation, developers have been able to exploit their power across many computational domains. Among these domains, P systems, or membrane systems, provide a high-level computational modelling framework that, in theory, makes it possible to obtain polynomial-time solutions to NP-complete problems by trading space for time, and also to model biological phenomena in the area of computational systems biology. P systems are massively parallel distributed devices, and their computation exhibits two levels of parallelism: membranes, which can be expressed as blocks in the CUDA programming model, and objects, which can be expressed as threads. In this paper, we present our initial ideas for developing a simulator for the class of recognizer P systems with active membranes that uses the CUDA programming model to exploit the massively parallel nature of those systems to the fullest. Experimental results of a preliminary version of our simulator on a Tesla C1060 GPU show a 60x speed-up over the sequential code.
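The membrane-to-block and object-to-thread mapping described in the abstract can be illustrated with a minimal CUDA sketch. This is not the authors' simulator: the kernel name, the Rule structure, the flat multiset layout, and the toy single-step rule semantics are all illustrative assumptions.

// Hypothetical sketch of the mapping: one thread block per membrane,
// one thread per object kind. Names and rule semantics are assumptions,
// not the paper's actual simulator code.
#include <cstdio>
#include <cuda_runtime.h>

#define MAX_OBJECTS 256   // assumed upper bound on object kinds per membrane

// Each membrane holds a multiset: multiplicity[i] = copies of object i.
// A toy evolution rule consumes one object kind and produces another.
struct Rule {
    int lhs;    // object kind consumed
    int rhs;    // object kind produced
    int times;  // maximum applications per step (toy semantics)
};

__global__ void evolutionStep(int *multisets, const Rule *rules, int numRules)
{
    // blockIdx.x selects the membrane, threadIdx.x selects the object kind.
    int *membrane = multisets + blockIdx.x * MAX_OBJECTS;
    int obj = threadIdx.x;

    // Each thread scans the rules that consume its object kind and applies
    // them; atomics resolve conflicts when several rules touch the same object.
    for (int r = 0; r < numRules; ++r) {
        if (rules[r].lhs == obj) {
            int available = membrane[obj];
            int applied = min(available, rules[r].times);
            if (applied > 0) {
                atomicSub(&membrane[obj], applied);
                atomicAdd(&membrane[rules[r].rhs], applied);
            }
        }
    }
}

int main()
{
    const int numMembranes = 4, numRules = 1;
    int h_multisets[numMembranes * MAX_OBJECTS] = {0};
    for (int m = 0; m < numMembranes; ++m) h_multisets[m * MAX_OBJECTS + 0] = 10;
    Rule h_rules[numRules] = {{0, 1, 10}};   // toy rule: a -> b

    int *d_multisets; Rule *d_rules;
    cudaMalloc((void**)&d_multisets, sizeof(h_multisets));
    cudaMalloc((void**)&d_rules, sizeof(h_rules));
    cudaMemcpy(d_multisets, h_multisets, sizeof(h_multisets), cudaMemcpyHostToDevice);
    cudaMemcpy(d_rules, h_rules, sizeof(h_rules), cudaMemcpyHostToDevice);

    // Launch: one block per membrane, one thread per object kind.
    evolutionStep<<<numMembranes, MAX_OBJECTS>>>(d_multisets, d_rules, numRules);
    cudaMemcpy(h_multisets, d_multisets, sizeof(h_multisets), cudaMemcpyDeviceToHost);

    printf("membrane 0: a=%d b=%d\n", h_multisets[0], h_multisets[1]);
    cudaFree(d_multisets); cudaFree(d_rules);
    return 0;
}

In this layout, membranes evolve independently across blocks while the threads of a block cooperate on a single membrane's multiset, mirroring the two levels of parallelism the paper identifies.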