
An implementation of level set based topology optimization using GPU

D. Herrero, J. Martínez, P. Martí
Department of Structures and Construction, Technical University of Cartagena, Campus Muralla del Mar, 30202 Cartagena, Spain
10th World Congress on Structural and Multidisciplinary Optimization, 2013
@inproceedings{herrero2013implementation,
   title={An implementation of level set based topology optimization using GPU},
   author={Herrero, D. and Mart{\'i}nez, J. and Mart{\'i}, P.},
   booktitle={10th World Congress on Structural and Multidisciplinary Optimization},
   year={2013}
}

This work presents the implementation of a topology optimization approach based on the level set method on massively parallel computer architectures, in particular on a Graphics Processing Unit (GPU). Such architectures have become increasingly popular in recent years for demanding scientific computation; they are composed of dozens, hundreds, or even thousands of cores specifically designed for parallel computing. The speedup strategy consists of using the graphics unit to exploit the data parallelism of the expensive, parallelizable parts of the method, while the non-parallelizable parts are computed on standard processing units (CPUs). The paper analyzes the computational complexity of the different steps of the method. The parallelization of both the finite element method and the specific operations of the optimization approach is also analyzed. The implementation is benchmarked with several test cases, and the massively parallel results are compared with the sequential version of the method. The results show the advantages and disadvantages of implementing this method on a GPU.
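A representative data-parallel step in this kind of approach is the evolution of the level set function on a regular grid. The CUDA sketch below is only an illustration of the one-thread-per-node mapping, not the authors' implementation: it assumes a 2D square grid, a precomputed design velocity field, and a first-order upwind discretization of the Hamilton-Jacobi equation; the kernel name levelSetUpdate and all parameters are hypothetical.

// Minimal sketch (assumed, not from the paper): one explicit time step of
//   d(phi)/dt + V |grad(phi)| = 0
// on a regular 2D grid with equal spacing dx in both directions.
// Each thread updates one interior grid node using first-order upwind
// differences; the velocity field vel (e.g. derived from compliance
// sensitivities) is assumed to be precomputed on the same grid.
#include <cuda_runtime.h>

__global__ void levelSetUpdate(const float* phi, const float* vel,
                               float* phiNew, int nx, int ny,
                               float dx, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;

    int idx = j * nx + i;

    // One-sided (backward/forward) differences in x and y.
    float dxm = (phi[idx]      - phi[idx - 1])  / dx;
    float dxp = (phi[idx + 1]  - phi[idx])      / dx;
    float dym = (phi[idx]      - phi[idx - nx]) / dx;
    float dyp = (phi[idx + nx] - phi[idx])      / dx;

    float v = vel[idx];

    // Upwind gradient magnitude (Osher-Sethian scheme), selected by the
    // sign of the velocity.
    float gradPlus  = sqrtf(fmaxf(dxm, 0.f) * fmaxf(dxm, 0.f) +
                            fminf(dxp, 0.f) * fminf(dxp, 0.f) +
                            fmaxf(dym, 0.f) * fmaxf(dym, 0.f) +
                            fminf(dyp, 0.f) * fminf(dyp, 0.f));
    float gradMinus = sqrtf(fminf(dxm, 0.f) * fminf(dxm, 0.f) +
                            fmaxf(dxp, 0.f) * fmaxf(dxp, 0.f) +
                            fminf(dym, 0.f) * fminf(dym, 0.f) +
                            fmaxf(dyp, 0.f) * fmaxf(dyp, 0.f));

    float grad = (v > 0.f) ? gradPlus : gradMinus;
    phiNew[idx] = phi[idx] - dt * v * grad;
}

int main()
{
    // Grid size, spacing and time step are placeholder values; dt must
    // satisfy a CFL condition for the explicit scheme to remain stable.
    const int nx = 256, ny = 256;
    const float dx = 1.0f, dt = 0.1f;

    float *dPhi = nullptr, *dVel = nullptr, *dPhiNew = nullptr;
    cudaMalloc(&dPhi,    nx * ny * sizeof(float));
    cudaMalloc(&dVel,    nx * ny * sizeof(float));
    cudaMalloc(&dPhiNew, nx * ny * sizeof(float));
    // ... initialize dPhi (signed distance) and dVel (design velocity) ...

    // One thread per grid node; non-parallelizable steps (e.g. solving the
    // finite element system with a sequential solver) would stay on the CPU.
    dim3 block(16, 16);
    dim3 grid((nx + block.x - 1) / block.x, (ny + block.y - 1) / block.y);
    levelSetUpdate<<<grid, block>>>(dPhi, dVel, dPhiNew, nx, ny, dx, dt);
    cudaDeviceSynchronize();

    cudaFree(dPhi); cudaFree(dVel); cudaFree(dPhiNew);
    return 0;
}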