Reducing the Size of NURBs Control Nets Using Genetic Algorithms and CUDA

Matthew J. O’Neal, Cameron J. Turner
Colorado School of Mines, Division of Engineering, 1500 Illinois Street, Golden, Colorado, USA 80401
ASME 2011 International Mechanical Engineering Congress and Exposition (IMECE2011), 2011









When defining a control net for a Non-Uniform Rational B-spline (NURBs) based metamodel from a given set of data, the desired result is the smallest set of control points, found in the least possible time, while minimizing local and/or global error. Current metamodel fitting algorithms iteratively find and eliminate the largest sources of local error, creating a very accurate control net of sub-optimal size. Since control net size is directly related to the speed at which the control net can be searched for global optima, the size must be reduced as much as possible without increasing local or global error. Current algorithms can model discontinuous portions of data by clustering numerous control points close together. This is both inefficient to search and may cause the search algorithm to become numerically unstable and crash because control points are placed too closely together. Furthermore, many current algorithms do not take advantage of the weight property of each control point. In this paper, a Genetic Algorithm (GA) is used to optimize existing control nets as well as to create new control nets from data sets. To achieve a comparable creation time for the control net, parallel programming techniques are used incorporating the CUDA GPU architecture. CUDA was chosen because it is a low-cost, highly parallel architecture available on millions of computers. The code is intended for a single desktop computer running a maximum of 4 CUDA devices, not a CUDA cluster. This approach may have fitting applications to both metamodels and geometric fitting of NURBs objects.
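To make the "weight property" point concrete, here is a minimal sketch (not the authors' code; the function names and the toy control net are illustrative assumptions) of how a rational B-spline curve is evaluated from a control net. Each control point carries a weight, and changing a weight pulls the curve toward or away from that point without adding any control points — the extra degree of freedom the abstract notes many fitting algorithms ignore.

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p.

    Evaluated at an interior parameter u (endpoint handling omitted for brevity).
    """
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, p - 1, u, knots)
    return left + right


def nurbs_point(u, ctrl, weights, degree, knots):
    """Evaluate a 2D NURBs curve: weighted basis sum divided by the weight sum."""
    x = y = den = 0.0
    for i, ((px, py), w) in enumerate(zip(ctrl, weights)):
        b = bspline_basis(i, degree, u, knots) * w
        x += b * px
        y += b * py
        den += b
    return (x / den, y / den)


# Toy degree-2 curve with a clamped knot vector and three control points.
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]

# With unit weights this is an ordinary B-spline: the midpoint is (1.0, 1.0).
print(nurbs_point(0.5, ctrl, [1.0, 1.0, 1.0], 2, knots))

# Doubling the middle weight pulls the curve toward that control point,
# raising the midpoint's y without adding any control points.
print(nurbs_point(0.5, ctrl, [1.0, 2.0, 1.0], 2, knots))
```

A GA chromosome can therefore encode weights alongside control-point positions, letting the optimizer trade weight adjustments against extra control points when shrinking the net.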

