Parallel SAT-Solving with OpenCL

Sander Beckers, Gorik De Samblanx, Floris De Smedt, Toon Goedemé, Lars Struyf, Joost Vennekens
EAVISE, Lessius Mechelen – Campus De Nayer, Belgium
Proceedings of the IADIS Applied Computing 2011, AC 2011, 2011



@inproceedings{beckers2011parallel,
   author={Beckers, S. and De Samblanx, G. and De Smedt, F. and Goedem{\'e}, T. and Struyf, L. and Vennekens, J.},
   title={Parallel SAT-Solving with OpenCL},
   booktitle={Proceedings of the IADIS Applied Computing 2011},
   year={2011}
}






In the last few decades there have been substantial improvements in approaches for solving the Boolean satisfiability problem. Many of these improvements consisted in elaborating on existing algorithms. On the side of complete solvers, this has led to more efficient branching heuristics and the use of watched literals for unit propagation; incomplete solvers, on the other hand, have benefited from random walks and from integrating evolutionary algorithms into the search process. Another line of research has combined these developments by creating hybrid solvers. More recently, the rapid development of parallel architectures has led to improvements in the implementation of algorithms as well, both for complete and for incomplete solvers. Up until now this work has been limited mostly to grids or multicore CPUs. Recent advances such as the OpenCL framework, however, make it possible to also exploit the processing power of today's GPUs, and to do so together with the CPU. Our goal is to take advantage of this by implementing a parallel version of a hybrid algorithm that has proven remarkably successful on the class of locally inconsistent SAT problems. By parallelizing local search, our solver will be competitive on consistent problems as well. The workload of the hybrid algorithm will be split between the GPU and the CPU, the former performing a massively parallel local search and the latter running an adapted version of MiniSAT, a popular complete solver. We present here the results for a first simplified implementation, and for simulations of the final solver.
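The paper's solver runs its massively parallel local search on the GPU via OpenCL; as a rough illustration of the underlying idea only (not the authors' implementation), the sketch below simulates many independent GPU work-items in plain Python, each running a WalkSAT-style local search from a different random starting assignment. All names (`walksat`, `parallel_local_search`) and parameters are hypothetical, chosen for this example.

```python
import random

def walksat(clauses, n_vars, max_flips, p=0.5, rng=None):
    """One WalkSAT-style local-search run over a CNF formula.
    Clauses are lists of nonzero ints: literal v means variable v-1 is
    true, -v means it is false. Returns a satisfying assignment (list
    of bools) or None if max_flips is exhausted."""
    rng = rng or random.Random()
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def satisfied(clause):
        return any((lit > 0) == assign[abs(lit) - 1] for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < p:
            lit = rng.choice(clause)  # random-walk step
        else:
            # greedy step: flip the literal that leaves the fewest
            # clauses unsatisfied
            def cost(l):
                v = abs(l) - 1
                assign[v] = not assign[v]
                c = sum(not satisfied(cl) for cl in clauses)
                assign[v] = not assign[v]
                return c
            lit = min(clause, key=cost)
        v = abs(lit) - 1
        assign[v] = not assign[v]
    return None

def parallel_local_search(clauses, n_vars, n_threads=64, max_flips=1000):
    """Simulate n_threads independent GPU work-items, each running
    WalkSAT from its own seed; the first satisfying assignment wins.
    On a real GPU these runs would execute concurrently."""
    for seed in range(n_threads):
        result = walksat(clauses, n_vars, max_flips,
                         rng=random.Random(seed))
        if result is not None:
            return result
    return None

# Tiny example: (x1 or x2) and (not x1 or x2) and (x1 or not x2)
model = parallel_local_search([[1, 2], [-1, 2], [1, -2]], 2)
```

In the hybrid scheme the abstract describes, runs like these would execute as GPU work-items while the CPU side runs an adapted MiniSAT in parallel.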

