Jan Sikorski
Relativistic hydrodynamics became a very useful tool in high-energy physics after Landau applied the theory to explain data on proton-proton collisions. Its later application to heavy-ion collisions has been very successful in modeling the apparent collective behaviour of the hot matter produced in such collisions. This work is part of an effort of the […]
View | Download (PDF)
D.P. Playne, K.A. Hawick
Field equations can be numerically simulated by approximating a continuous space field by a discrete lattice. There are a number of different lattice geometries that can be used to approximate continuous space which may cause structural artefacts in the simulation. These different lattice structures require the use of different stencil operators to approximate the spatial […]
View | Download (PDF)
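
As a concrete illustration of the stencil operators this abstract refers to, the following is a minimal NumPy sketch of the standard five-point Laplacian stencil on a square lattice (the function name and boundary handling are our own, not the paper's); a GPU implementation would assign one lattice site per thread:

    import numpy as np

    def laplacian_5pt(field, dx):
        # Five-point Laplacian stencil on a square lattice, with periodic
        # boundaries via np.roll. Other lattice geometries (e.g. hexagonal)
        # would use different neighbour sets and weights.
        return (np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0)
              + np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1)
              - 4.0 * field) / dx**2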
Kris T. Delaney, Glenn H. Fredrickson
We report the first CUDA graphics-processing-unit (GPU) implementation of the polymer field-theoretic simulation framework for determining fully fluctuating expectation values of equilibrium properties for periodic and select aperiodic polymer systems. Our implementation is suitable both for self-consistent field theory (mean-field) solutions of the field equations, and for fully fluctuating simulations using the complex Langevin approach. […]
View | Download (PDF)
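
The complex Langevin approach mentioned above can be demonstrated on a toy model; the sketch below is our own hypothetical example, not the paper's CUDA implementation. It evolves a single complex variable under the Gaussian action S(phi) = (a/2) phi^2 with complex coefficient a, for which the exact expectation is <phi^2> = 1/a:

    import numpy as np

    # Toy complex Langevin run for S(phi) = (a/2) * phi**2 with complex a.
    # The drift is -dS/dphi = -a*phi; the noise remains real. For Re(a) > 0
    # the long-time average of phi**2 converges to the exact value 1/a.
    a = 1.0 + 1.0j
    dt, nsteps = 1e-3, 200_000
    rng = np.random.default_rng(0)

    phi, acc = 0.0 + 0.0j, 0.0 + 0.0j
    for _ in range(nsteps):
        phi += -a * phi * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
        acc += phi * phi
    print("estimate:", acc / nsteps, " exact:", 1.0 / a)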
Daniel Peter Playne
This thesis describes a generative programming system that automatically constructs parallel simulations of complex systems that are based on field equations, using finite differencing and explicit Runge-Kutta integration methods. Programming computational simulations by hand for different parallel architectures is both tedious and time-consuming. Simulation frameworks struggle to target different architectures without losing performance. Automating […]
View | Download (PDF)
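
To make the combination of finite differencing and explicit Runge-Kutta integration concrete, here is a minimal hand-written sketch of the technique the thesis automates (our own illustration, not code generated by the thesis's system), advancing a one-dimensional diffusion equation with the classic fourth-order Runge-Kutta scheme:

    import numpy as np

    def laplacian(u, dx):
        # Second-order central finite difference, periodic boundaries.
        return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

    def rhs(u, dx, D=1.0):
        # Field equation du/dt = D * d2u/dx2 (diffusion as a simple example).
        return D * laplacian(u, dx)

    def rk4_step(u, dt, dx):
        # Classic explicit fourth-order Runge-Kutta time step.
        k1 = rhs(u, dx)
        k2 = rhs(u + 0.5 * dt * k1, dx)
        k3 = rhs(u + 0.5 * dt * k2, dx)
        k4 = rhs(u + dt * k3, dx)
        return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    dx, dt = 0.1, 0.001
    u = np.exp(-((np.arange(128) * dx - 6.4) ** 2))  # Gaussian initial profile
    for _ in range(1000):
        u = rk4_step(u, dt, dx)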
Tyler Killian, Daniel L. Faircloth, Sadasiva M. Rao
In this paper, we have shown that exploiting the GPU’s massively parallel architecture can dramatically increase the speed of MoM calculations. While the code can certainly be improved, matrix-fill speed-up factors between 150X and 260X are already common. The conjugate gradient solver can still be improved as of this writing, but it nonetheless results […]
View | Download (PDF)
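
The paper's solver is not reproduced here; the sketch below is a generic CPU-side conjugate gradient iteration for a symmetric positive-definite system A x = b (MoM systems are complex-valued in general, so treat this purely as an outline of the algorithm). On a GPU, the dense matrix-vector product A @ p is the step that benefits most from parallelization:

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-8, max_iter=None):
        # Minimal conjugate gradient for symmetric positive-definite A.
        max_iter = max_iter or 10 * b.shape[0]
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x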
A. Leist, D. P. Playne, K. A. Hawick
Graphical processing units (GPUs) have recently attracted attention for scientific applications such as particle simulations. This is partially driven by low commodity pricing of GPUs but also by recent toolkit and library developments that make them more accessible to scientific programmers. We discuss the application of GPU programming to two significantly different paradigms - regular mesh field […]
View | Download (PDF)
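
As a counterpart to the regular-mesh stencil shown earlier, the particle simulations mentioned in this abstract typically reduce to an all-pairs interaction kernel. A minimal O(N^2) NumPy sketch of such a kernel follows (our own illustration; the softening parameter eps is hypothetical); a GPU would map one particle per thread:

    import numpy as np

    def pairwise_forces(pos, eps=1e-3):
        # All-pairs inverse-square forces for unit-mass particles.
        # pos has shape (N, 3); the result is the net force on each particle.
        diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]  # (N, N, 3)
        r2 = (diff ** 2).sum(axis=-1) + eps                   # softened r^2
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                         # no self-force
        return (diff * inv_r3[:, :, np.newaxis]).sum(axis=1)  # (N, 3)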

* * *

Free GPU computing nodes at hgpu.org

Registered users can now run their OpenCL applications at hgpu.org. We provide one minute of compute time per run on two nodes: one equipped with two AMD GPUs, the other with one AMD and one nVidia GPU (a device-enumeration sketch follows the platform list below). There is no restriction on the number of runs.

The platforms are:

Node 1
  • GPU device 0: AMD/ATI Radeon HD 5870 2GB, 850MHz
  • GPU device 1: AMD/ATI Radeon HD 6970 2GB, 880MHz
  • CPU: AMD Phenom II X6 1055T @ 2.8GHz
  • RAM: 12GB
  • OS: OpenSUSE 13.1
  • SDK: AMD APP SDK 2.9
Node 2
  • GPU device 0: AMD/ATI Radeon HD 7970 3GB, 1000MHz
  • GPU device 1: nVidia GeForce GTX 560 Ti 2GB, 822MHz
  • CPU: Intel Core i7-2600 @ 3.4GHz
  • RAM: 16GB
  • OS: OpenSUSE 12.2
  • SDK: nVidia CUDA Toolkit 6.0.1, AMD APP SDK 2.9
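
Before uploading a project, it can be useful to check which platforms and devices the OpenCL runtime reports. The sketch below uses pyopencl for brevity (the nodes accept complete OpenCL projects rather than Python scripts, so this is only illustrative and assumes pyopencl is installed locally):

    import pyopencl as cl

    # Print every OpenCL platform and device the runtime can see,
    # e.g. the AMD APP and nVidia CUDA platforms on the nodes above.
    for platform in cl.get_platforms():
        print("Platform:", platform.name)
        for device in platform.get_devices():
            print("  Device:", device.name,
                  "| global mem:", device.global_mem_size // 2**20, "MiB",
                  "| compute units:", device.max_compute_units)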

A completed OpenCL project should be uploaded via the User dashboard (see the instructions and example there); terminal output logs from compilation and execution will be provided to the user.

The information sent to hgpu.org will be treated according to our Privacy Policy.

HGPU group © 2010-2014 hgpu.org

All rights belong to the respective authors
