Samuel W. Skillman, Michael S. Warren, Matthew J. Turk, Risa H. Wechsler, Daniel E. Holz, P. M. Sutter
The Dark Sky Simulations are an ongoing series of cosmological N-body simulations designed to provide a quantitative and accessible model of the evolution of the large-scale Universe. Such models are essential for many aspects of the study of dark matter and dark energy, since we lack a sufficiently accurate analytic model of non-linear gravitational clustering. […]
Christopher Schuster
SAT solving strategies that perform backtracking or clause learning are usually difficult to implement efficiently on massively parallel architectures, because the necessary synchronization does not scale linearly with the number of processors available. Strategies like Lookahead Solving and Cube and Conquer are more promising. In order to evaluate a potential GPU implementation of Cube and Conquer, […]
Alessandro Dal Pal'u, Agostino Dovier, Andrea Formisano, Enrico Pontelli
The parallel computing power offered by Graphical Processing Units (GPUs) has recently been exploited to support general-purpose applications, by exploiting the availability of general APIs and the SIMT-style parallelism present in several classes of problems (e.g., numerical simulations, matrix manipulations) where relatively simple computations need to be applied to all items in large sets […]
Xinxin Mei, Kaiyong Zhao, Chengjian Liu, Xiaowen Chu
Memory access efficiency is a key factor in fully exploiting the computational power of Graphics Processing Units (GPUs). However, many details of the GPU memory hierarchy are not released by the vendors. We propose a novel fine-grained benchmarking approach and apply it to two popular GPUs, namely Fermi and Kepler, to expose the previously unknown […]
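Fine-grained memory benchmarks of this kind are typically built on the pointer-chase (P-chase) idea, in which each load depends on the result of the previous one, so that average per-access latency can be read off the total time. The plain-C sketch below illustrates only that general technique on the host side; it is not the authors' GPU benchmark code, and the array size, stride and iteration count are arbitrary assumptions.

  /* Minimal CPU-side sketch of the pointer-chase (P-chase) technique.
   * Illustration of the general method only, not the paper's GPU code. */
  #define _POSIX_C_SOURCE 199309L
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define N (1 << 20)        /* elements in the chase array (assumed) */
  #define STRIDE 64          /* elements skipped per hop (assumed)    */
  #define ITERS (1 << 24)    /* number of dependent loads to time     */

  int main(void) {
      size_t *next = malloc(N * sizeof *next);
      if (!next) return 1;
      /* Build a strided cyclic chain: next[i] points STRIDE elements ahead. */
      for (size_t i = 0; i < N; ++i)
          next[i] = (i + STRIDE) % N;

      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      size_t p = 0;
      for (long i = 0; i < ITERS; ++i)
          p = next[p];       /* dependent load: hops cannot be overlapped */
      clock_gettime(CLOCK_MONOTONIC, &t1);

      double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
      /* Print p so the compiler cannot optimise the chase away. */
      printf("avg latency: %.2f ns per access (final index %zu)\n",
             ns / ITERS, p);
      free(next);
      return 0;
  }

Varying STRIDE and the array size and watching where the measured latency jumps is how such benchmarks infer cache line sizes and hierarchy levels.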
Aditya Deshpande
In earlier times, computer systems had only a single core or processor. In these computers, the number of transistors on-chip (i.e. on the processor) doubled every two years and all applications enjoyed free speedup. Subsequently, with more and more transistors being packed on-chip, power consumption became an issue, frequency scaling reached its limits and industry […]
Charalampos S. Kouzinopoulos, John-Alexander M. Assael, Themistoklis K. Pyrgiotis, Konstantinos G. Margaritis
Multiple matching algorithms are used to locate the occurrences of patterns from a finite pattern set in a large input string. Aho-Corasick and Wu-Manber, two of the most well-known algorithms for multiple matching, require increased computing power, particularly in cases where large datasets must be processed, as is common in computational biology applications. […]
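To make the problem concrete, the brute-force baseline below (plain C, not code from the paper) reports every occurrence of every pattern by rescanning the text once per pattern; automaton-based algorithms such as Aho-Corasick exist precisely to avoid this repeated scanning.

  /* Naive multiple-matching baseline: brute force, one scan per pattern. */
  #include <stdio.h>
  #include <string.h>

  static void naive_multi_match(const char *text, const char **patterns, int np) {
      size_t n = strlen(text);
      for (int p = 0; p < np; ++p) {
          size_t m = strlen(patterns[p]);
          for (size_t i = 0; m > 0 && i + m <= n; ++i)
              if (memcmp(text + i, patterns[p], m) == 0)
                  printf("pattern \"%s\" at offset %zu\n", patterns[p], i);
      }
  }

  int main(void) {
      /* Hypothetical pattern set and text, in the spirit of biological data. */
      const char *patterns[] = { "ACGT", "GGC", "TTA" };
      naive_multi_match("ACGTTACGGCACGT", patterns, 3);
      return 0;
  }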
Hasse Lovgren
CONTEXT: Reinforcement Learning (RL) is time consuming and requires substantial computational power as well. There are mainly two approaches to improving RL efficiency: the theoretical mathematics and algorithmic approach, and the practical implementation approach. In this study, the approaches are combined in an attempt to reduce time consumption. OBJECTIVES: We investigate […]
Bartosz D. Wozniak
Matrix multiplication is a fundamental linear algebra routine ubiquitous in all areas of science and engineering. Highly optimised BLAS libraries (cuBLAS and clBLAS on GPUs) are the most popular choices for an implementation of the General Matrix Multiply (GEMM) in software. However, performance of library GEMM is poor for small matrix sizes. In this thesis […]
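For reference, GEMM computes C = alpha*A*B + beta*C. The naive row-major C routine below states that definition directly; it is only a sketch of the operation whose small-size performance is at issue, not code from the thesis or from cuBLAS/clBLAS.

  /* Reference GEMM: C = alpha*A*B + beta*C, row-major.
   * A is m x k, B is k x n, C is m x n. */
  #include <stddef.h>

  void gemm_ref(size_t m, size_t n, size_t k,
                float alpha, const float *A, const float *B,
                float beta, float *C) {
      for (size_t i = 0; i < m; ++i) {
          for (size_t j = 0; j < n; ++j) {
              float acc = 0.0f;
              for (size_t l = 0; l < k; ++l)
                  acc += A[i * k + l] * B[l * n + j];
              C[i * n + j] = alpha * acc + beta * C[i * n + j];
          }
      }
  }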
M. Guthe
Despite the potential divergence of depth-first ray tracing [AL09], it is nevertheless the most efficient approach on massively parallel graphics processors. Due to the use of specialized caching strategies that were originally developed for texture access, it has been shown to be compute rather than bandwidth limited. Especially with recent developments, however, not only the […]
Johann A. Briffa, Stephan Wesemeyer
In this study, we present SimCommSys, a simulator of communication systems that we are releasing under an open source license. The core of the project is a set of C++ libraries defining communication system components and a distributed Monte Carlo simulator. Of principal interest is the error-control coding component, where various kinds of […]
Jorge F. Fabeiro, Diego Andrade, Basilio B. Fraguela, Ramon Doallo
Heterogeneous systems are becoming increasingly common. Relatedly, the popularity of OpenCL is growing, as it provides a unified means to program a wide variety of devices, including GPUs and multicore CPUs. More recently, the Heterogeneous Programming Library (HPL) targets the same variety of systems as OpenCL, intending to improve their programmability. The main drawback of […]
Arne Vansteenkiste, Jonathan Leliaert, Mykola Dvornik, Felipe Garcia-Sanchez, Bartel Van Waeyenberge
We report on the design, verification and performance of mumax3, an open-source GPU-accelerated micromagnetic simulation program. This software solves the time- and space-dependent magnetization evolution in nano- to micro-scale magnets using a finite-difference discretization. Its high performance and low memory requirements allow for large-scale simulations to be performed in limited time and on […]

* * *


Free GPU computing nodes at hgpu.org

Registered users can now run their OpenCL applications at hgpu.org. We provide 1 minute of compute time per run on two nodes equipped with AMD and nVidia graphics processing units (see the platform details below). There are no restrictions on the number of runs.

The platforms are:

Node 1
  • GPU device 0: AMD/ATI Radeon HD 5870 2GB, 850MHz
  • GPU device 1: AMD/ATI Radeon HD 6970 2GB, 880MHz
  • CPU: AMD Phenom II X6 @ 2.8GHz 1055T
  • RAM: 12GB
  • OS: OpenSUSE 13.1
  • SDK: AMD APP SDK 2.9
Node 2
  • GPU device 0: AMD/ATI Radeon HD 7970 3GB, 1000MHz
  • GPU device 1: nVidia GeForce GTX 560 Ti 2GB, 822MHz
  • CPU: Intel Core i7-2600 @ 3.4GHz
  • RAM: 16GB
  • OS: OpenSUSE 12.2
  • SDK: nVidia CUDA Toolkit 6.0.1, AMD APP SDK 2.9

A completed OpenCL project should be uploaded via the User dashboard (see instructions and an example there); compilation and execution terminal output logs will be provided to the user.
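As a rough illustration of the kind of self-contained OpenCL program that could be submitted, the C listing below simply enumerates the platforms and devices visible on a node. The file name and build line are assumptions, not hgpu.org requirements.

  /* Minimal OpenCL host program in C: list all platforms and devices.
   * Shown only as an example of a self-contained submission. */
  #include <stdio.h>
  #include <CL/cl.h>

  int main(void) {
      cl_platform_id platforms[8];
      cl_uint num_platforms = 0;
      if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS) {
          fprintf(stderr, "No OpenCL platforms found\n");
          return 1;
      }
      for (cl_uint p = 0; p < num_platforms; ++p) {
          char pname[256];
          clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                            sizeof(pname), pname, NULL);
          printf("Platform %u: %s\n", p, pname);

          cl_device_id devices[8];
          cl_uint num_devices = 0;
          if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                             8, devices, &num_devices) != CL_SUCCESS)
              continue;
          for (cl_uint d = 0; d < num_devices; ++d) {
              char dname[256];
              cl_ulong mem = 0;
              clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                              sizeof(dname), dname, NULL);
              clGetDeviceInfo(devices[d], CL_DEVICE_GLOBAL_MEM_SIZE,
                              sizeof(mem), &mem, NULL);
              printf("  Device %u: %s (%lu MB global memory)\n",
                     d, dname, (unsigned long)(mem / (1024 * 1024)));
          }
      }
      return 0;
  }

Such a program should build against either the AMD APP SDK or the nVidia CUDA Toolkit OpenCL headers listed above, e.g. with "gcc list_devices.c -lOpenCL" (file name assumed for illustration).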

The information sent to hgpu.org will be treated according to our Privacy Policy.

HGPU group © 2010-2014 hgpu.org

All rights belong to the respective authors
