Di Zhao
High-accuracy optimization is a key component of time-sensitive applications in computer science such as machine learning, and in our previous research we developed single-GPU Iterative Discrete Approximation Monte Carlo Optimization (IDA-MCS) and multi-GPU IDA-MCS. However, because of the memory capacity constraint of GPUs in a workstation, single-GPU IDA-MCS and multi-GPU IDA-MCS may be in low […]
View | Download (PDF)
Tomas Ekeberg, Stefan Engblom, Jing Liu
The classical method of determining the atomic structure of complex molecules by analyzing diffraction patterns is currently undergoing drastic developments. Modern techniques for producing extremely bright and coherent X-ray lasers allow a beam of streaming particles to be intercepted and hit by an ultrashort high-energy X-ray beam. Through machine learning methods the data thus […]
View | Download (PDF)
Karthikeyan Vaidyanathan, Kiran Pamnany, Dhiraj D. Kalamkar, Alexander Heinecke, Mikhail Smelyanskiy, Jongsoo Park, Daehyun Kim, Aniruddha Shet G, Bharat Kaul, Balint Joo, Pradeep Dubey
Intel Xeon Phi coprocessor-based clusters offer high compute and memory performance for parallel workloads and also support direct network access. Many real-world applications are significantly impacted by network characteristics, and to maximize the performance of such applications on these clusters, it is particularly important to effectively saturate network bandwidth and/or hide communication latency. We […]
View | Download (PDF)
Jianting Zhang, Dali Wang
Hardware accelerators are playing increasingly important roles in achieving the desired performance from desktop to cluster computing. While General Purpose computing on Graphics Processing Units (GPGPU) technologies have been widely applied to compute-intensive applications, there is relatively little work on using GPUs and GPU-accelerated clusters for data-intensive computing that typically involves significant irregular data […]
View | Download (PDF)
Robert B. Gramacy, Jarad Niemi, Robin Weiss
We explore how the big-three computing paradigms — symmetric multi-processor (SMC), graphical processing units (GPUs), and cluster computing — can together be brought to bear on large-data Gaussian processes (GP) regression problems via a careful implementation of a newly developed local approximation scheme. Our methodological contribution focuses primarily on GPU computation, as this requires the […]
View | Download (PDF)
Chao-Tung Yang, Wei-Shen Ou
At present, NVIDIA's CUDA enables programmers to develop highly parallel applications. It utilizes several parallel construct concepts: hierarchical thread blocks, shared memory, and barrier synchronization. Programs developed with CUDA can achieve impressive acceleration. The graphics processor is able to play an important role in cloud computing in a cluster environment, because it […]
View | Download (PDF)
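The abstract above names CUDA's core parallel constructs: hierarchical thread blocks, shared memory, and barrier synchronization. As a minimal illustrative sketch only (not code from the paper), the kernel below shows the directly analogous OpenCL constructs usable on the nodes listed further down this page: work-groups in place of thread blocks, __local memory in place of shared memory, and barrier() in place of __syncthreads(). The kernel name and arguments are hypothetical.

/* Illustrative OpenCL C kernel (not from the paper above): a per-work-group
 * sum reduction. Work-groups play the role of CUDA thread blocks, __local
 * memory the role of CUDA shared memory, and barrier() the role of
 * __syncthreads(). Assumes the work-group size is a power of two. */
__kernel void block_sum(__global const float *in,
                        __global float *partial_sums,
                        __local float *scratch,      /* one float per work-item */
                        const unsigned int n)
{
    size_t gid = get_global_id(0);   /* global work-item index   */
    size_t lid = get_local_id(0);    /* index within the group   */
    size_t lsz = get_local_size(0);  /* work-group size          */

    /* Each work-item stages one element in fast local memory. */
    scratch[lid] = (gid < n) ? in[gid] : 0.0f;
    barrier(CLK_LOCAL_MEM_FENCE);    /* wait for the whole group */

    /* Tree reduction inside the work-group. */
    for (size_t stride = lsz / 2; stride > 0; stride /= 2) {
        if (lid < stride)
            scratch[lid] += scratch[lid + stride];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    /* Work-item 0 writes this group's partial sum. */
    if (lid == 0)
        partial_sums[get_group_id(0)] = scratch[0];
}

In CUDA terms the same per-block reduction would index with threadIdx/blockIdx, stage data in a __shared__ array, and synchronize with __syncthreads().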
Jens Breitbart
It is expected that the first exascale supercomputer will be deployed within the next 10 years; however, neither its CPU architecture nor its programming model is known yet. Multicore CPUs are not expected to scale to the required number of cores per node, but hybrid multicore CPUs consisting of different kinds of processing elements are […]
View | Download (PDF)
Tiago Filipe Rodrigues Ribeiro
In the last few years, the processing capabilities of computing systems have increased significantly, moving from single-core to multi-core and even many-core systems. Accompanying this evolution, local networks have also become faster, with multi-gigabit technologies like InfiniBand, Myrinet and 10G Ethernet. Parallel/distributed programming tools and standards, like POSIX Threads, OpenMP and MPI, have helped to explore […]
View | Download (PDF)
Mohamed Khalifa
In the 1980s, people believed that computers would help to create faster and more efficient processors, but parallel processing challenged that idea: it joined two or more computers together to solve a problem jointly. The trend in the 1990s was to move away from expensive supercomputers towards networked computers such as PCs or workstations. It […]
View | Download (PDF)
Bruno Gouvea de Barros, Rafael Sachetto Oliveira, Wagner Meira Jr., Marcelo Lobosco, Rodrigo Weber dos Santos
Key aspects of cardiac electrophysiology, such as slow conduction, conduction block, and saltatory effects, have been the research topic of many studies since they are strongly related to cardiac arrhythmia, reentry, fibrillation, or defibrillation. However, to reproduce these phenomena the numerical models need to use subcellular discretization for the solution of the PDEs and nonuniform, […]
View | Download (PDF)
Albano Alves, Jose Rufino, Antonio Pina, Luis Paulo Santos
Clusters that combine heterogeneous compute device architectures, coupled with novel programming models, have created a true alternative to traditional (homogeneous) cluster computing, making it possible to leverage the performance of parallel applications. In this paper we introduce clOpenCL, a platform that supports the simple deployment and efficient running of OpenCL-based parallel applications that may span several cluster […]
View | Download (PDF)
Heru Suhartanto, Arry Yanuar, Ari Wibisono
One application that needs high-performance computing resources is molecular dynamics. Several software packages perform molecular dynamics simulations; one of these is the well-known GROMACS. Our previous experiments simulating the molecular dynamics of Indonesian-grown herbal compounds showed sufficient speedup in a 32-node cluster computing environment. In order to […]
View | Download (PDF)
Page 1 of 3

* * *

* * *

Free GPU computing nodes at hgpu.org

Registered users can now run their OpenCL applications at hgpu.org. We provide one minute of compute time per run on two nodes equipped with AMD and nVidia graphics processing units, as listed below. There are no restrictions on the number of runs.

The platforms are:

Node 1
  • GPU device 0: AMD/ATI Radeon HD 5870 2GB, 850MHz
  • GPU device 1: AMD/ATI Radeon HD 6970 2GB, 880MHz
  • CPU: AMD Phenom II X6 1055T @ 2.8GHz
  • RAM: 12GB
  • OS: OpenSUSE 13.1
  • SDK: AMD APP SDK 2.9
Node 2
  • GPU device 0: AMD/ATI Radeon HD 7970 3GB, 1000MHz
  • GPU device 1: nVidia GeForce GTX 560 Ti 2GB, 822MHz
  • CPU: Intel Core i7-2600 @ 3.4GHz
  • RAM: 16GB
  • OS: OpenSUSE 12.2
  • SDK: nVidia CUDA Toolkit 6.0.1, AMD APP SDK 2.9

Completed OpenCL projects should be uploaded via the User dashboard (see the instructions and example there); compilation and execution terminal output logs will be provided to the user.
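
As a rough sketch of what a minimal completed project might contain (the kernel, device selection, and build line below are illustrative assumptions, not the official dashboard example), a self-contained C host program that picks the first GPU device, builds a tiny kernel, runs it, and prints the result to stdout, so that it shows up in the execution log, could look like this:

/* Minimal illustrative OpenCL host program (not the official hgpu.org example).
 * Picks the first platform/GPU device, builds a small kernel from source,
 * runs it, and prints the result to stdout so it appears in the execution log.
 * A typical build line would be: gcc main.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *v, const float a) {\n"
    "    size_t i = get_global_id(0);\n"
    "    v[i] *= a;\n"
    "}\n";

int main(void)
{
    enum { N = 8 };
    float data[N];
    for (int i = 0; i < N; ++i) data[i] = (float)i;

    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    err  = clGetPlatformIDs(1, &platform, NULL);
    err |= clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    if (err != CL_SUCCESS) { fprintf(stderr, "no OpenCL GPU device found\n"); return 1; }

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    /* Build the kernel from the embedded source string. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    if (clBuildProgram(prog, 1, &device, NULL, NULL, NULL) != CL_SUCCESS) {
        fprintf(stderr, "kernel build failed\n");
        return 1;
    }
    cl_kernel kern = clCreateKernel(prog, "scale", &err);

    /* Copy the input to the device, run the kernel, read the result back. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, &err);
    float factor = 2.0f;
    clSetKernelArg(kern, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kern, 1, sizeof(float), &factor);

    size_t global = N;
    clEnqueueNDRangeKernel(queue, kern, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    for (int i = 0; i < N; ++i) printf("%.1f ", data[i]);  /* expect 0.0 2.0 ... 14.0 */
    printf("\n");

    /* Release OpenCL objects. */
    clReleaseMemObject(buf); clReleaseKernel(kern); clReleaseProgram(prog);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
    return 0;
}

The actual upload format, required file layout, and build settings are those given in the User dashboard instructions and example referenced above.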

The information sent to hgpu.org will be treated according to our Privacy Policy.

HGPU group © 2010-2014 hgpu.org

All rights belong to the respective authors

Contact us: