Clémentine Maurice, Christoph Neumann, Olivier Heen, Aurélien Francillon
General-Purpose computing on Graphics Processing Units (GPGPU) combined with cloud computing is already a commercial success. However, there is little literature that investigates its security implications. Our objective is to highlight possible information leakage due to GPUs in virtualized and cloud computing environments. We provide insight into the different GPU virtualization techniques, along with their […]
Andrew J. Younge, John Paul Walters, Steve Crago, Geoffrey C. Fox
With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for its technical computing needs. This is due to the relative scalability, ease of use, and advanced user-environment customization abilities that clouds provide, as well as the many novel computing paradigms available for data-intensive applications. However, there is […]
Mathias Gottschlag, Marius Hillenbrand, Jens Kehne, Jan Stoess, Frank Bellosa
Over the last few years, running high performance computing applications in the cloud has become feasible. At the same time, GPGPUs are delivering unprecedented performance for HPC applications. Cloud providers thus face the challenge to integrate GPGPUs into their virtualized platforms, which has proven difficult for current virtualization stacks. In this paper, we present LoGV, […]
Samuel Schlachter, Stephen Herbein, Michela Taufer, Shuching Ou, Sandeep Patel, Jeremy S. Logan
Efficiently studying the formation of Sodium Dodecyl Sulfate (SDS) molecules at different molar concentrations on high-end GPU clusters whose nodes share accelerators exposes us to several challenges, including the need to dynamically adapt job lengths. Neither virtualization nor lightweight OS solutions can easily support generality, portability, and maintainability in concert. Our solution complements […]
Chao-Tung Yang, Wei-Shen Ou
At present, NVIDIA’s CUDA enables programmers to develop highly parallel applications. It builds on a few parallel constructs: hierarchical thread blocks, shared memory, and barrier synchronization. Programs developed with CUDA can achieve remarkable acceleration. The graphics processor can play an important role in cloud computing in a cluster environment, because it […]
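The parallel constructs this abstract attributes to CUDA (hierarchical thread blocks, shared memory, barrier synchronization) have direct OpenCL counterparts (work-groups, __local memory, barrier()). As an illustration only, not code from the paper, here is a minimal OpenCL C kernel sketch that uses all three; the kernel name and the work-group size of 256 are arbitrary assumptions:

    /* Illustrative OpenCL C kernel (not from the cited paper): each work-group,
     * the analogue of a CUDA thread block, reduces 256 elements of `in` into one
     * partial sum using __local (shared) memory and barrier synchronization. */
    __kernel void partial_sum(__global const float *in,
                              __global float *out,
                              const unsigned int n)
    {
        __local float scratch[256];           /* shared within one work-group */
        const size_t lid = get_local_id(0);   /* thread index inside the group */
        const size_t gid = get_global_id(0);  /* global thread index */

        scratch[lid] = (gid < n) ? in[gid] : 0.0f;
        barrier(CLK_LOCAL_MEM_FENCE);         /* analogue of __syncthreads() */

        /* Tree reduction within the work-group (local size assumed to be 256). */
        for (size_t stride = get_local_size(0) / 2; stride > 0; stride /= 2) {
            if (lid < stride)
                scratch[lid] += scratch[lid + stride];
            barrier(CLK_LOCAL_MEM_FENCE);
        }

        if (lid == 0)
            out[get_group_id(0)] = scratch[0];
    }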
Alvaro Luiz Fazenda, Celso L. Mendes, Laxmikant V. Kale, Jairo Panetta, Eduardo Rocha Rodrigues
The dynamic load-balancing framework in Charm++/AMPI, developed at the University of Illinois, is based on using processor virtualization to allow thread migration across processors. This framework has been successfully applied to many scientific applications in the past, such as BRAMS, NAMD, ChaNGa, and others. Most of these applications use only CPUs to perform their operations. […]
C. Reaño, R. Mayo, E. S. Quintana-Ortí, F. Silla, J. Duato, A. J. Peña
The use of GPUs to accelerate general-purpose scientific and engineering applications is mainstream today, but their adoption in current high-performance computing clusters is impaired primarily by acquisition costs and power consumption. Therefore, the benefits of sharing a reduced number of GPUs among all the nodes of a cluster can be remarkable for many applications. This […]
Mathias Gottschlag
Recently, cloud computing providers have started to offer virtual machines specifically for high performance computing as a service (HPCaaS). The cloud computing providers usually employ virtualization as an abstraction layer between the application software and the underlying hardware. Virtualization allows flexible migration between physical systems, which is a requirement for many load balancing techniques. In […]
Kuan-Ching Li, Keunsoo Kim, Won W. Ro, Tien-Hsiung Weng, Che-Lun Hung, Chen-Hao Ku, Albert Cohen, Jean-Luc Gaudiot
In this research, we investigate a dynamic energy-aware management framework for the execution of independent workloads (e.g., bag-of-tasks) on hybrid CPU-GPU PARA-computing platforms. The framework aims to schedule workloads concurrently on the appropriate computing resources, balancing the use of purely virtual, purely physical, or hybrid selections of resources, to achieve […]
Palden Lama, Yan Li, Ashwin M. Aji, Pavan Balaji, James Dinan, Shucai Xiao, Yunquan Zhang, Wu-chun Feng, Rajeev Thakur, Xiaobo Zhou
Power-hungry Graphics processing unit (GPU) accelerators are ubiquitous in high performance computing data centers today. GPU virtualization frameworks introduce new opportunities for effective management of GPU resources by decoupling them from application execution. However, power management of GPU-enabled server clusters faces significant challenges. The underlying system infrastructure shows complex power consumption characteristics depending on the […]
Ahmet Erdem Sariyuce, Kamer Kaya, Erik Saule, Umit V. Catalyurek
The betweenness centrality metric has long been of interest for graph analysis and is used in various applications. Yet, it is one of the most computationally expensive kernels in graph mining. In this work, we investigate a set of techniques to make the betweenness computations faster on GPUs as well as on heterogeneous CPU/GPU architectures. Our techniques […]
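For reference, and not taken from the paper above, betweenness centrality has the standard definition

    BC(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}

where \sigma_{st} is the number of shortest paths between vertices s and t, and \sigma_{st}(v) is the number of those paths that pass through v. Computing it exactly requires a shortest-path computation rooted at every vertex, which is why it is among the most expensive graph-mining kernels.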
Kristoffer Robin Stokke
In modern computing, the Graphical Processing Unit (GPU) has proven its worth beyond that of graphics rendering. Its usage is extended into the field of general purpose computing, where applications exploit the GPU’s massive parallelism to accelerate their tasks. Meanwhile, Virtual Machines (VM) continue to provide utility and security by emulating entire computer hardware platforms […]

* * *

Free GPU computing nodes at hgpu.org

Registered users can now run their OpenCL applications at hgpu.org. We provide 1 minute of computer time per run on two nodes equipped with AMD and nVidia graphics processing units (see the platform details below). There are no restrictions on the number of runs.

The platforms are

Node 1
  • GPU device 0: AMD/ATI Radeon HD 5870 2GB, 850MHz
  • GPU device 1: AMD/ATI Radeon HD 6970 2GB, 880MHz
  • CPU: AMD Phenom II X6 1055T @ 2.8GHz
  • RAM: 12GB
  • OS: OpenSUSE 11.4
  • SDK: AMD APP SDK 2.8
Node 2
  • GPU device 0: AMD/ATI Radeon HD 7970 3GB, 1000MHz
  • GPU device 1: nVidia GeForce GTX 560 Ti 2GB, 822MHz
  • CPU: Intel Core i7-2600 @ 3.4GHz
  • RAM: 16GB
  • OS: OpenSUSE 12.2
  • SDK: nVidia CUDA Toolkit 5.0.35, AMD APP SDK 2.8

The completed OpenCL project should be uploaded via the User dashboard (see the instructions and example there); compilation and execution terminal output logs will be provided to the user.
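As an unofficial illustration of the kind of minimal project that could be uploaded, the following C host program simply enumerates the OpenCL platforms and devices it finds; on the nodes above it should report the AMD and nVidia devices listed. The file name, the build command, and the fixed array sizes are assumptions, not part of the service description:

    /* devices.c - minimal OpenCL host program: list platforms and devices.
     * Illustrative only; build e.g. with `gcc devices.c -lOpenCL`. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;

        if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS) {
            fprintf(stderr, "No OpenCL platforms found\n");
            return 1;
        }

        for (cl_uint p = 0; p < num_platforms; p++) {
            char pname[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                              sizeof(pname), pname, NULL);
            printf("Platform %u: %s\n", p, pname);

            cl_device_id devices[8];
            cl_uint num_devices = 0;
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                               8, devices, &num_devices) != CL_SUCCESS)
                continue;

            for (cl_uint d = 0; d < num_devices; d++) {
                char dname[256];
                cl_ulong mem = 0;
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                sizeof(dname), dname, NULL);
                clGetDeviceInfo(devices[d], CL_DEVICE_GLOBAL_MEM_SIZE,
                                sizeof(mem), &mem, NULL);
                printf("  Device %u: %s, %llu MB global memory\n",
                       d, dname, (unsigned long long)(mem / (1024 * 1024)));
            }
        }
        return 0;
    }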

The information sent to hgpu.org will be treated according to our Privacy Policy.

HGPU group © 2010-2014 hgpu.org

All rights belong to the respective authors

Contact us: contact@hgpu.org