
Posts

Jul, 24

Accelerating Protein Structure Prediction using Particle Swarm Optimization on GPU

Protein tertiary structure prediction (PSP) is one of the most challenging problems in bioinformatics. Different methods have been introduced to solve this problem so far, but PSP is computationally intensive and belongs to the NP-hard class. One of the best ways to accelerate PSP is to use a massively parallel processing architecture, such as graphical […]
Jul, 24

Efficient Convolutional Patch Networks for Scene Understanding

In this paper, we present convolutional patch networks, which are convolutional (neural) networks (CNNs) trained to distinguish different image patches and which can be used for pixel-wise labeling. We show how to easily learn spatial priors for certain categories jointly with their appearance. Experiments for urban scene understanding demonstrate state-of-the-art results on the LabelMeFacade dataset. […]
Jul, 24

Comparison between GPU and parallel CPU optimizations in viewshed analysis

In this study, parallel CPU implementations of a viewshed algorithm, using both multithreading and SIMD vectorization, were compared with GPU implementations. The results show that parallelism is essential for achieving good performance on a CPU, and that data transfer can be partly overlapped with computation to hide some of the overheads in GPU […]
Jul, 24

An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition

Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. […]
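As a rough illustration of the three-stage pipeline described above (not the paper's exact model, and with purely illustrative layer sizes), such an architecture can be sketched in PyTorch: a small CNN extracts a feature sequence from the text-line image, a bidirectional LSTM models that sequence, and a linear layer produces per-timestep character scores suitable for CTC-style transcription.

import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, num_chars):
        super().__init__()
        # feature extraction: shrink the image, keeping width as the "time" axis
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        # sequence modeling over the width dimension
        self.rnn = nn.LSTM(128 * 8, 256, bidirectional=True, batch_first=True)
        # transcription: one score per character class, plus one for the CTC blank
        self.fc = nn.Linear(512, num_chars + 1)

    def forward(self, x):              # x: (batch, 1, 32, W) grayscale text line
        f = self.cnn(x)                # (batch, 128, 8, W/2)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # (batch, W/2, 128*8)
        seq, _ = self.rnn(f)
        return self.fc(seq).log_softmax(-1)              # per-timestep log-probabilities

scores = TinyCRNN(num_chars=36)(torch.randn(2, 1, 32, 100))
print(scores.shape)                    # torch.Size([2, 50, 37])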
Jul, 24

Massively Deep Artificial Neural Networks for Handwritten Digit Recognition

Greedily trained Restricted Boltzmann Machines yield a fairly low 0.72% error rate on the famous MNIST database of handwritten digits. All that was required to achieve this result was a large number of hidden layers consisting of many neurons, and a graphics card to greatly speed up learning.
Jul, 22

On the use of deep Boltzmann machines for road signs classification

The Deep Boltzmann Machine (DBM) has proved to be one of the most effective generative machine learning models in discriminative tasks. DBMs have been able to outperform other generative, and even discriminative, models on relatively simple tasks such as digit recognition, as well as on more complex tasks such as simple object recognition. However, there are only […]
Jul, 22

Boosting GPU Virtualization Performance with Hybrid Shadow Page Tables

The increasing adoption of the Graphics Processing Unit (GPU) for computation-intensive workloads has stimulated a new computing paradigm called the GPU cloud (e.g., Amazon’s GPU Cloud), which necessitates sharing GPU resources among multiple tenants in a cloud. However, state-of-the-art GPU virtualization techniques such as gVirt still suffer from non-trivial performance overhead for graphics-memory-intensive workloads […]
Jul, 22

Raspberry Pi based System for Visual Object Detection and Tracking

The aim of this thesis is to explore different methods for helping computers interpret the real world visually, investigate solutions to those methods offered by the open-sourced computer vision library, OpenCV, and implement some of these in a Raspberry Pi based application for detecting and keeping track of objects. The main focus rests on the […]
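For readers unfamiliar with OpenCV, a minimal color-thresholding detector and tracker of the kind such an application might start from can be sketched as follows. This is an illustrative assumption rather than the thesis's implementation; it assumes the OpenCV 4.x Python bindings and a camera available as device 0 (e.g., a Raspberry Pi camera exposed through V4L2).

import cv2

cap = cv2.VideoCapture(0)                        # open the default camera
lower, upper = (100, 150, 50), (130, 255, 255)   # HSV range for a blue object (assumed target)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)        # keep only pixels in the target color range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # track the largest blob by drawing its bounding box
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # press q to quit
        break

cap.release()
cv2.destroyAllWindows()

Keeping the per-frame work down to one color threshold and one contour pass is what makes real-time rates plausible on hardware as modest as a Raspberry Pi.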
Jul, 22

Generating Binary Optimal Codes Using Heterogeneous Parallel Computing

Generation of optimal codes is a well-known problem in coding theory. Many computational approaches exist in the literature for finding record-breaking codes. However, generating codes with long lengths n using serial algorithms is computationally very expensive; for example, the worst-case time complexity of a greedy algorithm is O(n·4^n). In order to improve […]
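The greedy construction mentioned in this excerpt is, in its simplest lexicographic form, easy to state; the sketch below is a textbook illustration of that variant, not the paper's implementation. Scanning all 2^n candidate words and checking each against up to 2^n stored codewords, at O(n) per Hamming-distance check, is what yields the quoted O(n·4^n) worst case.

from itertools import product

def greedy_lexicode(n, d):
    """Greedily build a binary code of length n with minimum distance d:
    scan all 2^n words in lexicographic order and keep a word whenever it
    is at Hamming distance >= d from every codeword kept so far."""
    code = []
    for word in product((0, 1), repeat=n):
        if all(sum(a != b for a, b in zip(word, c)) >= d for c in code):
            code.append(word)
    return code

# Example: the length-7 lexicode with distance 3 recovers the 16 codewords
# of the Hamming (7,4) code.
print(len(greedy_lexicode(7, 3)))   # 16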
Jul, 22

An algebraic parallel treecode in arbitrary dimensions

We present a parallel treecode for fast kernel summation in high dimensions – a common problem in data analysis and computational statistics. Fast kernel summations can be viewed as approximation schemes for dense kernel matrices. Treecode algorithms (or simply treecodes) construct low-rank approximations of certain off-diagonal blocks of the kernel matrix. These blocks are identified […]
Jul, 20

Performance Analysis of GPU-Accelerated Filter-Based Source Finding for HI Spectral Line Image Data

Searching for sources of electromagnetic emission in spectral-line radio astronomy interferometric data is a computationally intensive process. Parallel programming techniques and High Performance Computing hardware may be used to improve the computational performance of a source finding program. However, it is desirable to further reduce the processing time of source finding in order to decrease […]
Jul, 20

Accelerating a Movie Recommender System Using VirtualCL on a Heterogeneous GPU Cluster

The present-day market offers a large number of movies, which overwhelms people with choices. In order to quickly navigate through all the possible movies and find the interesting ones, users can take advantage of recommender systems for movies. This thesis studies a movie recommender system which uses image processing and computer vision algorithms. The […]

* * *

Free GPU computing nodes at hgpu.org

Registered users can now run their OpenCL applications at hgpu.org. We provide one minute of computer time per run on two nodes, equipped with AMD and nVidia graphics processing units (see the platform details below). There are no restrictions on the number of runs.

The platforms are:

Node 1
  • GPU device 0: nVidia GeForce GTX 560 Ti 2GB, 822MHz
  • GPU device 1: AMD/ATI Radeon HD 6970 2GB, 880MHz
  • CPU: AMD Phenom II X6 1055T @ 2.8GHz
  • RAM: 12GB
  • OS: OpenSUSE 13.1
  • SDK: nVidia CUDA Toolkit 6.5.14, AMD APP SDK 3.0
Node 2
  • GPU device 0: AMD/ATI Radeon HD 7970 3GB, 1000MHz
  • GPU device 1: AMD/ATI Radeon HD 5870 2GB, 850MHz
  • CPU: Intel Core i7-2600 @ 3.4GHz
  • RAM: 16GB
  • OS: OpenSUSE 12.3
  • SDK: AMD APP SDK 3.0

A completed OpenCL project should be uploaded via the User dashboard (see instructions and an example there); compilation and execution terminal output logs will be provided to the user.
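As an illustration of the kind of self-contained project that can be uploaded, the sketch below uses PyOpenCL (an assumption on our part; uploaded projects may equally be plain C/C++ OpenCL) to build and run a vector-addition kernel on whichever GPU device is available and to print a result that the returned terminal logs would capture.

import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c)
{
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
"""

ctx = cl.create_some_context()            # picks an available platform/device (a GPU on the nodes above)
queue = cl.CommandQueue(ctx)

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
c = np.empty_like(a)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, c.nbytes)

prog = cl.Program(ctx, KERNEL).build()
prog.vadd(queue, (n,), None, a_buf, b_buf, c_buf)   # one work-item per array element
cl.enqueue_copy(queue, c, c_buf)

print("max error:", np.max(np.abs(c - (a + b))))    # this line appears in the execution log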

The information sent to hgpu.org will be treated according to our Privacy Policy.

HGPU group © 2010-2015 hgpu.org

All rights belong to the respective authors

Contact us: