A Domain Specific Approach to Heterogeneous Computing: From Availability to Accessibility

Gordon Inggs, David Thomas, Wayne Luk
Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
arXiv:1408.4965 [cs.CE] (21 Aug 2014)
@article{2014arXiv1408.4965I,
   author = {{Inggs}, G. and {Thomas}, D. and {Luk}, W.},
   title = "{A Domain Specific Approach to Heterogeneous Computing: From Availability to Accessibility}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1408.4965},
   primaryClass = "cs.CE",
   keywords = {Computer Science - Computational Engineering, Finance, and Science, Computer Science - Distributed, Parallel, and Cluster Computing, Computer Science - Performance, Computer Science - Programming Languages},
   year = {2014},
   month = aug,
   adsurl = {http://adsabs.harvard.edu/abs/2014arXiv1408.4965I},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

We advocate a domain specific software development methodology for heterogeneous computing platforms such as multicore CPUs, GPUs and FPGAs. We argue that three specific benefits are realised from adopting such an approach: portable, efficient implementations across heterogeneous platforms; domain specific metrics of quality that characterise platforms in a form software developers will understand; and automatic, optimal partitioning across the available computing resources. These three benefits enable a development methodology in which software developers describe their computational problems in a single, easy to understand form and, after a modelling procedure on the available resources, select how they would like to trade off between the various domain specific metrics. Our work on the Forward Financial Framework (F^3) demonstrates this methodology in practice. We are able to execute a range of computational finance option pricing tasks efficiently on a wide range of CPU, GPU and FPGA computing platforms. We can also create accurate financial domain metric models of wall-time latency and statistical confidence. Furthermore, we believe that we can support automatic, optimal partitioning using this execution and modelling capability.
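The abstract describes a workflow in which a pricing task is specified once in a domain form and then characterised by domain specific metrics such as wall-time latency and statistical confidence. The sketch below is a minimal, single-threaded illustration of that idea only; it is not the actual F^3 API, and the class and function names (EuropeanCall, price_monte_carlo) are hypothetical.

# Minimal sketch of "describe the task once, report domain-specific metrics".
# Not the Forward Financial Framework (F^3) API; all names are illustrative.

import math
import random
import time
from dataclasses import dataclass


@dataclass
class EuropeanCall:
    """A single, platform-independent description of the pricing problem."""
    spot: float        # current underlying price S0
    strike: float      # strike price K
    rate: float        # risk-free rate r
    volatility: float  # annualised volatility sigma
    maturity: float    # time to maturity in years


def price_monte_carlo(option, paths=100_000, seed=42):
    """Price the option and report two domain metrics from the abstract:
    wall-time latency and a 95% statistical confidence half-width."""
    rng = random.Random(seed)
    start = time.perf_counter()

    drift = (option.rate - 0.5 * option.volatility ** 2) * option.maturity
    vol_sqrt_t = option.volatility * math.sqrt(option.maturity)
    discount = math.exp(-option.rate * option.maturity)

    payoff_sum = 0.0
    payoff_sq_sum = 0.0
    for _ in range(paths):
        terminal = option.spot * math.exp(drift + vol_sqrt_t * rng.gauss(0.0, 1.0))
        payoff = discount * max(terminal - option.strike, 0.0)
        payoff_sum += payoff
        payoff_sq_sum += payoff * payoff

    mean = payoff_sum / paths
    variance = max(payoff_sq_sum / paths - mean * mean, 0.0)
    half_width = 1.96 * math.sqrt(variance / paths)  # 95% confidence half-width
    latency = time.perf_counter() - start

    return {"price": mean, "confidence_95": half_width, "latency_s": latency}


if __name__ == "__main__":
    task = EuropeanCall(spot=100.0, strike=105.0, rate=0.05,
                        volatility=0.2, maturity=1.0)
    print(price_monte_carlo(task))

In the paper's setting, the same problem description would instead be mapped to CPU, GPU or FPGA back-ends, with the metric models used to choose a partitioning across them.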


