
High Performance Computing on Astrophysics with Artificial Intelligence Algorithms

Juan C. Cuevas-Tello
Engineering Faculty, Autonomous University of San Luis Potosi, Mexico
Joint GISELA-CHAIN Conference, 2012
@inproceedings{cuevas2012high,
  title     = {High Performance Computing on Astrophysics with Artificial Intelligence Algorithms},
  author    = {Cuevas-Tello, Juan C.},
  booktitle = {Joint GISELA-CHAIN Conference},
  year      = {2012}
}


This paper presents applications of Artificial Intelligence (AI) algorithms and high performance computing in astrophysics, together with our ongoing research on grid computing. In astrophysics, we deal with the time delay problem. Nowadays, the time delay is estimated from observed data gathered by radio and optical telescopes around the world, i.e. from pairs of noisy, irregularly sampled time series. The time delay has many applications in astrophysics: i) estimating the Hubble parameter, and thereby the age of the universe; ii) measuring dark matter in the universe; among many others. We use AI algorithms to analyse these time series, mainly kernel-based methods, genetic algorithms, artificial neural networks and multiobjective optimisation. The running time, or time complexity, of these AI algorithms is high; we have therefore developed parallel versions of them to estimate the time delay. So far we have tested the parallel algorithms on Beowulf-type clusters, supercomputers and Graphics Processing Units (GPUs). We present results from: 1) a parallel algorithm for time delay estimation on supernova data using artificial neural networks, in particular the General Regression Neural Network (GRNN) model; 2) a performance analysis of the GRNN with two approaches: the Message Passing Interface (MPI) and the Compute Unified Device Architecture (CUDA) language on GPUs; we use both real and artificial data to test the performance of the algorithms, and we analyse data from the quasar Q0957+561, the most studied so far; and 3) a multiobjective optimisation approach for time delay estimation with multiple data sets from the same quasar. This research on time delay is motivated by new projects under development for ambitious surveys devoted to the study of dark matter, such as the Large Synoptic Survey Telescope (LSST) and the SuperNova Acceleration Probe (SNAP).
Moreover, current surveys such as the Sloan Digital Sky Survey (SDSS) and the Sloan Lens ACS (SLACS) survey are generating a tremendous number of large monitoring data sets, so time delay estimation is becoming a major challenge. It is therefore important to have a methodology that estimates time delays accurately as well as efficiently. We have shown that the AI algorithms outperform state-of-the-art methods in accuracy, and that they estimate time delays automatically, so they can be linked to the LSST, SNAP, SDSS and SLACS projects. The remaining challenge is to find fast AI algorithms. This paper therefore presents our ongoing research on grid computing, so that the parallel algorithms can run on a grid infrastructure; we hope to obtain better results than with Beowulf clusters, supercomputers and GPUs.
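The abstract does not give the details of the kernel-based/GRNN time delay method, but the core idea can be sketched as follows: shift one time series by a candidate delay, merge it with the other, fit a single smooth curve by Gaussian kernel regression (the regression a GRNN computes), and keep the delay that minimises the residual. The function names, the synthetic signal and all parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def grnn_predict(t_train, y_train, t_query, sigma):
    """Nadaraya-Watson kernel regression: the estimate computed by a GRNN."""
    # Gaussian weights between every query time and every training time
    w = np.exp(-((t_query[:, None] - t_train[None, :]) ** 2) / (2 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

def estimate_delay(t_a, y_a, t_b, y_b, delays, sigma=1.0):
    """Grid search over candidate delays: shift series B, merge with A,
    fit one smooth curve, and keep the delay with the lowest residual."""
    best_delay, best_err = None, np.inf
    for d in delays:
        t = np.concatenate([t_a, t_b - d])   # align B onto A's time axis
        y = np.concatenate([y_a, y_b])
        y_hat = grnn_predict(t, y, t, sigma)
        err = np.mean((y - y_hat) ** 2)      # misalignment raises the residual
        if err < best_err:
            best_delay, best_err = d, err
    return best_delay

# Toy test: two noisy, irregularly sampled copies of one signal,
# the second delayed by 5 time units (illustrative data only).
rng = np.random.default_rng(0)
t_a = np.sort(rng.uniform(0, 50, 120))
t_b = np.sort(rng.uniform(0, 50, 120))
signal = lambda t: np.sin(0.3 * t) + 0.5 * np.sin(0.11 * t)
true_delay = 5.0
y_a = signal(t_a) + 0.05 * rng.normal(size=t_a.size)
y_b = signal(t_b - true_delay) + 0.05 * rng.normal(size=t_b.size)

delays = np.arange(0.0, 10.5, 0.5)
print(estimate_delay(t_a, y_a, t_b, y_b, delays, sigma=1.0))
```

The loop over candidate delays is embarrassingly parallel, which is what makes MPI, CUDA and grid implementations of this kind of search attractive.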


* * *

Free GPU computing nodes at hgpu.org

Registered users can now run their OpenCL applications at hgpu.org. We provide 1 minute of compute time per run on two nodes, equipped with two AMD and one nVidia graphics processing units, respectively. There is no restriction on the number of runs.

The platforms are

Node 1
  • GPU device 0: AMD/ATI Radeon HD 5870 2GB, 850MHz
  • GPU device 1: AMD/ATI Radeon HD 6970 2GB, 880MHz
  • CPU: AMD Phenom II X6 @ 2.8GHz 1055T
  • RAM: 12GB
  • OS: OpenSUSE 13.1
  • SDK: AMD APP SDK 2.9
Node 2
  • GPU device 0: AMD/ATI Radeon HD 7970 3GB, 1000MHz
  • GPU device 1: nVidia GeForce GTX 560 Ti 2GB, 822MHz
  • CPU: Intel Core i7-2600 @ 3.4GHz
  • RAM: 16GB
  • OS: OpenSUSE 12.2
  • SDK: nVidia CUDA Toolkit 6.0.1, AMD APP SDK 2.9

Completed OpenCL projects should be uploaded via the User dashboard (see the instructions and example there); compilation and execution terminal output logs will be provided to the user.

The information sent to hgpu.org will be treated according to our Privacy Policy.
