High Performance Computing on Astrophysics with Artificial Intelligence Algorithms

Juan C. Cuevas-Tello
Engineering Faculty, Autonomous University of San Luis Potosi, Mexico
Joint GISELA-CHAIN Conference, 2012





This paper presents applications developed in astrophysics using Artificial Intelligence (AI) algorithms and high performance computing, together with ongoing research on grid computing. In astrophysics we deal with the time delay problem. Nowadays, the time delay is estimated from observed data gathered by radio or optical telescopes around the world; concretely, it is estimated from pairs of noisy, irregularly sampled time series. Time delays have many applications in astrophysics, among them: i) estimating the Hubble parameter, and hence the age of the universe; ii) measuring dark matter in the universe.

We use AI algorithms to analyse these time series, mainly kernel-based methods, genetic algorithms, artificial neural networks and multiobjective optimisation. The running time (time complexity) of these AI algorithms is high, so we have developed parallel versions of them to estimate the time delay. So far we have tested parallel algorithms on Beowulf-type clusters, supercomputers and Graphics Processing Units (GPUs). We present results from: 1) a parallel algorithm for time delay estimation on supernova data using artificial neural networks, in particular the General Regression Neural Network (GRNN) model; 2) a performance analysis of the GRNN with two approaches: the Message Passing Interface (MPI) and the Compute Unified Device Architecture (CUDA) language on GPUs, using real and artificial data to test the performance of the algorithms, including data from the quasar Q0957+561, the most studied so far; 3) a multiobjective optimisation approach for time delay estimation with multiple data sets from the same quasar. This research on time delays is motivated by ambitious new survey projects devoted to the study of dark matter, such as the Large Synoptic Survey Telescope (LSST) and the SuperNova Acceleration Probe (SNAP), which are under development.
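To make the GRNN-based time delay estimation concrete, the following is a minimal sketch, not the paper's implementation: it assumes the GRNN reduces to Nadaraya-Watson kernel regression, that the two gravitationally lensed light curves share a common underlying signal (magnitude offsets ignored), and a simple brute-force search over candidate delays. All function names and the synthetic data are ours.

```python
import numpy as np

def estimate_delay(t_a, y_a, t_b, y_b, delays, sigma=3.0):
    """Brute-force delay search: shift image B's times by each candidate
    delay, merge the two irregularly sampled series, smooth the merged
    series with a GRNN (Gaussian-kernel regression), and keep the delay
    whose leave-one-out fit error is smallest."""
    best_delay, best_err = None, np.inf
    for d in delays:
        t = np.concatenate([t_a, t_b - d])          # shifted, merged times
        y = np.concatenate([y_a, y_b])
        # GRNN / Nadaraya-Watson: prediction is a Gaussian-weighted
        # average of the observed values at nearby times.
        w = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2.0 * sigma ** 2))
        np.fill_diagonal(w, 0.0)                    # leave-one-out: drop self-weight
        y_hat = (w @ y) / np.maximum(w.sum(axis=1), 1e-12)
        err = np.mean((y - y_hat) ** 2)
        if err < best_err:
            best_delay, best_err = float(d), err
    return best_delay
```

At the correct delay the two shifted series trace one coherent curve, so the kernel-regression residual is minimal; at wrong delays the merged series is internally inconsistent and the residual grows. The loop over candidate delays is the expensive part, which is what motivates the parallel versions discussed above.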
Moreover, current surveys like the Sloan Digital Sky Survey (SDSS) and the Sloan Lens ACS (SLACS) survey are already generating tremendously large monitoring data sets, so time delay estimation is becoming a major challenge. It is therefore important to have a methodology that estimates time delays accurately as well as efficiently. We have shown that the AI algorithms outperform state-of-the-art methods in accuracy. Moreover, the AI algorithms estimate time delays in an automated way, so they can be linked to the LSST, SNAP, SDSS and SLACS projects. The challenge now is to make the AI algorithms fast. This paper therefore presents ongoing research on grid computing, so that the parallel algorithms can run on a grid infrastructure; we hope to obtain better results than with Beowulf clusters, supercomputers and GPUs.
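Whether the target is MPI ranks, CUDA threads or grid jobs, the brute-force delay search is embarrassingly parallel: each candidate delay is evaluated independently and only the per-delay fit errors need to be gathered. The sketch below illustrates this structure with Python's `multiprocessing` as a stand-in for the paper's MPI/CUDA/grid back ends; the helper names and kernel-regression details are our assumptions, not the paper's code.

```python
import numpy as np
from multiprocessing import Pool

def fit_error(args):
    """Fit error of a single candidate delay (one independent task).
    Uses Gaussian-kernel (GRNN-style) regression on the merged series."""
    t_a, y_a, t_b, y_b, d, sigma = args
    t = np.concatenate([t_a, t_b - d])
    y = np.concatenate([y_a, y_b])
    w = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)                       # leave-one-out
    y_hat = (w @ y) / np.maximum(w.sum(axis=1), 1e-12)
    return float(d), float(np.mean((y - y_hat) ** 2))

def parallel_delay_search(t_a, y_a, t_b, y_b, delays, sigma=3.0, workers=2):
    """Scatter the candidate delays across worker processes, then reduce
    by picking the delay with the smallest fit error."""
    tasks = [(t_a, y_a, t_b, y_b, d, sigma) for d in delays]
    with Pool(workers) as pool:
        results = pool.map(fit_error, tasks)       # scatter / gather
    return min(results, key=lambda r: r[1])[0]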


HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
