Forecasting high frequency financial time series using parallel FFN with CUDA and ZeroMQ

Paola Arce, Cristian Maureira, Roberto Bonvallet, César Fernández
Center for Technological Innovation in High Performance Computing, Valparaíso, Chile
Center for Technological Innovation in High Performance Computing, 2012

@article{arce2012forecasting,
   title={Forecasting high frequency financial time series using parallel FFN with CUDA and ZeroMQ},
   author={Arce, P. and Maureira, C. and Bonvallet, R. and Fern{\'a}ndez, C.},
   year={2012}
}

Feed-forward neural networks (FFNs) are powerful data-modelling tools that have been used in many fields of science. In financial applications in particular, the number of factors affecting the market leads to models with many input features and large numbers of hidden and output neurons. Since response time is crucial in financial problems, faster applications are needed; most current applications, however, have been implemented as non-parallel software running on serial processors. In this paper we present a parallel GPU implementation of an FFN that reduces response time when new data arrives. The problem is conveniently expressed as matrix operations, implemented with the CUBLAS library, which provides highly optimized linear algebra routines that exploit the hardware features of the GPU. The algorithm was developed in C++ and CUDA; input features were received, and output features published, using the ZeroMQ library. ZeroMQ is an abstraction over system sockets that sends chunks of data efficiently, minimizing overhead and system calls. The CUDA implementation was tested on a compute server with an NVIDIA M2050 GPU and an Intel Xeon X5650 2.67 GHz CPU, on neural networks of increasing sizes. Compared against a straightforward 24-thread CPU implementation using MKL, experiments show that, while still slower for small FFNs, the GPU already outperforms the CPU for 1000x1000x1000 networks.
