Accelerating Computational Finance Simulations with OpenCL

Michail Papadimitriou, Joris Cramwinckel, Ana Lucia Varbanescu
Delft University of Technology, The Netherlands
Fifth International Workshop on Multicore Software Engineering (IWMSE), 2016

@inproceedings{papadimitriou2016accelerating,
   title={Accelerating Computational Finance Simulations with OpenCL},
   author={Papadimitriou, Michail and Cramwinckel, Joris and Varbanescu, Ana Lucia},
   booktitle={Fifth International Workshop on Multicore Software Engineering (IWMSE)},
   year={2016}
}





Computational finance is a domain where performance is in high demand. We therefore investigate the suitability of two families of accelerators for computational finance simulations. Specifically, we use a scenario-based ALM (Asset Liability Management) model and design a suitable OpenCL implementation. We further improve the performance of the application by applying several typical optimization techniques (data layout and data locality improvements, loop unrolling). We then compare the performance of the resulting parallel ALM kernel on a regular Xeon processor, on the Xeon Phi, and on an NVIDIA GPU. Finally, we compare the results and discuss the performance portability of our implementation. Our results show that the optimized OpenCL code deployed on the Phi can run up to 135x faster than the original scalar code. In addition, OpenCL can be up to 10x faster than the OpenMP implementation on the same Xeon Phi. Despite these improved results, the Xeon Phi is only 2-3x faster than the regular CPU when using the same OpenCL code, and it is outperformed by almost an order of magnitude by the GPU. We conclude that ALM is an excellent target for acceleration. In this context, our results are significant for computational finance specialists, as they enable a major increase in model accuracy.

