
PRAND: GPU accelerated parallel random number generation library: Using most reliable algorithms and applying parallelism of modern GPUs and CPUs

L.Yu. Barash, L.N. Shchur
Landau Institute for Theoretical Physics, 142432 Chernogolovka, Russia
arXiv:1307.5869 [physics.comp-ph] (22 Jul 2013)
@article{2013arXiv1307.5869B,
   author = {{Barash}, L.~Y. and {Shchur}, L.~N.},
   title = "{PRAND: GPU accelerated parallel random number generation library: Using most reliable algorithms and applying parallelism of modern GPUs and CPUs}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1307.5869},
   primaryClass = "physics.comp-ph",
   keywords = {Physics – Computational Physics, Computer Science – Mathematical Software},
   year = 2013,
   month = jul,
   adsurl = {http://adsabs.harvard.edu/abs/2013arXiv1307.5869B},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


The library PRAND for pseudorandom number generation on modern CPUs and GPUs is presented. It contains both single-threaded and multi-threaded realizations of a number of the most reliable modern generators recently proposed and studied in [1,2,3,4,5], as well as the efficient SIMD realizations proposed in [6]. A feature particularly useful for parallel simulations is the ability to initialize up to $10^{19}$ independent streams. Exploiting the massive parallelism of modern GPUs and the SIMD parallelism of modern CPUs substantially improves the performance of the generators.
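The point of independent streams is that each worker (GPU thread or CPU core) must draw from its own statistically independent subsequence of the generator. The sketch below illustrates that pattern in CUDA using NVIDIA's cuRAND device API; it is only a comparable illustration of per-thread stream initialization, not the PRAND interface itself, and the seed, stream count, and kernel names are illustrative assumptions.

// Minimal CUDA sketch of per-thread independent streams (cuRAND, not PRAND).
#include <stdio.h>
#include <curand_kernel.h>

#define N 256  /* number of independent streams, one per thread (assumed) */

__global__ void init_streams(curandState *states, unsigned long long seed)
{
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    /* The second argument of curand_init selects the subsequence, so each
       thread obtains a statistically independent stream from one seed. */
    curand_init(seed, id, 0, &states[id]);
}

__global__ void draw_uniform(curandState *states, float *out)
{
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    curandState local = states[id];
    out[id] = curand_uniform(&local);   /* one uniform variate per stream */
    states[id] = local;                 /* save state for subsequent draws */
}

int main(void)
{
    curandState *states;
    float *d_out, h_out[N];

    cudaMalloc(&states, N * sizeof(curandState));
    cudaMalloc(&d_out, N * sizeof(float));

    init_streams<<<1, N>>>(states, 1234ULL);
    draw_uniform<<<1, N>>>(states, d_out);

    cudaMemcpy(h_out, d_out, N * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first draws: %f %f %f\n", h_out[0], h_out[1], h_out[2]);

    cudaFree(states);
    cudaFree(d_out);
    return 0;
}

PRAND provides its own initialization routines for independent streams; see the paper and the accompanying source package for the exact interface.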
