
CUDA-API-wrappers: Thin C++-flavored wrappers for the CUDA runtime API

Eyal Rozenberg


NVIDIA’s Runtime API for CUDA is intended for use in both C and C++ code. As such, it uses a C-style API, the lowest common denominator (with a few notable exceptions of templated function overloads).

This library of wrappers around the Runtime API is intended to allow us to embrace many of the features of C++ (including some C++11) for using the runtime API – but without reducing expressivity or increasing the level of abstraction (as in, e.g., the Thrust library). Using cuda-api-wrappers, you still have your devices, streams, events and so on – but they will be more convenient to work with in more C++-idiomatic ways.
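To illustrate the difference, here is a hedged sketch contrasting the raw Runtime API with the wrapper style. The raw calls (`cudaMalloc`, `cudaFree`) are the real Runtime API; the wrapper calls approximate the library’s documented naming style, and exact header names and signatures may differ between versions — consult the repository:

```cpp
#include <cuda_runtime.h>        // raw CUDA Runtime API
#include <cuda/api_wrappers.hpp> // cuda-api-wrappers (header name may vary by version)
#include <cstddef>

void raw_api_example(std::size_t n) {
    // C-style: explicit allocation, an error code to check, manual cleanup
    float* buf = nullptr;
    cudaError_t status = cudaMalloc(&buf, n * sizeof(float));
    if (status != cudaSuccess) { /* inspect cudaGetErrorString(status) */ return; }
    // ... use buf ...
    cudaFree(buf); // easy to forget on early returns or exceptions
}

void wrapped_api_example(std::size_t n) {
    // C++-style: RAII ownership, failures throw exceptions
    auto device = cuda::device::current::get();
    auto buf = cuda::memory::device::make_unique<float[]>(device, n);
    // ... use buf.get() ...
} // device memory is freed automatically when buf goes out of scope
```

Note that the wrapper version is not a higher-level abstraction — it performs the same allocation and free, only with ownership and error propagation expressed in C++ idioms.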


Key features

  • All functions and methods throw exceptions on failure – no need to check return values (the exceptions carry the status information).
  • Judicious namespacing (and some internal namespace-like classes) for better clarity and for semantically grouping related functionality together.
  • There are proxy objects for devices, streams, events and so on, using RAII to relieve you of remembering to free or destroy resources.
  • Various Plain Old Data structs adorned with convenience methods and operators.
  • Aims for clarity and straightforwardness in naming and semantics, so that you don’t need to refer to the official documentation to understand what each class and function does.
  • Thin and lightweight:
    ◦ No work done behind your back – no caches, indices or any such thing.
    ◦ No costly inheritance structure, vtables or virtual methods – the wrappers vanish almost entirely on compilation.
    ◦ Doesn’t “hide” any of CUDA’s complexity or functionality; it only simplifies use of the Runtime API.
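Putting several of these points together, here is a hedged sketch of what exception-based error handling and RAII proxy objects look like in practice. The class and method names approximate those in the library’s examples (e.g. `cuda::device::get`, `create_stream`, `cuda::runtime_error`) and may differ in detail across versions — check the actual headers:

```cpp
#include <cuda/api_wrappers.hpp> // header name may vary by version
#include <cstdlib>
#include <iostream>

int main() {
    try {
        // Proxy objects: no raw device IDs or handles to juggle
        auto device = cuda::device::get(0);
        auto stream = device.create_stream(cuda::stream::async);
        auto event  = device.create_event();

        stream.enqueue.event(event); // enqueue the event on the stream
        event.synchronize();         // wait for it to complete

        // stream and event are destroyed automatically (RAII) at scope exit
    } catch (cuda::runtime_error& e) {
        // Failures throw exceptions carrying the CUDA status –
        // no return codes to remember to check
        std::cerr << "CUDA error: " << e.what() << '\n';
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```

Because the proxies are thin value-like objects with no virtual dispatch, this compiles down to essentially the same Runtime API calls one would write by hand.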


HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
