
Optimizing Real Time GPU Kernels Using Fuzzy Inference System

Deepali Shinde, Mithilesh Said, Pratik Shetty, Swapnil Gharat
Computer Engineering Dept., Rajiv Gandhi Institute of Technology, University of Mumbai, India
International Journal Of Advance Research In Science And Engineering (IJARSE), Vol. No.2, Issue No.9, 2013

@article{shinde2013optimizing,
  title   = {Optimizing Real Time GPU Kernels Using Fuzzy Inference System},
  author  = {Shinde, Deepali and Said, Mithilesh and Shetty, Pratik and Gharat, Swapnil},
  journal = {International Journal of Advance Research in Science and Engineering (IJARSE)},
  volume  = {2},
  number  = {9},
  year    = {2013}
}

CPU technology is slowly approaching its limits, but Moore's Law still holds for GPUs. With the growing scope of GPGPU computing, more and more applications are being ported to GPU frameworks. Image processing and computer vision are among the application areas best suited to GPGPU computing, and the high performance of GPUs makes them ideal for real-time applications. However, GPUs deliver optimal results only when certain criteria related to the degree of parallelism, image size, and memory transfers are met. Very small images spend more time in memory transfers between the CPU and GPU than in computation on the GPU, while large images hurt response time owing to the increased computation. It is therefore necessary to strike a fine balance between image size and computation time. We propose using a Fuzzy Inference System (FIS) to estimate the most suitable values for these parameters and to show the difference between CPU and GPU computing methods. Using the values produced by the FIS, a programmer can develop deeper insight into the performance of real-time systems that use GPUs.
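The idea described above can be sketched as a minimal Mamdani-style fuzzy inference step. This is an illustrative sketch only: the membership functions, fuzzy sets, rule base, and output weights below are hypothetical and are not taken from the paper, which does not publish its rules here. The sketch fuzzifies image size into "small" (transfer-dominated), "medium" (good parallel occupancy), and "large" (compute-heavy), then defuzzifies to a single GPU-suitability score.

```python
def falling(x, a, b):
    """Shoulder membership: 1 at or below a, falling linearly to 0 at b."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def rising(x, a, b):
    """Shoulder membership: 0 at or below a, rising linearly to 1 at b."""
    return 1.0 - falling(x, a, b)

def tri(x, a, b, c):
    """Triangular membership peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def gpu_suitability(side):
    """Score in [0, 1] for running a kernel on the GPU, given image side
    length in pixels. All breakpoints below are illustrative guesses."""
    # Fuzzification: degree of membership in each (hypothetical) fuzzy set.
    small  = falling(side, 64, 256)      # transfers dominate
    medium = tri(side, 128, 768, 2048)   # parallelism pays off
    large  = rising(side, 1024, 4096)    # computation hurts response time
    # Rule base (hypothetical):
    #   IF size is small  THEN suitability is low  (weight 0.1)
    #   IF size is medium THEN suitability is high (weight 0.9)
    #   IF size is large  THEN suitability is mid  (weight 0.5)
    num = small * 0.1 + medium * 0.9 + large * 0.5
    den = small + medium + large
    # Defuzzification: weighted average of singleton outputs.
    return num / den if den else 0.0
```

Under these assumed sets, a tiny image (e.g. 32 px per side) scores low because the "small" rule fires fully, while a mid-sized image (e.g. 768 px) scores high; a real system would tune the breakpoints and rules against measured transfer and kernel timings.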

HGPU group © 2010-2016 hgpu.org

All rights belong to the respective authors