{"id":10465,"date":"2013-09-07T20:46:20","date_gmt":"2013-09-07T17:46:20","guid":{"rendered":"http:\/\/hgpu.org\/?p=10465"},"modified":"2013-09-07T20:46:20","modified_gmt":"2013-09-07T17:46:20","slug":"d5-5-3-design-and-implementation-of-the-simd-mimd-gpu-architecture","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=10465","title":{"rendered":"D5.5.3 &#8211; Design and implementation of the SIMD-MIMD GPU architecture"},"content":{"rendered":"<p>To develop a new SIMD-MIMD architecture, we first characterized GPGPU workloads using simple and well-known workload metrics to identify performance bottlenecks. We found that benchmarks with branch divergence do not utilize the SIMD width optimally on conventional GPUs. We also studied the performance bottlenecks of the motion compensation kernel developed in Task 3.2 and showed that increasing the maximum limits on CTAs and shared memory can significantly increase performance and save energy. We also studied the correlation between workload characteristics and GPU component power consumption. In addition, we categorized the workloads into high, medium, and low IPC categories to study the power consumption behavior of each category. The results show a significant change in component power consumption across the three categories of kernels. We believe this is vital information for computer architects and application programmers when prioritizing components for power and performance optimizations. Guided by this information, we proposed a new architecture that can handle branch divergence efficiently.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>To develop a new SIMD-MIMD architecture, we first characterized GPGPU workloads using simple and well-known workload metrics to identify performance bottlenecks. We found that benchmarks with branch divergence do not utilize the SIMD width optimally on conventional GPUs. 
We also studied the performance bottlenecks of the motion compensation kernel developed in Task 3.2 [&hellip;]<\/p>\n","protected":false},"author":351,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[89,3],"tags":[14,67,1467],"class_list":["post-10465","post","type-post","status-publish","format-standard","hentry","category-nvidia-cuda","category-paper","tag-cuda","tag-performance","tag-power-efficient-computing"],"views":2714,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/10465","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/users\/351"}],"replies":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=10465"}],"version-history":[{"count":0,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/10465\/revisions"}],"wp:attachment":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=10465"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=10465"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=10465"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}