{"id":3799,"date":"2011-05-04T11:51:31","date_gmt":"2011-05-04T11:51:31","guid":{"rendered":"http:\/\/hgpu.org\/?p=3799"},"modified":"2011-05-04T11:51:31","modified_gmt":"2011-05-04T11:51:31","slug":"a-57mw-embedded-mixed-mode-neuro-fuzzy-accelerator-for-intelligent-multi-core-processor","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=3799","title":{"rendered":"A 57mW embedded mixed-mode neuro-fuzzy accelerator for intelligent multi-core processor"},"content":{"rendered":"<p>Artificial intelligence (AI) functions are becoming important in smartphones, portable game consoles, and robots for such intelligent applications as object detection, recognition, and human-computer interfaces (HCI). Most of these functions are realized in software with neural networks (NN) and fuzzy systems (FS), but due to power and speed limitations, a hardware solution is needed. For example, software implementations of object-recognition algorithms like SIFT consume ~10W and incur ~1s of delay even on a 2.4GHz PC CPU. Previously, GPGPUs or ASICs were used to realize AI functions. However, GPGPUs merely emulate NN\/FS with many processing elements to speed up the software, while still consuming a large amount of power. On the other hand, low-power ASICs have mostly been dedicated stand-alone processors, not suitable for porting into many different systems. This paper presents a portable embedded neuro-fuzzy accelerator: the intelligent reconfigurable integrated system (IRIS), which realizes low power consumption and high-speed recognition, prediction, and optimization for AI applications.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence (AI) functions are becoming important in smartphones, portable game consoles, and robots for such intelligent applications as object detection, recognition, and human-computer interfaces (HCI). 
Most of these functions are realized in software with neural networks (NN) and fuzzy systems (FS), but due to power and speed limitations, a hardware solution is needed. For [&hellip;]<\/p>\n","protected":false},"author":351,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[11,3],"tags":[117,229,1782,344,34],"class_list":["post-3799","post","type-post","status-publish","format-standard","hentry","category-computer-science","category-paper","tag-artificial-intelligence","tag-asic","tag-computer-science","tag-energy-efficient-computing","tag-neural-networks"],"views":3885,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/3799","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/users\/351"}],"replies":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3799"}],"version-history":[{"count":0,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/3799\/revisions"}],"wp:attachment":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3799"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3799"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hg
pu.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3799"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}