{"id":10410,"date":"2013-08-27T23:54:51","date_gmt":"2013-08-27T20:54:51","guid":{"rendered":"http:\/\/hgpu.org\/?p=10410"},"modified":"2013-08-27T23:54:51","modified_gmt":"2013-08-27T20:54:51","slug":"multiple-time-scales-recurrent-neural-network-for-complex-action-acquisition","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=10410","title":{"rendered":"Multiple Time Scales Recurrent Neural Network for Complex Action Acquisition"},"content":{"rendered":"<p>This paper presents novel results of complex action learning experiments based on the use of extended multiple time-scales recurrent neural networks (MTRNN). The experiments were carried out with the iCub humanoid robot, as a model of the developmental learning of motor primitives as the basis of sensorimotor and linguistic compositionality. The model was implemented through the Aquila cognitive robotics toolkit, which supports the CUDA architecture and makes use of massively parallel GPUs (graphics processing units). The results presented herein show that the model was able to learn and successfully reproduce multiple behavioural sequences of actions in an object manipulation task scenario using large-scale MTRNNs. This forms the basis of ongoing experiments on action and language compositionality.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This paper presents novel results of complex action learning experiments based on the use of extended multiple time-scales recurrent neural networks (MTRNN). The experiments were carried out with the iCub humanoid robot, as a model of the developmental learning of motor primitives as the basis of sensorimotor and linguistic compositionality. 
The model was implemented through [&hellip;]<\/p>\n","protected":false},"author":351,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[11,89,3],"tags":[1782,14,34,20,176],"class_list":["post-10410","post","type-post","status-publish","format-standard","hentry","category-computer-science","category-nvidia-cuda","category-paper","tag-computer-science","tag-cuda","tag-neural-networks","tag-nvidia","tag-package"],"views":3005,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/10410","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/users\/351"}],"replies":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=10410"}],"version-history":[{"count":0,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/10410\/revisions"}],"wp:attachment":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=10410"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=10410"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=10410"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}