FastCollect: Offloading Generational Garbage Collection to Integrated GPUs

Abhinav, Rupesh Nasre
Indian Institute of Technology (BHU), Varanasi
International Conference on Compilers, Architecture and Synthesis for Embedded Systems (CASES), 2016

@inproceedings{fastcollect2016,
   title={FastCollect: Offloading Generational Garbage Collection to Integrated GPUs},
   author={Abhinav and Nasre, Rupesh},
   booktitle={International Conference on Compilers, Architecture and Synthesis for Embedded Systems (CASES)},
   year={2016}
}




Generational mark-sweep garbage collection is a widely used garbage collection technique. However, the garbage collector has poor execution efficiency for large programs: aggressive collection causes execution pauses in the program, while reducing the collection frequency leads to memory wastage. In this work, we develop FastCollect, a parallel version of the generational mark-sweep garbage collector running on a graphics processing unit (GPU). At the core of our parallel implementation lies a parallel depth-first search using a space-efficient concurrent stack, which we develop for both the young-generation and mature-generation collections. To further improve performance, (i) we reduce thread divergence and improve load balancing by devising a distributed work-stealing approach, (ii) we optimize our garbage collection algorithm to reduce the number of atomic instructions, (iii) we exploit the memory hierarchy to design a hybrid stack, and (iv) we extract multiple adjacent objects simultaneously by exploiting vectorized memory accesses. We implemented FastCollect in the Java HotSpot VM and evaluated it on the DaCapo benchmarks. FastCollect is 4-5x faster than the parallel HotSpot garbage collector and 42% faster than a previous GPU implementation. In addition, while the existing GPU version requires memory linear in the number of compute units, FastCollect's memory requirement is fixed and low. FastCollect not only improves the execution time of garbage collection, but also relieves the CPU for improved user interaction.
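To illustrate the core idea the abstract describes — a parallel mark phase that traverses the object graph depth-first through a shared concurrent stack — here is a minimal Java sketch. This is not FastCollect's actual implementation: the class and method names are hypothetical, plain Java threads stand in for GPU threads, and a compare-and-swap mark bit ensures each object is claimed by exactly one worker.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: parallel mark over an object graph using a
// shared concurrent stack (LIFO => depth-first order) and CAS mark bits.
public class MarkSweepSketch {
    static final class Obj {
        final AtomicBoolean marked = new AtomicBoolean(false);
        final List<Obj> refs = new ArrayList<>();
    }

    static void parallelMark(List<Obj> roots, int nThreads) throws InterruptedException {
        ConcurrentLinkedDeque<Obj> stack = new ConcurrentLinkedDeque<>();
        // pending = objects on the stack plus objects currently being scanned;
        // the traversal is complete when it drops to zero.
        AtomicInteger pending = new AtomicInteger(0);
        for (Obj r : roots) {
            if (r.marked.compareAndSet(false, true)) {
                pending.incrementAndGet();
                stack.push(r);
            }
        }
        Thread[] workers = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            workers[i] = new Thread(() -> {
                while (pending.get() > 0) {
                    Obj o = stack.poll();
                    if (o == null) continue;   // remaining work held by another worker
                    for (Obj child : o.refs) {
                        // CAS ensures each object is pushed at most once,
                        // which also handles cycles in the graph.
                        if (child.marked.compareAndSet(false, true)) {
                            pending.incrementAndGet();
                            stack.push(child);
                        }
                    }
                    pending.decrementAndGet();
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
    }

    // Small demo graph: a -> b -> c reachable from the root; d is garbage.
    static int demo() throws InterruptedException {
        Obj a = new Obj(), b = new Obj(), c = new Obj(), d = new Obj();
        a.refs.add(b);
        b.refs.add(c);
        b.refs.add(a);                         // cycle, handled by the mark bit
        parallelMark(List.of(a), 4);
        int marked = 0;
        for (Obj o : List.of(a, b, c, d)) if (o.marked.get()) marked++;
        return marked;                         // d stays unmarked, to be swept
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("marked " + demo() + " of 4 objects");
    }
}
```

The paper's further optimizations (distributed work-stealing, a hybrid shared/global-memory stack, vectorized object extraction) replace the single shared stack and busy-wait loop above with GPU-specific structures, but the mark discipline — claim via an atomic mark bit, then push unvisited children — is the same.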


