MARC: A Many-Core Approach to Reconfigurable Computing

Ilia Lebedev, Shaoyi Cheng, Austin Doupnik, James Martin, Christopher Fletcher, Daniel Burke, Mingjie Lin, John Wawrzynek
Dept. of EECS, Univ. of California at Berkeley, Berkeley, CA, USA
International Conference on Reconfigurable Computing and FPGAs (ReConFig), 2010

@conference{lebedev2010marc,
  title={MARC: A Many-Core Approach to Reconfigurable Computing},
  author={Lebedev, I. and Cheng, S. and Doupnik, A. and Martin, J. and Fletcher, C. and Burke, D. and Lin, M. and Wawrzynek, J.},
  booktitle={2010 International Conference on Reconfigurable Computing and FPGAs},
  pages={7--12},
  year={2010},
  organization={IEEE}
}

We present a Many-core Approach to Reconfigurable Computing (MARC), enabling efficient high-performance computing for applications expressed using parallel programming models such as OpenCL. The MARC system exploits abundant specialized FPGA resources such as distributed block memories and DSP blocks to implement complete single-chip, high-efficiency many-core microarchitectures. The key benefits of MARC are that it (i) allows programmers to easily express parallelism through an API defined in a high-level programming language, (ii) supports coarse-grain multithreading and dataflow-style fine-grain threading while permitting bit-level resource control, and (iii) greatly reduces the effort required to re-purpose the hardware system for different algorithms or different applications. A MARC prototype machine with 48 processing nodes was implemented on a Virtex-5 (XC5VLX155T-2) FPGA for a well-known Bayesian network inference problem. We compare the runtime of the MARC machine against a manually optimized implementation. With fully synthesized application-specific processing cores, our MARC machine comes within a factor of 3 of the performance of a fully optimized FPGA solution, but with a considerable reduction in development effort and a significant increase in retargetability.

* * *
HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors