
NCAM: Near-Data Processing for Nearest Neighbor Search

Vincent T. Lee, Carlo C. del Mundo, Armin Alaghi, Luis Ceze, Mark Oskin, Ali Farhadi
University of Washington
arXiv:1606.03742 [cs.DC] (12 Jun 2016)

@article{lee2016ncam,
   title={NCAM: Near-Data Processing for Nearest Neighbor Search},
   author={Lee, Vincent T. and del Mundo, Carlo C. and Alaghi, Armin and Ceze, Luis and Oskin, Mark and Farhadi, Ali},
   year={2016},
   month={jun},
   eprint={1606.03742},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


At the core of many applications, such as natural language processing (NLP), vision, and robotics, is some form of the k-nearest neighbor (kNN) search algorithm. The kNN algorithm is primarily bottlenecked by data movement, which limits throughput and adds latency in these applications. While well-bounded kNN approximations exist that improve performance, these algorithms trade off accuracy and quickly degrade into linear search at high dimensionality. To address data movement, we designed the nearest neighbor content addressable memory (NCAM), which employs processing in-memory (PIM) to eliminate costly data transfers and provides exact nearest neighbor search. NCAMs benefit from the modularity offered by 3D die-stacking technology, which sidesteps the issues of direct integration with DRAM dies, and from the higher density and speed of emerging memory technologies and interfaces. We characterize a state-of-the-art software kNN implementation and expose the shortcomings of approximate kNN search. We present a full NCAM design, estimate its performance using post-place-and-route results, and show that its power characteristics are compatible with emerging memory substrates. We then evaluate energy efficiency and latency against modern multi-core CPUs and GPGPU platforms using parameters typical of mobile and server workloads. Our simulation results show that the NCAM can achieve up to 160x better throughput per watt and three orders of magnitude lower latency than an NVIDIA Titan X GPU for server workloads, and ~2413x better throughput per watt and ~94.5x lower latency than a multi-core Intel E5-2620 CPU. Finally, we show that the NCAM is not limited to kNN and can be generalized to act as a content addressable memory (CAM) or ternary CAM (TCAM).
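For context, the kernel at issue is exact (brute-force) kNN search. The sketch below is not from the paper; it is a minimal NumPy illustration of why exact kNN is dominated by data movement: every database vector must be streamed from memory to answer a single query, which is the transfer cost the NCAM avoids by searching in place. The array names, sizes, and the NumPy formulation are illustrative assumptions.

```python
# Minimal sketch (not from the paper): exact brute-force kNN under Euclidean
# distance. Every row of `dataset` is read for each query, so the kernel is
# bandwidth-bound; this is the data movement the NCAM performs inside memory.
import numpy as np

def exact_knn(dataset: np.ndarray, query: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k rows of `dataset` closest to `query`."""
    diffs = dataset - query                      # (N, d) broadcast subtraction
    dists = np.einsum('ij,ij->i', diffs, diffs)  # squared L2 distance per row
    return np.argpartition(dists, k)[:k]         # k smallest (unordered)

# Illustrative usage: 100,000 128-dimensional descriptors, one query vector.
rng = np.random.default_rng(0)
db = rng.standard_normal((100_000, 128), dtype=np.float32)
q = rng.standard_normal(128, dtype=np.float32)
print(exact_knn(db, q, k=5))
```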
