GraphVite: A High-Performance CPU-GPU Hybrid System for Node Embedding

Zhaocheng Zhu, Shizhen Xu, Meng Qu, Jian Tang
Mila – Quebec AI Institute, Université de Montréal
arXiv:1903.00757 [cs.LG], (2 Mar 2019)

@misc{zhu2019graphvite,
   title={GraphVite: A High-Performance CPU-GPU Hybrid System for Node Embedding},
   author={Zhaocheng Zhu and Shizhen Xu and Meng Qu and Jian Tang},
   year={2019},
   eprint={1903.00757},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}


Learning continuous representations of nodes has recently attracted growing interest in both academia and industry, due to its simplicity and effectiveness in a variety of applications. Most existing node embedding algorithms and systems can process networks with hundreds of thousands or a few million nodes. However, scaling them to networks with tens of millions or even hundreds of millions of nodes remains a challenging problem. In this paper, we propose GraphVite, a high-performance CPU-GPU hybrid system for training node embeddings that co-optimizes the algorithm and the system. On the CPU side, augmented edge samples are generated in parallel by online random walks on the network and serve as the training data. On the GPU side, a novel parallel negative sampling scheme is proposed that lets multiple GPUs train node embeddings simultaneously with little data transfer and synchronization. Moreover, an efficient collaboration strategy further reduces the synchronization cost between CPUs and GPUs. Experiments on multiple real-world networks show that GraphVite is highly efficient: it takes only about one minute to train a network with 1 million nodes and 5 million edges on a single machine with 4 GPUs, and around 20 hours for a network with 66 million nodes and 1.8 billion edges. Compared to the current fastest system, GraphVite is about 50 times faster with no sacrifice in performance.
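The CPU-side pipeline the abstract describes, where random walks generate augmented edge samples online, can be sketched roughly as follows. This is a minimal illustration, not GraphVite's actual implementation: the function name and the `walk_length` and `window` parameters are assumptions for the sketch, and node pairs within a context window on each walk are emitted as positive training samples.

```python
import random
from collections import defaultdict

def augmented_edge_samples(edges, walk_length=40, window=5, num_walks=1):
    """Sketch of online augmented edge sampling: run random walks over
    the network and yield node pairs within a context window as
    positive samples (hypothetical parameters, not GraphVite's API)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)  # treat the network as undirected for the sketch
    for _ in range(num_walks):
        for start in list(adj):
            # generate one random walk starting from each node
            walk = [start]
            for _ in range(walk_length - 1):
                walk.append(random.choice(adj[walk[-1]]))
            # every pair within `window` hops is an augmented edge sample
            for i, u in enumerate(walk):
                for v in walk[i + 1 : i + 1 + window]:
                    yield (u, v)

samples = list(augmented_edge_samples([(0, 1), (1, 2), (2, 3)],
                                      walk_length=10, window=2))
```

In the actual system these samples would be produced continuously by CPU threads and consumed in batches by the GPU-side embedding training, so the walks never need to be materialized on disk.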

* * *

HGPU group © 2010-2019 hgpu.org

All rights belong to the respective authors