hgpu.org » nVidia GeForce GTX Titan X
Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, Xiaoqiang Zheng
Tags: Artificial intelligence, Computer science, CUDA, Deep learning, Heterogeneous systems, Machine learning, Neural networks, nVidia, nVidia GeForce GTX Titan X, Package, Tesla K40
May 30, 2016 by hgpu