nVidia GeForce GT 520
Kishore Kothapalli, Dip Sankar Banerjee, P. J. Narayanan, Surinder Sood, Aman Kumar Bahl, Shashank Sharma, Shrenik Lad, Krishna Kumar Singh, Kiran Matam, Sivaramakrishna Bharadwaj, Rohit Nigam, Parikshit Sakurikar, Aditya Deshpande, Ishan Misra, Siddharth Choudhary, Shubham Gupta
Tags: Computer science, Databases, Hybrid computing, Image processing, nVidia, nVidia GeForce GT 520, Sparse matrix, Tesla T10
March 14, 2013 by hgpu
W. Feng, H. Lin, T. Scogland, J. Zhang
Tags: ATI, ATI Radeon HD 5450, Benchmarking, Computer science, Heterogeneous systems, nVidia, nVidia GeForce GT 520, OpenCL, Tesla C2050
May 1, 2012 by hgpu
Raman Sehgal, A. K. Mohanty
Tags: Algorithms, CUDA, Jets, Nuclear physics, nVidia, nVidia GeForce GT 520, Physics
January 16, 2012 by hgpu
Most viewed papers (last 30 days)
- Architecture-Aware LLM Inference Optimization on AMD Instinct GPUs: A Comprehensive Benchmark and Deployment Study
- AutoKernel: Autonomous GPU Kernel Optimization via Iterative Agent-Driven Search
- LLMQ: Efficient Lower-Precision LLM Training for Consumer GPUs
- CuTeGen: An LLM-Based Agentic Framework for Generation and Optimization of High-Performance GPU Kernels using CuTe
- DRTriton: Large-Scale Synthetic Data Reinforcement Learning for Triton Kernel Generation
- MobileKernelBench: Can LLMs Write Efficient Kernels for Mobile Devices?
- Mixed-precision numerics in scientific applications: survey and perspectives
- Triton-Sanitizer: A Fast and Device-Agnostic Memory Sanitizer for Triton with Rich Diagnostic Context
- SOL-ExecBench: Speed-of-Light Benchmarking for Real-World GPU Kernels Against Hardware Limits
- MegaTrain: Full Precision Training of 100B+ Parameter Large Language Models on a Single GPU