hgpu.org » nVidia GeForce GT 730
Nicolas Weber
Tags: Computer science, CUDA, nVidia, nVidia GeForce GT 440, nVidia GeForce GT 620, nVidia GeForce GT 730, nVidia GeForce GTX 1080, nVidia GeForce GTX 480, nVidia GeForce GTX 560 Ti, nVidia GeForce GTX 570, nVidia GeForce GTX 590, nVidia GeForce GTX 680, nVidia GeForce GTX 780, nVidia GeForce GTX 980, nVidia GeForce GTX Titan X, Performance, performance portability, Tesla C2070, Tesla K20, Thesis
August 8, 2017 by hgpu
Alberto Garcia-Garcia
Tags: CNN, Computer science, CUDA, Deep learning, Neural networks, nVidia, nVidia GeForce GT 730, nVidia GeForce GTX Titan X, Tesla K40, Thesis
September 10, 2016 by hgpu
W. B. Langdon, Brian Yee Hong Lam
Tags: Algorithms, Benchmarking, Biology, CUDA, Genomics, nVidia, nVidia GeForce GT 730, Package, Tesla K20, Tesla K40, Tesla K80
June 1, 2015 by hgpu
* * *
Most viewed papers (last 30 days)
- StitchCUDA: An Automated Multi-Agents End-to-End GPU Programing Framework with Rubric-based Agentic Reinforcement Learning
- Diagnosing FP4 inference: a layer-wise and block-wise sensitivity analysis of NVFP4 and MXFP4
- CUDA Agent: Large-Scale Agentic RL for High-Performance CUDA Kernel Generation
- Catalyst-Agent: Autonomous heterogeneous catalyst screening and optimization with an LLM Agent
- Architecture-Aware LLM Inference Optimization on AMD Instinct GPUs: A Comprehensive Benchmark and Deployment Study
- EvoScientist: Towards Multi-Agent Evolving AI Scientists for End-to-End Scientific Discovery
- Joint Training on AMD and NVIDIA GPUs
- Practical FP4 Training for Large-Scale MoE Models on Hopper GPUs
- CUDABench: Benchmarking LLMs for Text-to-CUDA Generation
- CodeScaler: Scaling Code LLM Training and Test-Time Inference via Execution-Free Reward Models