nVidia H20
Weinan Dai, Hanlin Wu, Qiying Yu, Huan-ang Gao, Jiahao Li, Chengquan Jiang, Weiqiang Lou, Yufan Song, Hongli Yu, Jiaze Chen, Wei-Ying Ma, Ya-Qin Zhang, Jingjing Liu, Mingxuan Wang, Xin Liu, Hao Zhou
Tags: Code generation, Computer science, CUDA, Deep learning, nVidia, nVidia H20, Package
March 4, 2026 by hgpu
Kaixuan Zhang, Yunfan Cui, Shuhao Zhang, Chutong Ding, Shiyou Qian, Luping Wang, Jian Cao, Guangtao Xue, Cheng Huang, Guodong Yang, Liping Zhang
Tags: Computer science, CUDA, Heterogeneous systems, Machine learning, nVidia, nVidia A100, nVidia A40, nVidia H100, nVidia H20, nVidia H200, nVidia H800, nVidia L20, nVidia L40, nVidia RTX 6000 Ada, Performance, Triton
January 25, 2026 by hgpu
* * *
Most viewed papers (last 30 days)
- Architecture-Aware LLM Inference Optimization on AMD Instinct GPUs: A Comprehensive Benchmark and Deployment Study
- AutoKernel: Autonomous GPU Kernel Optimization via Iterative Agent-Driven Search
- LLMQ: Efficient Lower-Precision LLM Training for Consumer GPUs
- CuTeGen: An LLM-Based Agentic Framework for Generation and Optimization of High-Performance GPU Kernels using CuTe
- DRTriton: Large-Scale Synthetic Data Reinforcement Learning for Triton Kernel Generation
- MobileKernelBench: Can LLMs Write Efficient Kernels for Mobile Devices?
- Mixed-precision numerics in scientific applications: survey and perspectives
- Triton-Sanitizer: A Fast and Device-Agnostic Memory Sanitizer for Triton with Rich Diagnostic Context
- SOL-ExecBench: Speed-of-Light Benchmarking for Real-World GPU Kernels Against Hardware Limits
- MegaTrain: Full Precision Training of 100B+ Parameter Large Language Models on a Single GPU