hgpu.org » nVidia L20
Kaixuan Zhang, Yunfan Cui, Shuhao Zhang, Chutong Ding, Shiyou Qian, Luping Wang, Jian Cao, Guangtao Xue, Cheng Huang, Guodong Yang, Liping Zhang
Tags: Computer science, CUDA, Heterogeneous systems, Machine learning, nVidia, nVidia A100, nVidia A40, nVidia H100, nVidia H20, nVidia H200, nVidia H800, nVidia L20, nVidia L40, nVidia RTX 6000 Ada, Performance, Triton
January 25, 2026 by hgpu
Borui Wan, Gaohong Liu, Zuquan Song, Jun Wang, Yun Zhang, Guangming Sheng, Shuguang Wang, Houmin Wei, Chenyuan Wang, Weiqiang Lou, Xi Yang, Mofan Zhang, Kaihua Jiang, Cheng Ren, Xiaoyun Zhi, Menghan Yu, Zhe Nan, Zhuolin Zheng, Baoquan Zhong, Qinlong Wang, Huan Yu, Jinxin Chi, Wang Zhang, Yuhan Li, Zixian Du, Sida Zhao, Yongqiang Zhang, Jingzhe Tang, Zherui Liu, Chuan Wu, Yanghua Peng, Haibin Lin, Wencong Xiao, Xin Liu, Liang Xiang
Tags: AI, Computer science, CUDA, LLM, nVidia, nVidia L20
September 28, 2025 by hgpu
Recent source codes
A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5
* * *
Most viewed papers (last 30 days)
- DICE: Diffusion Large Language Models Excel at Generating CUDA Kernels
- BioAgent Bench: An AI Agent Evaluation Suite for Bioinformatics
- Accelerating Scientific Research with Gemini: Case Studies and Common Techniques
- Deep Kernel Fusion for Transformers
- SciDef: Automating Definition Extraction from Academic Literature with Large Language Models
- ProfInfer: An eBPF-based Fine-Grained LLM Inference Profiler
- Towards Automated Kernel Generation in the Era of LLMs
- Private LLM Inference on Consumer Blackwell GPUs: A Practical Guide for Cost-Effective Local Deployment in SMEs
- Improving HPC Code Generation Capability of LLMs via Online Reinforcement Learning with Real-Machine Benchmark Rewards
- PhysProver: Advancing Automatic Theorem Proving for Physics