hgpu.org » nVidia GeForce RTX 2080 Ti
Dahua Feng, Zhiming Xu, Rongxiang Wang, Felix Xiaozhu Lin
Tags: AI, Apple M2 Max, Apple M2 Pro, Apple M2 Ultra, Computer science, CUDA, Linear Algebra, LLM, Machine learning, nVidia, nVidia GeForce RTX 4090, nVidia GeForce RTX 2080 Ti, nVidia Quadro RTX 4000, nVidia RTX A6000, Performance, PyTorch
February 3, 2025 by hgpu
Nina Herrmann, Herbert Kuchen
Tags: Computer science, CUDA, Distributed computing, Heterogeneous systems, nVidia, nVidia GeForce GTX 750 Ti, nVidia GeForce RTX 2080 Ti, nVidia Quadro K620
January 15, 2023 by hgpu
Recent source codes
A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5
* * *
Most viewed papers (last 30 days)
- DICE: Diffusion Large Language Models Excel at Generating CUDA Kernels
- Accelerating Scientific Research with Gemini: Case Studies and Common Techniques
- Deep Kernel Fusion for Transformers
- Improving HPC Code Generation Capability of LLMs via Online Reinforcement Learning with Real-Machine Benchmark Rewards
- SciDef: Automating Definition Extraction from Academic Literature with Large Language Models
- StitchCUDA: An Automated Multi-Agents End-to-End GPU Programming Framework with Rubric-based Agentic Reinforcement Learning
- Dr. Kernel: Reinforcement Learning Done Right for Triton Kernel Generations
- Inside VOLT: Designing an Open-Source GPU Compiler (Tool)
- Execution-Centric Characterization of FP8 Matrix Cores, Asynchronous Execution, and Structured Sparsity on AMD MI300A
- HetCCL: Accelerating LLM Training with Heterogeneous GPUs
* * *