hgpu.org » nVidia GeForce RTX 3060
Monica Dessole, Jolly Chen, Axel Naumann
Tags: CUDA, nVidia, nVidia A100, nVidia GeForce RTX 3060, nVidia L4, oneAPI, Package, Performance, Physics, SYCL
December 10, 2023 by hgpu
Jacob O. Tørring, Ben van Werkhoven, Filip Petrovic, Floris-Jan Willemsen, Jirí Filipovic, Anne C. Elster
Tags: Auto-Tuning, Benchmarking, Computer science, CUDA, nVidia, nVidia GeForce RTX 2080 Ti, nVidia GeForce RTX 3060, nVidia GeForce RTX 3090, nVidia Titan RTX, Package, performance portability
March 19, 2023 by hgpu
Anna Fortenberry, Stanimire Tomov
Tags: Computer science, CUDA, Heterogeneous systems, Linear Algebra, Matrix multiplication, nVidia, nVidia GeForce RTX 3060, oneAPI, Package, performance portability
December 25, 2022 by hgpu
Recent source codes
RepoLaunch: Automating Build and Test Pipeline of Code Repositories on ANY Language and ANY Platform
* * *
Most viewed papers (last 30 days)
- DICE: Diffusion Large Language Models Excel at Generating CUDA Kernels
- Accelerating Scientific Research with Gemini: Case Studies and Common Techniques
- Deep Kernel Fusion for Transformers
- Improving HPC Code Generation Capability of LLMs via Online Reinforcement Learning with Real-Machine Benchmark Rewards
- SciDef: Automating Definition Extraction from Academic Literature with Large Language Models
- StitchCUDA: An Automated Multi-Agents End-to-End GPU Programming Framework with Rubric-based Agentic Reinforcement Learning
- Dr. Kernel: Reinforcement Learning Done Right for Triton Kernel Generations
- Inside VOLT: Designing an Open-Source GPU Compiler (Tool)
- Execution-Centric Characterization of FP8 Matrix Cores, Asynchronous Execution, and Structured Sparsity on AMD MI300A
- HetCCL: Accelerating LLM Training with Heterogeneous GPUs
* * *