nVidia GeForce RTX 2080 Ti
Dahua Feng, Zhiming Xu, Rongxiang Wang, Felix Xiaozhu Lin
Tags: AI, Apple M2 Max, Apple M2 Pro, Apple M2 Ultra, Computer science, CUDA, Linear Algebra, LLM, Machine learning, nVidia, nVidia GeForce RTX 4090, nVidia GeForce RTX 2080 Ti, nVidia Quadro RTX 4000, nVidia RTX A6000, Performance, PyTorch
February 3, 2025 by hgpu
Nina Herrmann, Herbert Kuchen
Tags: Computer science, CUDA, Distributed computing, Heterogeneous systems, nVidia, nVidia GeForce GTX 750 Ti, nVidia GeForce RTX 2080 Ti, nVidia Quadro K620
January 15, 2023 by hgpu
* * *
Most viewed papers (last 30 days)
- Omniwise: Predicting GPU Kernels Performance with LLMs
- P4OMP: Retrieval-Augmented Prompting for OpenMP Parallelism in Serial Code
- Engineering Supercomputing Platforms for Biomolecular Applications
- GCStack+GCScaler: Fast and Accurate GPU Performance Analyses Using Fine-Grained Stall Cycle Accounting and Interval Analysis
- A First Look at Bugs in LLM Inference Engines
- Accelerated discovery and design of Fe-Co-Zr magnets with tunable magnetic anisotropy through machine learning and parallel computing
- Efficient GPU Implementation of Multi-Precision Integer Division
- ParEval-Repo: A Benchmark Suite for Evaluating LLMs with Repository-level HPC Translation Tasks
- No More Shading Languages: Compiling C++ to Vulkan Shaders
- WiLLM: An Open Wireless LLM Communication System