hgpu.org » nVidia GeForce RTX 2080 Ti
Dahua Feng, Zhiming Xu, Rongxiang Wang, Felix Xiaozhu Lin
Tags: AI, Apple M2 Max, Apple M2 Pro, Apple M2 Ultra, Computer science, CUDA, Linear Algebra, LLM, Machine learning, nVidia, nVidia GeForce RTX 4090, nVidia GeForce RTX 2080 Ti, nVidia Quadro RTX 4000, nVidia RTX A6000, Performance, PyTorch
February 3, 2025 by hgpu
Nina Herrmann, Herbert Kuchen
Tags: Computer science, CUDA, Distributed computing, Heterogeneous systems, nVidia, nVidia GeForce GTX 750 Ti, nVidia GeForce RTX 2080 Ti, nVidia Quadro K620
January 15, 2023 by hgpu
* * *
Most viewed papers (last 30 days)
- Analyzing Modern NVIDIA GPU cores
- Hardware-Assisted Software Testing and Debugging for Heterogeneous Computing
- Advances in Semantic Patching for HPC-oriented Refactorings with Coccinelle
- TileLink: Generating Efficient Compute-Communication Overlapping Kernels using Tile-Centric Primitives
- PyGraph: Robust Compiler Support for CUDA Graphs in PyTorch
- GigaAPI for GPU Parallelization
- Large Language Model Powered C-to-CUDA Code Translation: A Novel Auto-Parallelization Framework
- Scalability Evaluation of HPC Multi-GPU Training for ECG-based LLMs
- Efficient allocation of image recognition and LLM tasks on multi-GPU system
- A Power-Efficient Scheduling Approach in a Cpu-Gpu Computing System by Thread-Based Parallel Programming
* * *