hgpu.org » nVidia GeForce RTX 2080 Ti
Dahua Feng, Zhiming Xu, Rongxiang Wang, Felix Xiaozhu Lin
Tags: AI, Apple M2 Max, Apple M2 Pro, Apple M2 Ultra, Computer science, CUDA, Linear Algebra, LLM, Machine learning, nVidia, nVidia GeForce RTX 4090, nVidia GeForce RTX 2080 Ti, nVidia Quadro RTX 4000, nVidia RTX A6000, Performance, PyTorch
February 3, 2025 by hgpu
Nina Herrmann, Herbert Kuchen
Tags: Computer science, CUDA, Distributed computing, Heterogeneous systems, nVidia, nVidia GeForce GTX 750 Ti, nVidia GeForce RTX 2080 Ti, nVidia Quadro K620
January 15, 2023 by hgpu