Dahua Feng, Zhiming Xu, Rongxiang Wang, Felix Xiaozhu Lin
Tags: AI, Apple M2 Max, Apple M2 Pro, Apple M2 Ultra, Computer science, CUDA, Linear Algebra, LLM, Machine learning, nVidia, nVidia GeForce RTX 4090, nVidia GeForce RTX 2080 Ti, nVidia Quadro RTX 4000, nVidia RTX A6000, Performance, PyTorch
February 3, 2025 by hgpu