Dahua Feng, Zhiming Xu, Rongxiang Wang, Felix Xiaozhu Lin
Tags: AI, Apple M2 Max, Apple M2 Pro, Apple M2 Ultra, Computer science, CUDA, Linear Algebra, LLM, Machine learning, nVidia, nVidia GeForce RTX 4090, nVidia GeForce RTX 2080 Ti, nVidia Quadro RTX 4000, nVidia RTX A6000, Performance, PyTorch
February 3, 2025 by hgpu



