CASS: Nvidia to AMD Transpilation with Data, Models, and Benchmark
MBZUAI
arXiv:2505.16968 [cs.AR] (22 May 2025)
@misc{heakl2025cassnvidiaamdtranspilation,
  title         = {CASS: Nvidia to AMD Transpilation with Data, Models, and Benchmark},
  author        = {Ahmed Heakl and Sarim Hashmi and Gustavo Bertolo Stahl and Seung Hun Eddie Han and Salman Khan and Abdulrahman Mahmoud},
  year          = {2025},
  eprint        = {2505.16968},
  archivePrefix = {arXiv},
  primaryClass  = {cs.AR},
  url           = {https://arxiv.org/abs/2505.16968}
}
We introduce CASS, the first large-scale dataset and model suite for cross-architecture GPU code transpilation, targeting both source-level (CUDA ↔ HIP) and assembly-level (Nvidia SASS ↔ AMD RDNA3) translation. The dataset comprises 70k verified code pairs across host and device, addressing a critical gap in low-level GPU code portability. Leveraging this resource, we train the CASS family of domain-specific language models, achieving 95% source translation accuracy and 37.5% assembly translation accuracy, substantially outperforming commercial baselines such as GPT-4o, Claude, and Hipify. Our generated code matches native performance in over 85% of test cases, preserving runtime and memory behavior. To support rigorous evaluation, we introduce CASS-Bench, a curated benchmark spanning 16 GPU domains with ground-truth execution. All data, models, and evaluation tools are released as open source to foster progress in GPU compiler tooling, binary compatibility, and LLM-guided hardware translation. Dataset and benchmark are available.
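To make the source-level (CUDA ↔ HIP) task concrete, the minimal sketch below (not taken from the paper) shows a CUDA vector-add program; the comments mark the host-API calls that a transpiler such as Hipify, or a CASS-style model, would have to rewrite into their HIP equivalents. The device kernel itself is assumed to carry over unchanged, since HIP shares CUDA's __global__/threadIdx syntax.

// Minimal CUDA vector add. Comments note the HIP equivalent of each
// CUDA host-API call; the kernel body is identical in both dialects.
#include <cuda_runtime.h>   // HIP: #include <hip/hip_runtime.h>
#include <cstdio>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // device code: unchanged under HIP
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;
    cudaMalloc((void**)&da, bytes);                      // HIP: hipMalloc
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);   // HIP: hipMemcpy, hipMemcpyHostToDevice
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);          // triple-chevron launch also compiles under hipcc
    cudaDeviceSynchronize();                             // HIP: hipDeviceSynchronize

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);   // HIP: hipMemcpyDeviceToHost
    printf("c[0] = %f\n", hc[0]);                        // expected: 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);            // HIP: hipFree
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}

The assembly-level task described in the abstract (Nvidia SASS ↔ AMD RDNA3) operates below this layer, on the compiled output of nvcc and hipcc respectively, which is why its reported accuracy (37.5%) is much lower than the source-level figure (95%).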
May 25, 2025 by hgpu