Binary Code Summarization: Benchmarking ChatGPT/GPT-4 and Other Large Language Models

Xin Jin, Jonathan Larson, Weiwei Yang, Zhiqiang Lin
The Ohio State University
arXiv:2312.09601 [cs.CR], 15 Dec 2023

@misc{jin2023binary,
   title={Binary Code Summarization: Benchmarking ChatGPT/GPT-4 and Other Large Language Models},
   author={Xin Jin and Jonathan Larson and Weiwei Yang and Zhiqiang Lin},
   year={2023},
   eprint={2312.09601},
   archivePrefix={arXiv},
   primaryClass={cs.CR}
}

Binary code summarization, while invaluable for understanding code semantics, is challenging due to its labor-intensive nature. This study delves into the potential of large language models (LLMs) for binary code comprehension. To this end, we present BinSum, a comprehensive benchmark and dataset of over 557K binary functions, and introduce a novel method for prompt synthesis and optimization. To more accurately gauge LLM performance, we also propose a new semantic similarity metric that surpasses traditional exact-match approaches. Our extensive evaluation of prominent LLMs, including ChatGPT, GPT-4, Llama 2, and Code Llama, reveals 10 pivotal insights. This evaluation generated 4 billion inference tokens and incurred a total cost of 11,418 US dollars and 873 NVIDIA A100 GPU hours. Our findings highlight both the transformative potential of LLMs in this field and the challenges yet to be overcome.
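The paper defines its own semantic similarity metric; as a rough illustration of the general idea (comparing sentence embeddings of the generated and reference summaries instead of requiring an exact string match), here is a minimal sketch in Python. The sentence-transformers library and the all-MiniLM-L6-v2 model are assumptions for illustration, not the paper's actual setup:

# Minimal sketch of an embedding-based semantic similarity metric for
# comparing a model-generated summary against a ground-truth summary.
# NOTE: an illustrative approximation, not the paper's exact metric; the
# library and model name here are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def semantic_similarity(generated: str, reference: str) -> float:
    # Cosine similarity of sentence embeddings; higher means closer meaning.
    emb = model.encode([generated, reference], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

# Semantically equivalent summaries score high even without exact overlap,
# which an exact-match metric would score as 0.
print(semantic_similarity(
    "Computes the CRC32 checksum of the input buffer.",
    "This function calculates a 32-bit cyclic redundancy check over the data.",
))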