What is it about?

Large language models (LLMs) must routinely handle heavy generation workloads under real-time response requirements. For instance, OpenAI's ChatGPT receives an average of 15 billion visits per month and roughly 270 million queries per day. Despite this, the computational efficiency robustness of LLMs has been largely overlooked. This paper reveals potential computational efficiency vulnerabilities in LLMs and introduces LLMEffiChecker, a tool that comprehensively tests the computational efficiency robustness of LLMs in both white-box and black-box scenarios.
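To make the idea concrete, here is a minimal sketch of the kind of test such a tool can run: slightly perturb an input and check whether the model suddenly needs far more decoding steps (and therefore more time and energy) to respond. This is our illustration rather than the paper's actual implementation; the Hugging Face model name and the simple character-level perturbation below are assumptions chosen for demonstration.

```python
# A minimal sketch (not the authors' implementation) of black-box efficiency
# testing: compare decoding steps and latency before and after a tiny input
# perturbation. The model and the perturbation are illustrative assumptions.
import random
import time

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # assumed stand-in model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def measure(text, max_new_tokens=512):
    """Black-box efficiency metrics: number of output tokens and latency."""
    inputs = tokenizer(text, return_tensors="pt")
    start = time.perf_counter()
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    latency = time.perf_counter() - start
    return output.shape[-1], latency

def char_perturb(text):
    """Toy character-level perturbation: one random character substitution."""
    i = random.randrange(len(text))
    return text[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + text[i + 1:]

seed = "The weather is nice today."
base_tokens, base_latency = measure(seed)
perturbed = char_perturb(seed)
pert_tokens, pert_latency = measure(perturbed)
print(f"original:  {base_tokens} tokens, {base_latency:.2f}s")
print(f"perturbed: {pert_tokens} tokens, {pert_latency:.2f}s")
# A large jump in output tokens for a tiny input change signals an
# efficiency vulnerability.
```

In a black-box setting only output length and latency are observable, which is what this sketch measures; a white-box tester could additionally inspect gradients or decoder states to search for such perturbations more systematically.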


Why is it important?

Our research is the first to identify computational efficiency vulnerabilities in large language models, and the first attempt to understand and test their computational efficiency robustness.

Perspectives

I hope this paper offers new insights and strategies for research on large language models.

Xiaoning Feng
Taiyuan University of Technology

Read the Original

This page is a summary of: LLMEffiChecker: Understanding and Testing Efficiency Degradation of Large Language Models, ACM Transactions on Software Engineering and Methodology, August 2024, ACM (Association for Computing Machinery).
DOI: 10.1145/3664812.
You can read the full text via the DOI above.

