What is it about?
This study addresses the critical need for testing Convolutional Neural Networks (CNNs) in security-sensitive scenarios, focusing on generating error-inducing inputs for black-box CNN models under limited testing budgets. The proposed Interpretation Analysis based Transferable Test (IATT) method generates high-quality transferable test inputs by using interpretation techniques to identify the important regions of an image and then adding targeted perturbations to those regions. Experimental results on multiple deep learning models and on Google Cloud Vision show that IATT significantly outperforms baseline methods, achieving an 18.1%–52.7% higher Error-inducing Success Rate (ESR) while maintaining high input realism.
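To make the mechanism concrete, here is a minimal sketch of the general idea, not the authors' actual implementation: a Grad-CAM-style interpretation on a local white-box surrogate model highlights the regions the surrogate considers important, and a small FGSM-style perturbation is applied only inside those regions before the resulting test input is sent to the black-box model under test. The choice of Grad-CAM, the FGSM-style perturbation, and all function names below are assumptions made for illustration.

```python
# Hypothetical sketch: saliency-guided perturbation on a surrogate model.
# The real IATT method may use different interpretation and perturbation steps.
import torch
import torch.nn.functional as F
from torchvision import models


def gradcam_mask(model, layer, image, threshold=0.5):
    """Binary mask of the regions the surrogate model deems important (Grad-CAM style)."""
    activations, gradients = [], []
    fwd = layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
    logits = model(image)
    logits[0, logits[0].argmax()].backward()          # gradient of the predicted class score
    fwd.remove(); bwd.remove()
    weights = gradients[0].mean(dim=(2, 3), keepdim=True)             # channel importance
    cam = F.relu((weights * activations[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)          # normalize to [0, 1]
    return (cam >= threshold).float()                                 # keep only salient regions


def targeted_perturbation(model, image, mask, eps=4 / 255):
    """Add a small FGSM-style perturbation restricted to the important regions."""
    image = image.clone().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, logits.argmax(dim=1))              # push away from current prediction
    loss.backward()
    return (image + eps * image.grad.sign() * mask).clamp(0, 1).detach()


if __name__ == "__main__":
    surrogate = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    x = torch.rand(1, 3, 224, 224)                                    # stand-in for a real image
    mask = gradcam_mask(surrogate, surrogate.layer4[-1], x)
    test_input = targeted_perturbation(surrogate, x, mask)
    # `test_input` would then be submitted to the black-box model under test.
```

Restricting the perturbation to the salient regions is what keeps the modified image realistic while still targeting the features most likely to transfer across models.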
Why is it important?
This work addresses the critical challenge of testing black-box CNN models in security-sensitive scenarios where internal information is inaccessible. It proposes a novel method, IATT, to generate high-quality transferable test inputs that enhance error detection without compromising input realism. By significantly improving the Error-inducing Success Rate (ESR) compared to existing methods, this approach contributes to ensuring the reliability and robustness of CNN models, which is essential for their safe deployment in real-world applications.
Read the Original
This page is a summary of: IATT: Interpretation Analysis based Transferable Test Generation for Convolutional Neural Networks, ACM Transactions on Software Engineering and Methodology, November 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3705301.