What is it about?

Learning with Noisy Labels (LNL) poses a significant challenge for the machine learning community. Many of the most widely used approaches select as clean those samples on which the model being trained (the in-training model) is highly confident, e.g., samples with a 'small loss'. Such approaches can suffer from the so-called 'self-confirmation' bias, which arises because the in-training model is itself at least partially trained on the noisy labels. A further challenge in classification is that some of the label noise occurs between classes that are visually very similar ('hard noise'). This paper addresses these challenges with CLIPCleaner, a method that leverages CLIP, a powerful Vision-Language (VL) model, to construct a zero-shot classifier for efficient, offline selection of clean samples. This has two advantages: the sample selection is decoupled from the in-training model, and, because of the way CLIP is trained, the selection is aware of the semantic and visual similarities between classes.
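To make the core idea concrete, the sketch below shows one way a fixed CLIP zero-shot classifier could be used to flag likely-clean samples: keep a sample when the zero-shot probability of its given (possibly noisy) label is high. This is only a minimal illustration under my own assumptions, not the paper's exact selection rule; the prompt template, the threshold value, and the helper names (`build_zero_shot_classifier`, `select_clean`) are illustrative, and the paper's criterion is richer than a simple probability cut-off.

```python
# Minimal sketch (not the paper's exact procedure): a CLIP zero-shot classifier,
# built once and kept fixed, flags samples whose given label it also finds likely.
import torch
import clip  # OpenAI CLIP package: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)


def build_zero_shot_classifier(class_names):
    """Encode one text prompt per class; returns (num_classes, dim) unit vectors."""
    prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    with torch.no_grad():
        text_feats = model.encode_text(prompts)
    return text_feats / text_feats.norm(dim=-1, keepdim=True)


def select_clean(images, noisy_labels, text_feats, threshold=0.5):
    """Keep a sample if CLIP's zero-shot probability of its given label is high.

    `images` is a batch of preprocessed image tensors, `noisy_labels` the
    (possibly wrong) integer labels; `threshold` is an illustrative value.
    """
    with torch.no_grad():
        img_feats = model.encode_image(images.to(device))
        img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
        # Cosine similarities scaled by CLIP's usual logit scale, then softmax over classes.
        probs = (100.0 * img_feats @ text_feats.T).softmax(dim=-1)
    prob_of_given_label = probs[torch.arange(len(noisy_labels)), noisy_labels]
    return prob_of_given_label > threshold  # boolean mask of "clean" samples
```

Whatever the exact criterion, the key design choice this illustrates is that the selector never sees the in-training model or its losses, so its mistakes cannot be reinforced by training on the noisy labels.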

Why is it important?

Compared to current methods that combine iterative sample selection with various other techniques, CLIPCleaner offers a simple, single-step approach that achieves competitive or superior performance on benchmark datasets. To the best of our knowledge, this is the first time a VL model has been used for sample selection in LNL, highlighting the potential of such models in this domain.

Perspectives

Working on this paper has been very fulfilling, because it brings together two of the areas I most enjoy working on: weakly-supervised learning and the unique power of vision-language models, particularly CLIP. Decoupling sample selection from training struck me as an effective way of reducing self-confirmation bias and, at the same time, an exciting application of vision-language models beyond their original tasks. I sincerely hope this work will change how people think about the noisy-label problem and spark further ideas. With CLIPCleaner, I want to give the machine learning community a simple but powerful tool that improves robustness on difficult classification tasks.

Chen Feng
University College London

Read the Original

This page is a summary of: CLIPCleaner: Cleaning Noisy Labels with CLIP, October 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3664647.3680664.
