What is it about?
We consider a weakly supervised regression problem for large datasets with uncertainties. "Weakly" means that some labels are unknown or uncertain, a typical situation caused by random noise, poor equipment, measurements that are expensive in money and time, or a lack of human resources. To optimise the loss function we use manifold regularisation and low-rank matrix techniques, which speed up the matrix computations and reduce memory requirements. These fast algorithms let us process larger datasets in less time and obtain more accurate results and predictions.
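To make the general idea concrete, here is a minimal sketch, not the exact algorithm from the paper, of manifold-regularised regression with weak labels: a squared loss is applied only to labeled points (weighted by an optional per-label confidence), a graph-Laplacian penalty encourages predictions to vary smoothly along the data manifold, and a Nystrom-style low-rank feature map keeps the linear algebra down to a small r x r system. All function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Pairwise RBF kernel between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def knn_laplacian(X, k=10):
    # Unnormalised graph Laplacian of a symmetrised k-nearest-neighbour graph.
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip the point itself
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                  # symmetrise the adjacency
    return np.diag(W.sum(1)) - W

def nystrom_features(X, m=50, gamma=1.0, seed=0):
    # Rank-m Nystrom feature map Z such that K is approximately Z @ Z.T.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    Kmm = rbf(X[idx], X[idx], gamma)
    Knm = rbf(X, X[idx], gamma)
    vals, vecs = np.linalg.eigh(Kmm + 1e-8 * np.eye(m))
    return Knm @ vecs / np.sqrt(np.maximum(vals, 1e-12))

def weak_manifold_regression(X, y, labeled_mask, conf=None,
                             gamma_A=1e-2, gamma_I=1e-1):
    # y values at unlabeled points are ignored (their weight is zero).
    # conf: optional per-point label confidence in [0, 1].
    Z = nystrom_features(X)                 # n x r low-rank features, r << n
    L = knn_laplacian(X)
    c = labeled_mask.astype(float) if conf is None else conf * labeled_mask
    # Solve (Z' C Z + gamma_A I + gamma_I Z' L Z) w = Z' C y, an r x r system.
    A = (Z.T @ (c[:, None] * Z)
         + gamma_A * np.eye(Z.shape[1])
         + gamma_I * Z.T @ (L @ Z))
    w = np.linalg.solve(A, Z.T @ (c * y))
    return Z @ w                            # predictions for all n points
```

Because the system being solved is only r x r, with r much smaller than the number of data points, memory use and runtime stay modest even for large datasets, which is the practical payoff of the low-rank representation.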
Featured image: Photo by National Cancer Institute on Unsplash
Why is it important?
Nowadays, machine learning (ML) theory and methods are rapidly developing and increasingly used in various fields of science and technology. An urgent problem remains the further improvement of ML methodology: developing methods that deliver accurate and reliable solutions in a reasonable time despite noise distortions, large data sizes, and a lack of training information. In many applications only a small part of the data can be labeled, i.e., the values of the predicted feature are not provided for all data objects. Moreover, when the amount of data is large and the resources for processing it are limited, some data objects may be labeled inaccurately.
Read the Original
This page is a summary of: Weakly Supervised Regression Using Manifold Regularization and Low-Rank Matrix Representation, January 2021, Springer Science + Business Media, DOI: 10.1007/978-3-030-77876-7_30.