What is it about?

The paper can be summarized as follows: the irrational exuberance surrounding machine learning methods for solving genuinely hard problems should be tempered. More specifically, there have been numerous attempts to use machine learning to solve challenging optimization problems that are known to be insurmountable for more conventional methods. Experiments put forward by proponents of these machine learning methods seemingly suggest that they provide a noticeable advantage over traditional methods. Counterarguments were raised, however, contending that those publications did not adopt appropriate benchmarks. Rather than debating the merits of these arguments and counterarguments, or the adequacy of the benchmarks, the present publication exhibits very explicit barriers that any method for solving these optimization problems has to face, machine learning or otherwise. Sadly, it furthermore shows that the methods proposed by the machine learning proponents suffer from the same limitations that more traditional approaches are known to suffer from, and thus such methods provably cannot offer any advantage. To put it bluntly, base metal cannot be turned into gold, and no machine learning tool can change that.

Why is it important?

Machine learning and artificial intelligence are undoubtedly among the most important technological innovations of our day. We encounter them throughout our daily lives. Having said that, we should not expect ML/AI methods to deliver miracles. Known barriers that are insurmountable for classical methods apply to machine learning methods as well. The present paper clarifies these barriers in very explicit terms.

Perspectives

I would like to take this opportunity to thank an anonymous reviewer, with whom I clearly share many philosophical reservations and much skepticism about the “omnipotence” of machine learning and AI methods to solve any problem in the world! The reviewer is clearly an extremely knowledgeable person in the field, who advocates rigor, precision, and a meticulous approach to science, which, sadly, is so often sacrificed for the sake of publicity and hype. I am indebted to this reviewer for very constructive comments and criticism.

David Gamarnik
Massachusetts Institute of Technology

Read the Original

This page is a summary of: Barriers for the performance of graph neural networks (GNN) in discrete random structures, Proceedings of the National Academy of Sciences, November 2023,
DOI: 10.1073/pnas.2314092120.
