What is it about?

Hypergraphs are a natural representation for a broad range of systems in which group (i.e., higher-order or many-to-many) relationships exist among their interacting parts. In this survey, we review the emerging problem of hypergraph representation learning, whose goal is to learn a function that projects the objects of an input hypergraph, most commonly its nodes, into a latent space such that both the structural and relational properties of the hypergraph are encoded and preserved.
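
To make the setting concrete, the sketch below (not taken from the survey; all nodes, hyperedges, and dimensions are invented for illustration) stores a toy hypergraph as a node-by-hyperedge incidence matrix and derives 2-dimensional node embeddings from a truncated SVD of that matrix, so that nodes sharing many hyperedges land close together in the latent space.

```python
# Minimal illustrative sketch: a toy hypergraph as an incidence matrix H
# (H[v, e] = 1 if node v belongs to hyperedge e) and node embeddings obtained
# from a truncated SVD of H. Names and sizes are invented for illustration.
import numpy as np

nodes = ["a", "b", "c", "d", "e"]
hyperedges = [
    {"a", "b", "c"},   # a group interaction among three nodes
    {"c", "d"},        # a pairwise interaction (still a valid hyperedge)
    {"b", "d", "e"},   # another group interaction
]

# Incidence matrix: rows are nodes, columns are hyperedges.
H = np.zeros((len(nodes), len(hyperedges)))
for j, edge in enumerate(hyperedges):
    for v in edge:
        H[nodes.index(v), j] = 1.0

# Project each node into a k-dimensional latent space; nodes that share many
# hyperedges receive similar coordinates.
k = 2
U, S, _ = np.linalg.svd(H, full_matrices=False)
node_embeddings = U[:, :k] * S[:k]   # shape: (num_nodes, k)

for v, vec in zip(nodes, node_embeddings):
    print(v, np.round(vec, 3))
```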

Why is it important?

Hypergraphs have attracted increasing attention in recent years thanks to their flexibility in naturally modeling a broad range of systems: hyperedges can capture (possibly indecomposable) group interactions that cannot be described simply in terms of dyads and, hence, via ordinary graphs. In this context, hypergraph representation learning (also known as hypergraph embedding) plays a critical role in solving analytic problems effectively and efficiently. The underlying idea is that representing the nodes and hyperedges as a set of low-dimensional vectors allows traditional vector-based machine learning algorithms to run efficiently on the hypergraph.
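
As a concrete (and purely hypothetical) illustration of this idea, the sketch below takes stand-in node embeddings, here just synthetic 2-D vectors rather than vectors learned from a real hypergraph, and feeds them directly to an off-the-shelf vector-based algorithm (k-means clustering from scikit-learn); any downstream task that consumes feature vectors, such as node classification or hyperedge prediction, could be plugged in the same way.

```python
# Minimal illustrative sketch: once nodes are represented as low-dimensional
# vectors, standard vector-based machine learning tools apply directly.
# The "embeddings" below are synthetic stand-ins, not learned from data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two loose groups of points in a 2-D latent space, mimicking node embeddings.
node_embeddings = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.1, size=(10, 2)),
    rng.normal(loc=[1.0, 1.0], scale=0.1, size=(10, 2)),
])

# A classic vector-based algorithm (k-means) consumes the embeddings as-is.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(node_embeddings)
print(labels)
```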

Read the Original

This page is a summary of: A Survey on Hypergraph Representation Learning, ACM Computing Surveys, June 2023, ACM (Association for Computing Machinery), DOI: 10.1145/3605776. The full text is available via the DOI.
