What is it about?

Custom floating-point formats can be designed that take very little space yet provide all the functionality a specific application requires. We show how to construct such formats so that graph analytics produce exact results, and how to emulate these formats efficiently on standard hardware.
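
To make the emulation idea concrete, below is a minimal C++ sketch of how a narrow format could be decoded to, and encoded from, native floats so that the arithmetic itself runs on standard hardware. The 8-bit layout (4 exponent bits, 3 mantissa bits), the function names, and the round-toward-zero behaviour are illustrative assumptions, not the format from the paper; subnormals, infinities and NaNs are omitted for brevity.

#include <cmath>
#include <cstdint>
#include <cstdio>

// Hypothetical narrow format: 1 sign bit, E exponent bits, M mantissa bits.
constexpr int E = 4;
constexpr int M = 3;
constexpr int BIAS = (1 << (E - 1)) - 1;

// Decode the custom 8-bit float to a native float (normal numbers and
// zero only; subnormals, infinities and NaNs are left out of the sketch).
float decode(uint8_t x) {
    int sign = (x >> (E + M)) & 1;
    int exp  = (x >> M) & ((1 << E) - 1);
    int man  = x & ((1 << M) - 1);
    if (exp == 0 && man == 0) return sign ? -0.0f : 0.0f;
    float frac = 1.0f + man / float(1 << M);
    float v = std::ldexp(frac, exp - BIAS);
    return sign ? -v : v;
}

// Encode a native float into the custom format, rounding toward zero,
// flushing underflow to zero and saturating on overflow.
uint8_t encode(float v) {
    int sign = std::signbit(v) ? 1 : 0;
    v = std::fabs(v);
    if (v == 0.0f) return uint8_t(sign << (E + M));
    int exp;
    float frac = std::frexp(v, &exp);  // v = frac * 2^exp, frac in [0.5, 1)
    exp -= 1;
    frac *= 2.0f;                      // renormalize so frac is in [1, 2)
    int man = int((frac - 1.0f) * (1 << M));
    int ebits = exp + BIAS;
    if (ebits < 1) return uint8_t(sign << (E + M));              // underflow
    if (ebits >= (1 << E)) { ebits = (1 << E) - 1; man = (1 << M) - 1; }  // overflow
    return uint8_t((sign << (E + M)) | (ebits << M) | man);
}

int main() {
    float x = 1.5f;
    uint8_t c = encode(x);
    printf("%f -> 0x%02x -> %f\n", x, (unsigned)c, decode(c));
    return 0;
}

In a graph kernel, values would typically be decoded on the fly, accumulated in a wider native type, and encoded back to the narrow format, so memory traffic shrinks while the arithmetic stays on standard floating-point units.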

Why is it important?

Narrow floating-point formats have gained attention in machine learning, where several 8-bit and 16-bit floating-point formats have been proposed, yet there is no consensus on which is "best". Indeed, we conjecture that formats must be application-specific, i.e., each application potentially requires its own format, because each needs to represent a specific range of numbers. By designing the format appropriately, we can guarantee precise numeric results.
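
As a rough illustration of why the exponent/mantissa split ties a format to a range of numbers (stated here with IEEE-754-style conventions, which need not match the encodings in the paper): a format with $e$ exponent bits and $m$ mantissa bits has bias $b = 2^{e-1} - 1$ and represents normal magnitudes in

\[ \left[\, 2^{1-b},\ \left(2 - 2^{-m}\right) \cdot 2^{b} \,\right]. \]

For instance, $e = 4$, $m = 3$ covers roughly $[1.6 \times 10^{-2},\ 240]$, while $e = 5$, $m = 2$ covers roughly $[6 \times 10^{-5},\ 5.7 \times 10^{4}]$: the same 8 bits, very different ranges, so the right split depends on the values the application actually produces.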

Perspectives

Several published papers show that applications like PageRank can converge when narrow floating-point formats are used judiciously. However, no prior work has demonstrated actual performance speedups on CPUs; FPGAs have often been suggested as the platform on which to implement such number systems. One prior work, by Enrique Quintana-Ortí, Hartwig Anzt and others, demonstrates a similar approach on GPUs, but it is constrained in the floating-point formats it can use.

Hans Vandierendonck
Queen's University Belfast

Read the Original

This page is a summary of: Software-defined floating-point number formats and their application to graph processing, June 2022, ACM (Association for Computing Machinery).
DOI: 10.1145/3524059.3532360
