What is it about?
Graph Neural Networks (GNNs) are essential for analyzing complex relationships, such as social networks or fraud patterns. However, as graphs grow to a "super-large" scale—with hundreds of billions of connections—they typically require massive, expensive server clusters and often suffer from "neighbor explosion," which exhausts memory and slows down computation. Our paper introduces LPS-GNN, a framework designed to make massive-scale graph learning affordable and efficient. At its heart is LPMetis, a new partitioning algorithm that combines the lightning speed of label propagation with the balanced partitioning of traditional methods. By using a clever "subgraph augmentation" strategy, LPS-GNN allows a single, standard GPU to train on a graph with 100 billion edges in just 10 hours. Essentially, we have built a "high-performance engine" that enables everyday hardware to process the world's largest digital networks without sacrificing accuracy.
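The paper does not spell out LPMetis here, but the core idea of combining label-propagation clustering with size-balanced assignment can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's implementation; the function name, parameters, and the greedy balancing step are all assumptions for illustration:

```python
import random
from collections import Counter, defaultdict

def label_propagation_partition(edges, num_parts, iterations=10, seed=0):
    """Toy sketch: cluster nodes by label propagation, then fold the
    clusters into `num_parts` size-balanced partitions (not LPMetis)."""
    random.seed(seed)
    neighbors = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    nodes = list(neighbors)
    labels = {n: n for n in nodes}  # every node starts in its own cluster
    for _ in range(iterations):
        random.shuffle(nodes)
        changed = False
        for n in nodes:
            # Adopt the most common label among neighbors (smallest id breaks ties).
            counts = Counter(labels[m] for m in neighbors[n])
            best = max(counts, key=lambda l: (counts[l], -l))
            if best != labels[n]:
                labels[n] = best
                changed = True
        if not changed:
            break
    # Greedily assign clusters, largest first, to the lightest partition.
    clusters = defaultdict(list)
    for n, l in labels.items():
        clusters[l].append(n)
    loads = [0] * num_parts
    part_of = {}
    for members in sorted(clusters.values(), key=len, reverse=True):
        p = loads.index(min(loads))
        for n in members:
            part_of[n] = p
        loads[p] += len(members)
    return part_of

# Two triangles joined by a single bridge edge: propagation groups densely
# connected nodes, and the greedy pass balances cluster sizes across parts.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
parts = label_propagation_partition(edges, num_parts=2)
```

The appeal of this family of methods, as the summary notes, is speed: each propagation pass touches every edge once, so it scales far better than spectral or multilevel partitioners, while the balancing pass keeps partitions roughly equal in size for downstream GPU training.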
Why is it important?
This work bridges the gap between academic innovation and large-scale industrial application. Successfully deployed on the Tencent platform, LPS-GNN has already demonstrated a 13.8% improvement in real-world user-acquisition scenarios. The significance of this research is threefold:

Cost Efficiency: It demonstrates that large-scale AI tasks don't always require large budgets, enabling organizations to process massive datasets with minimal hardware.

High Compatibility: The framework is flexible and can be integrated with various existing GNN algorithms, making it a versatile tool for the research community.

Proven Performance: By outperforming state-of-the-art models on both public and real-world datasets, it sets a new benchmark for efficiency and accuracy in graph mining.
Perspectives
As a researcher, I’ve seen many elegant GNN models fail when they meet the 'wall' of real-world data scale. The most rewarding part of developing LPS-GNN was proving that we don't always need more hardware to solve bigger problems; we need smarter algorithms. Seeing our framework process a 100-billion-edge graph—the scale of a massive gaming social network—on a single GPU was a defining moment. By bridging the gap between theoretical GNNs and the extreme demands of industrial deployment at Tencent, we hope to make 'super-large' graph learning more accessible and sustainable for the entire community.
Xu Cheng
Tsinghua University
Read the Original
This page is a summary of: LPS-GNN: Deploying Graph Neural Networks on Graphs with 100-Billion Edges, ACM Transactions on Knowledge Discovery from Data, March 2026, ACM (Association for Computing Machinery).
DOI: 10.1145/3801100.