What is it about?
The simulation of fluid dynamics, typically by numerically solving partial differential equations, is an essential tool in many areas of science and engineering. However, its high computational cost can limit practical application and may prohibit exploring large parameter spaces. Recent deep-learning approaches have demonstrated the potential of surrogate models for simulating fluid dynamics. While such models are less accurate than conventional solvers, their low runtime makes them appealing for design-space exploration. We introduce two novel graph neural network (GNN) models for extrapolating the time evolution of fluid flow. In both models, previous flow states are processed on successively coarsened versions of the simulation graph, which enables faster information propagation through the network and improves the capture and forecast of the system state, particularly in problems with phenomena spanning a range of length scales. Additionally, the second model is architecturally equivariant to rotations, which allows the network to learn the underlying physics more efficiently, leading to improved accuracy and generalization. We analyze these models on two canonical fluid problems: advection and incompressible fluid dynamics.
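To make the multi-scale idea concrete, the sketch below shows one plausible way to write a two-scale message-passing step in PyTorch: node features are updated on the original graph, pooled onto a coarser graph (each fine node assigned to a coarse cluster), updated again at the coarse level, and mapped back. This is an illustrative sketch under our own assumptions, not the authors' implementation; the names (message_passing, TwoScaleGNN, cluster) and the mean-pooling choice are assumptions for illustration only.

# Illustrative sketch of a two-scale GNN step (not the authors' code).
import torch
import torch.nn as nn

def message_passing(x, edge_index, mlp):
    """One round of mean-aggregated message passing over directed edges (src -> dst)."""
    src, dst = edge_index                                  # edge_index: LongTensor of shape (2, num_edges)
    messages = mlp(torch.cat([x[src], x[dst]], dim=-1))    # message per edge from endpoint features
    out = torch.zeros_like(x).index_add(0, dst, messages)  # sum messages at each destination node
    deg = torch.zeros(x.shape[0], 1).index_add(0, dst, torch.ones(dst.shape[0], 1))
    return out / deg.clamp(min=1)                          # mean over incoming messages

class TwoScaleGNN(nn.Module):
    """Fine-scale pass, pool to a coarse graph, coarse-scale pass, unpool, fuse."""
    def __init__(self, dim):
        super().__init__()
        self.fine_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.coarse_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.decoder = nn.Linear(2 * dim, dim)

    def forward(self, x, fine_edges, coarse_edges, cluster):
        # cluster[i] = index of the coarse node that fine node i belongs to
        h_fine = x + message_passing(x, fine_edges, self.fine_mlp)

        # pool: average fine-node features within each coarse cluster
        n_coarse = int(cluster.max()) + 1
        h_coarse = torch.zeros(n_coarse, x.shape[-1]).index_add(0, cluster, h_fine)
        counts = torch.zeros(n_coarse, 1).index_add(0, cluster, torch.ones(len(cluster), 1))
        h_coarse = h_coarse / counts.clamp(min=1)

        # coarse-scale message passing: each step now covers a much larger physical distance
        h_coarse = h_coarse + message_passing(h_coarse, coarse_edges, self.coarse_mlp)

        # unpool: copy each coarse feature back to its fine nodes and fuse with the fine features
        return self.decoder(torch.cat([h_fine, h_coarse[cluster]], dim=-1))

The coarse-level pass is what lets information traverse the domain in fewer message-passing steps, which is the mechanism behind the faster information propagation described above; the rotation-equivariant variant additionally constrains how the learned functions act on vector-valued features, and is not shown in this sketch.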
Why is it important?
The proposed GNN models can generalize from uniform advection fields to high-gradient fields on complex domains. The multi-scale graph architecture infers flows across a range of Reynolds numbers and design parameters more effectively than a baseline single-scale GNN. Simulations are between two and four orders of magnitude faster than the numerical solvers used to generate the training data. This enables real-time simulation and the exploration of large parameter spaces.
Read the Original
This page is a summary of: Multi-scale rotation-equivariant graph neural networks for unsteady Eulerian fluid dynamics, Physics of Fluids, August 2022, American Institute of Physics. DOI: 10.1063/5.0097679.