What is it about?

Traffic forecasting requires jointly modeling non-linear spatio-temporal dependencies at different scales. While Graph Neural Network models have been used effectively to capture non-linear spatial dependencies, capturing the dynamic spatial dependencies between locations remains a major challenge. Errors in capturing such dependencies propagate into the modeling of temporal dependencies between locations, severely degrading long-term predictions. Transformer-based mechanisms have recently been proposed for capturing dynamic spatial dependencies, but these methods are susceptible to fluctuations in the data brought on by unforeseen events such as traffic congestion and accidents. To mitigate these issues we propose a Spatio-Temporal Parallel Transformer (STPT) based model for traffic prediction that passes multiple adjacency graphs through a pair of coupled graph transformer-convolution network units, operating in parallel, to generate more noise-resilient embeddings.

We conduct extensive experiments on 4 real-world traffic datasets and compare STPT with several state-of-the-art baselines in terms of RMSE, MAE, and MAPE. We find that STPT improves performance by around 10-34% over the baselines. We also investigate the applicability of the model to spatio-temporal data in other domains: using a COVID-19 dataset, we predict the number of future occurrences in different regions from a given set of historical occurrences. The results demonstrate the superiority of our model on such datasets as well.
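To make the "parallel branches over multiple adjacency graphs" idea concrete, here is a minimal NumPy sketch (not the authors' implementation, and omitting the temporal component entirely): each adjacency graph feeds a graph-convolution step coupled with a spatial self-attention step, and the parallel branch outputs are fused into one node embedding. All function names, dimensions, and the averaging fusion are illustrative assumptions.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in standard GCNs
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def graph_conv(A, X, W):
    # One graph-convolution layer over adjacency A, with ReLU
    return np.maximum(normalize_adj(A) @ X @ W, 0.0)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product attention across the node (spatial) dimension
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n_nodes, d_in, d_hid = 5, 8, 16
X = rng.normal(size=(n_nodes, d_in))   # node features, e.g. sensor readings

# Two hypothetical adjacency graphs over the same road-network nodes
adjs = [rng.integers(0, 2, size=(n_nodes, n_nodes)) for _ in range(2)]
adjs = [np.maximum(A, A.T) for A in adjs]  # make each graph undirected

W = rng.normal(size=(d_in, d_hid))
Wq = Wk = Wv = rng.normal(size=(d_hid, d_hid))

# Parallel branches: one coupled convolution + attention unit per graph
branches = [self_attention(graph_conv(A, X, W), Wq, Wk, Wv) for A in adjs]
embedding = np.mean(branches, axis=0)  # fuse branch outputs
print(embedding.shape)  # (5, 16)
```

Because each branch sees a different view of the spatial structure, noise that corrupts one adjacency graph (e.g. during an accident) is dampened when the branch outputs are fused.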

Why is it important?

Governments need accurate forecasts of traffic flow and congestion on roads and in cities in order to frame effective traffic-management rules.

Read the Original

This page is a summary of: Spatio-Temporal Parallel Transformer based model for Traffic Prediction, ACM Transactions on Knowledge Discovery from Data, July 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3679017.
