Explainability techniques applied to road traffic forecasting using Graph Neural Network models
Javier García-Sigüenza, Faraón Llorens-Largo, Leandro Tortosa and José F. Vicent
Information Sciences
Volume 645, October 2023, 119320
doi: 10.1016/j.ins.2023.119320
Available online 16 June 2023
https://www.sciencedirect.com/science/article/pii/S0020025523009052
Abstract
In recent years, several new Artificial Intelligence methods have been developed to make models more explainable and interpretable. These techniques aim to bring transparency and traceability to black-box machine learning methods, where black box refers to the inability to explain why a model turns a given input into a given output, which can be problematic in some fields. To overcome this problem, our approach combines predictive and explainability techniques. First, we compared statistical regression, classic machine learning, and deep learning models, concluding that deep learning models achieve greater accuracy. Among the wide variety of deep learning models, the Adaptive Graph Convolutional Recurrent Network proved to be the best predictor on spatio-temporal traffic datasets. As for the explainability technique, GraphMask achieves a notably higher fidelity metric than other methods. We tested the integration of both techniques experimentally, concluding that our approach improves deep learning model accuracy while making such models more transparent and interpretable. It allows us to discard up to 95% of the nodes used, facilitating an analysis of the model's behavior and thus improving our understanding of it.
Keywords: Graph neural networks, deep learning, data analysis, explainability, traffic flow
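
The GraphMask technique cited in the abstract learns which graph connections a trained GNN actually relies on, so the remainder can be discarded without changing the model's predictions. Below is a minimal sketch of that idea in PyTorch. It simplifies the published method, which trains an amortized gate predictor with hard-concrete gates over each layer's messages; here the gates are free per-edge parameters with a sigmoid relaxation, and all names (TinyGCN, train_edge_mask) are hypothetical, not taken from the paper's code.

# Minimal GraphMask-style edge masking sketch (simplified; names are illustrative).
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    """A toy one-layer message-passing model standing in for the trained predictor."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, edge_index, edge_gate=None):
        src, dst = edge_index                      # edges as (source, target) index rows
        msg = self.lin(x)[src]                     # message contributed by each source node
        if edge_gate is not None:                  # GraphMask idea: scale each message by its gate
            msg = msg * edge_gate.unsqueeze(-1)
        out = torch.zeros_like(x)
        out.index_add_(0, dst, msg)                # sum incoming messages per target node
        return out

def train_edge_mask(model, x, edge_index, steps=200, lam=0.05):
    """Learn per-edge gates that keep predictions close to the frozen model's
    output while pushing as many gates as possible toward zero (sparsity)."""
    model.requires_grad_(False)                    # the predictor itself stays frozen
    with torch.no_grad():
        target = model(x, edge_index)              # original, unmasked predictions
    gate_logits = nn.Parameter(torch.zeros(edge_index.shape[1]))
    opt = torch.optim.Adam([gate_logits], lr=0.05)
    for _ in range(steps):
        gate = torch.sigmoid(gate_logits)          # soft relaxation of binary gates
        pred = model(x, edge_index, edge_gate=gate)
        # Fidelity term (match the original output) plus a sparsity penalty.
        loss = nn.functional.mse_loss(pred, target) + lam * gate.mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(gate_logits) > 0.5        # binarize: edges the model relies on

# Usage: mask a random toy graph and report how many edges can be discarded.
x = torch.randn(10, 8)                             # 10 nodes, 8 features each
edge_index = torch.randint(0, 10, (2, 40))         # 40 random directed edges
keep = train_edge_mask(TinyGCN(8), x, edge_index)
print(f"edges kept: {keep.sum().item()} / {keep.numel()}")

The fidelity metric mentioned in the abstract corresponds to how well the masked model's predictions track the unmasked ones; the sparsity weight lam trades that fidelity against the fraction of the graph that can be dropped.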