Shapley Values – Explainable Machine Learning (Explainable Artificial Intelligence) – Dr. Andreas Joseph, Research Economist at Bank of England

We propose a generic workflow for using machine learning models to inform decision making and to communicate modelling results to stakeholders. It involves three steps: a comparative model evaluation, a decomposition of predicted values into feature contributions, and statistical inference on feature attributions. We use this workflow to forecast US unemployment one year ahead on a monthly dataset and find that universal function approximators, including random forests and neural networks, outperform conventional models. This better performance is associated with their greater flexibility in accounting for time-varying and nonlinear relationships in the data-generating process. We use Shapley values to explain the predictions of the machine learning models and to identify the economically meaningful nonlinearities they learn, which allows nuanced interpretations of model workings. Shapley regressions for statistical inference on machine learning models let us assess and communicate variable importance in a manner akin to conventional econometric approaches.
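To make the decomposition step concrete, the sketch below computes exact Shapley values for a single prediction by enumerating all feature coalitions, replacing "absent" features with their background (training-set) means. This is a minimal illustration, not the paper's implementation: the toy linear model, the background-mean replacement, and all variable names are assumptions chosen so the result can be checked against the known closed form.

```python
from itertools import combinations
from math import factorial

import numpy as np

def shapley_values(f, x, background):
    """Exact Shapley values of prediction f(x) by coalition enumeration.

    Absent features are set to their background means -- one common
    (assumed) way to approximate 'removing' a feature.
    """
    n = len(x)
    base = background.mean(axis=0)  # reference point for missing features
    phi = np.zeros(n)

    def value(subset):
        # Evaluate f with features in `subset` taken from x, rest from base.
        z = base.copy()
        for j in subset:
            z[j] = x[j]
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight |S|! (n-|S|-1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

# Toy check on a linear model, where the Shapley value of feature i is
# beta_i * (x_i - mean_i) and the attributions sum to f(x) - f(mean).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta = np.array([2.0, -1.0, 0.5])
f = lambda z: z @ beta

x = X[0]
phi = shapley_values(f, x, X)
print(phi)        # matches beta * (x - X.mean(axis=0))
print(phi.sum())  # matches f(x) - f(X.mean(axis=0)) (efficiency property)
```

The enumeration costs 2^n model evaluations per feature, so it only works for small feature sets; for the monthly macro dataset in the paper, sampling-based approximations of the same formula would be the practical route.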


More information can be found here.