Deep Learning Interpretability

For Software Engineering

According to Molnar, we can envision five different purposes (or, in practical terms, “applications”) of Machine Learning Interpretability. It is necessary to distinguish between these purposes, since each may require a different technique or approach to address it. These common “applications” are:

  1. Debugging a model
  2. Making stakeholders trust the model
  3. Auditing
  4. Offering recourse
  5. Generating insights (explanations)

We can extend these applications to the intersection of deep learning and software engineering (DL4SE). The goal of this blog post is to show how interpretability is useful for deep learning models in SE and to identify the most practical scenarios in which we can apply it.
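
To make the debugging and insight-generation scenarios concrete, here is a minimal, hypothetical sketch of occlusion-based attribution for a DL4SE model (for example, a defect predictor over source-code tokens). The predict function, the mask token, and the code tokens below are illustrative stand-ins, not part of any particular library or dataset; the idea is simply to mask one token at a time and measure how much the model's prediction changes.

from typing import Callable, List

def occlusion_attribution(
    predict: Callable[[List[str]], float],  # e.g., returns P(defective) for a token sequence
    tokens: List[str],
    mask_token: str = "<mask>",
) -> List[float]:
    """Score each token by how much masking it changes the model's prediction."""
    baseline = predict(tokens)
    scores = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + [mask_token] + tokens[i + 1:]
        # A large drop relative to the baseline marks an influential token.
        scores.append(baseline - predict(occluded))
    return scores

# Toy usage with a dummy predictor that flags any snippet containing `eval`.
def dummy_predict(tokens: List[str]) -> float:
    return 0.9 if "eval" in tokens else 0.1

code_tokens = ["result", "=", "eval", "(", "user_input", ")"]
print(occlusion_attribution(dummy_predict, code_tokens))

In practice, the same loop could wrap a real neural code classifier; the resulting per-token scores can then be used either to debug spurious behavior or to generate insights about what the model has learned.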

First Application: Debugging a Model

Second Application: Trusting a Model

Third Application: Auditing

Fourth Application: Offering Recourse

Fifth Application: Generating Insights (Explanations)

Citation

@misc{palacio2023dl4seinterpretability,
    title={Deep Learning Interpretability for SE},
    author={David N. Palacio},
    year={2023},
    archivePrefix={arXiv},
    primaryClass={cs.SE}
}