Article Details

Title: Incorporating Domain Knowledge for Learning Interpretable Features
Authors: Melchior, Martin
Year: 2022
Volume: 8(2)
Abstract: Deep learning has seen enormous success in recent years. In several application domains, prediction models with remarkable accuracy have been trained, sometimes using large datasets. Often, these models contain huge numbers of parameters and are seen by domain experts as hard-to-understand black boxes, and hence as less valuable or not trustworthy. As a result, we observe a demand for better interpretability in several application domains. This demand can also be seen as arising from the fact that the formulation of the underlying problems is incomplete and certain important aspects are disregarded. Interpretability is required particularly in domains that pose high demands on safety or fairness or, for example, in the natural sciences, where the application of these techniques aims at knowledge discovery. Alternatively, the gap in the problem formulation can be compensated for by incorporating a priori domain knowledge into the model. In this article, we highlight the importance of further advancing the techniques that support interpretability or the mechanisms for incorporating domain knowledge into machine learning approaches. When transferring these techniques to the application domains, close collaboration with domain experts is indispensable.