Document details

Interpretability of a deep learning model for rodents brain semantic segmentation

Author(s): Matos, Leonardo Nogueira; Rodrigues, Mariana Fontainhas; Magalhães, Ricardo José Silva; Alves, Victor; Novais, Paulo

Date: 2019

Persistent ID: http://hdl.handle.net/1822/67308

Origin: RepositóriUM - Universidade do Minho

Subject(s): Deep Learning; Magnetic Resonance Imaging; Interpretability; Deep Learning Model; Engineering and Technology::Medical Engineering


Description

In recent years, as machine learning research has turned into real products and applications, some of them critical, it has become clear that other model evaluation mechanisms are needed. Commonly used metrics such as accuracy or the F-measure are no longer sufficient at the deployment phase. This has fostered the emergence of methods for model interpretability. In this work, we discuss an approach to improving a model's predictions by interpreting what it has learned and using that knowledge in a second phase. As a case study, we use the semantic segmentation of rodent brain tissue in Magnetic Resonance Imaging. By analogy with the human visual system, the experiment provides a way to draw more in-depth conclusions about a scene by carefully observing what attracts the most attention after a cursory first glance.
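The record does not include the paper itself, but the two-phase idea the abstract describes can be illustrated in code. Below is a minimal PyTorch sketch, assuming a gradient-saliency map as the interpretability step and a small fully convolutional network as a stand-in segmenter; every name here (TinySegNet, saliency_map) is hypothetical and not taken from the paper, whose actual architecture and interpretability method are not specified in this record.

import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in fully convolutional segmenter (hypothetical)."""
    def __init__(self, in_channels: int, n_classes: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def saliency_map(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Vanilla gradient saliency: how much each pixel influences the
    first-pass prediction (one common interpretability choice; the
    paper may use a different method)."""
    image = image.clone().requires_grad_(True)
    logits = model(image)
    # Back-propagate the summed winning-class scores to the input.
    logits.max(dim=1).values.sum().backward()
    return image.grad.abs().amax(dim=1, keepdim=True)  # (N, 1, H, W)

# Phase 1: a quick "first glance" segmentation of an MRI slice.
mri = torch.randn(1, 1, 64, 64)          # dummy single-channel slice
first_pass = TinySegNet(in_channels=1, n_classes=4)
coarse = first_pass(mri)

# Interpret what the first pass attended to.
attention = saliency_map(first_pass, mri)

# Phase 2: a second model re-examines the scene, with the attention
# map concatenated as an extra input channel.
second_pass = TinySegNet(in_channels=2, n_classes=4)
refined = second_pass(torch.cat([mri, attention], dim=1))
print(refined.shape)  # torch.Size([1, 4, 64, 64])

The sketch mirrors the abstract's analogy: the saliency map plays the role of what attracted attention at first glance, and the second network re-examines the scene with that signal as an additional input.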

This work was supported by FCT - Fundação para a Ciência e Tecnologia within the Project Scope UID/CEC/00319/2019. We gratefully acknowledge the support of NVIDIA Corporation through its donation of the Titan V board used in this research.

Document Type: Conference paper
Language: English
Contributor(s): Universidade do Minho
