Precision oncology: a review to assess interpretability in several explainable methods
Keywords: 
Assignment problem
Drug recommendation
Explainable artificial intelligence
Interpretability
Machine learning
Method comparison
Precision medicine
Issue Date: 
2023
Publisher: 
Oxford University Press
Project: 
info:eu-repo/grantAgreement/AEI/Proyectos I+D/PID2019-110344RB-I00/[ES]/NUEVA APROXIMACION COMPUTACIONAL PARA PREDECIR LETALIDAD SINTETICA EN CANCER
ISSN: 
1477-4054
Note: 
This is an Open Access article distributed under the terms of the Creative Commons Attribution License
Citation: 
Gimeno, M. (Marian); Sada-del-Real, K. (Katyna); Rubio, A. (Ángel). "Precision oncology: a review to assess interpretability in several explainable methods". Briefings in Bioinformatics, 24 (4), 2023.
Abstract
Great efforts have been made to develop precision medicine-based treatments using machine learning. In this field, where the goal is to provide the optimal treatment for each patient based on their medical history and genomic characteristics, it is not sufficient to make excellent predictions. The challenge is to understand and trust the model's decisions while also being able to implement it easily. However, one of the issues with machine learning algorithms, particularly deep learning, is their lack of interpretability. This review compares six machine learning methods to provide guidance for defining interpretability, focusing on accuracy, multi-omics capability, explainability and implementability. Our selection of algorithms includes tree-, regression- and kernel-based methods, chosen for their ease of interpretation by the clinician. We also included two novel explainable methods in the comparison. No significant differences in accuracy were observed across the methods, but an improvement was observed when gene expression, rather than mutational status, was used as input. We concentrated on the current intriguing challenge: model comprehension and ease of use. Our comparison suggests that the tree-based methods are the most interpretable of those tested.

Files in This Item:
File: bbad200.pdf
Size: 2.1 MB
Format: Adobe PDF



Items in Dadun are protected by copyright, with all rights reserved, unless otherwise indicated.