Ten simple rules for reporting machine learning methods implementation and evaluation on biomedical data

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

There is a huge discrepancy in how researchers implement, evaluate, and report the performance of machine learning methods for classification or segmentation of biomedical data. Poor reporting and inadequate inference are, unfortunately, all too common in the current literature. More specifically, vague aims and scope, missing details about the data, ambiguous preprocessing procedures, lack of clarity regarding a method's implementation, poor validation and testing, invalid comparisons between methods, and the absence of a clear rationale for the choice of performance metrics make it difficult to draw sound conclusions from many studies in the field. This report suggests 10 guidelines and principles that should be followed when reporting the implementation of a method and the evaluation of its performance, in order to make a study transparent, interpretable, replicable, and useful. All stages of data processing and performance evaluation should be clearly described, and parameter and metric choices must be justified, so that readers can appreciate the performance of the method or compare it with other relevant methods. We feel that these guidelines are important for clear scientific communication in the field of biomedical data processing.

Original language: British English
Pages (from-to): 5-11
Number of pages: 7
Journal: International Journal of Imaging Systems and Technology
Volume: 32
Issue number: 1
State: Published - Jan 2022
