Surgical skill levels: Classification and analysis using deep neural network model and motion signals

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Background and Objectives: Currently, the assessment of surgical skills relies primarily on the observations of expert surgeons. This may be time-consuming, non-scalable, inconsistent and subjective. Therefore, an automated system that can objectively identify the actual skill level of a junior trainee is highly desirable. This study aims to design an automated surgical skills evaluation system. Methods: We propose to use a deep neural network model that can analyze raw surgical motion data with minimal preprocessing. A platform with inertial measurement unit sensors was developed, and participants with different levels of surgical experience were recruited to perform core open surgical skills tasks. JIGSAWS, a publicly available robot-based surgical training dataset, was used to evaluate the generalization of our deep network model. 15 participants (4 experts, 4 intermediates and 7 novices) were recruited into the study. Results: The proposed deep model achieved an accuracy of 98.2%. In comparison on JIGSAWS, our method outperformed some existing approaches with accuracies of 98.4%, 98.4% and 94.7% for suturing, needle-passing, and knot-tying, respectively. The experimental results demonstrated the applicability of this method in both open surgery and robot-assisted minimally invasive surgery. Conclusions: This study demonstrated the potential ability of the proposed deep network model to learn the discriminative features between different surgical skill levels.
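The abstract does not specify the network architecture, so the following is only a rough, self-contained sketch of the general idea it describes: classifying a raw motion signal into skill levels with learned convolutional filters, pooling, and a linear scoring layer. The filters, class weights, labels, and toy signal below are all invented for illustration and do not reflect the paper's actual model or data.

```python
# Illustrative sketch only: a minimal 1D-CNN-style pipeline for mapping a raw
# motion signal to a skill-level label. All weights here are hypothetical.

def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation over a list of floats."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def global_avg_pool(xs):
    return sum(xs) / len(xs)

def classify(signal, kernels, weights, labels):
    # One pooled feature per convolution kernel, then a linear scoring layer;
    # the highest-scoring class wins.
    features = [global_avg_pool(relu(conv1d(signal, k))) for k in kernels]
    scores = [sum(f * w for f, w in zip(features, ws)) for ws in weights]
    return labels[max(range(len(scores)), key=scores.__getitem__)]

# Toy inputs: a short synthetic signal, two hand-picked filters
# (a difference filter and a smoothing filter), and per-class weight rows.
signal = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0]
kernels = [[1.0, -1.0], [0.25, 0.5, 0.25]]
weights = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
labels = ["novice", "intermediate", "expert"]
print(classify(signal, kernels, weights, labels))  # → "expert"
```

In a trained model the kernels and class weights would be learned from the IMU recordings rather than fixed by hand; this sketch only shows how raw signals can flow through convolution, pooling, and scoring with minimal preprocessing.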

Original language: English
Pages (from-to): 1-8
Number of pages: 8
Journal: Computer Methods and Programs in Biomedicine
Volume: 177
DOI: 10.1016/j.cmpb.2019.05.008
Publication status: Published - 1 Aug 2019

Keywords

  • Deep neural network
  • Hand motion signals
  • Surgical education
  • Surgical skill assessment

Cite this

@article{6a0e7981031a4207a7739ae4128eae77,
title = "Surgical skill levels: Classification and analysis using deep neural network model and motion signals",
abstract = "Background and Objectives: Currently, the assessment of surgical skills relies primarily on the observations of expert surgeons. This may be time-consuming, non-scalable, inconsistent and subjective. Therefore, an automated system that can objectively identify the actual skill level of a junior trainee is highly desirable. This study aims to design an automated surgical skills evaluation system. Methods: We propose to use a deep neural network model that can analyze raw surgical motion data with minimal preprocessing. A platform with inertial measurement unit sensors was developed, and participants with different levels of surgical experience were recruited to perform core open surgical skills tasks. JIGSAWS, a publicly available robot-based surgical training dataset, was used to evaluate the generalization of our deep network model. 15 participants (4 experts, 4 intermediates and 7 novices) were recruited into the study. Results: The proposed deep model achieved an accuracy of 98.2{\%}. In comparison on JIGSAWS, our method outperformed some existing approaches with accuracies of 98.4{\%}, 98.4{\%} and 94.7{\%} for suturing, needle-passing, and knot-tying, respectively. The experimental results demonstrated the applicability of this method in both open surgery and robot-assisted minimally invasive surgery. Conclusions: This study demonstrated the potential ability of the proposed deep network model to learn the discriminative features between different surgical skill levels.",
keywords = "Deep neural network, Hand motion signals, Surgical education, Surgical skill assessment",
author = "Nguyen, {Xuan Anh} and Damir Ljuhar and Maurizio Pacilli and Nataraja, {Ramesh Mark} and Sunita Chauhan",
year = "2019",
month = "8",
day = "1",
doi = "10.1016/j.cmpb.2019.05.008",
language = "English",
volume = "177",
pages = "1--8",
journal = "Computer Methods and Programs in Biomedicine",
issn = "0169-2607",
publisher = "Elsevier",

}

TY - JOUR

T1 - Surgical skill levels

T2 - Classification and analysis using deep neural network model and motion signals

AU - Nguyen, Xuan Anh

AU - Ljuhar, Damir

AU - Pacilli, Maurizio

AU - Nataraja, Ramesh Mark

AU - Chauhan, Sunita

PY - 2019/8/1

Y1 - 2019/8/1

N2 - Background and Objectives: Currently, the assessment of surgical skills relies primarily on the observations of expert surgeons. This may be time-consuming, non-scalable, inconsistent and subjective. Therefore, an automated system that can objectively identify the actual skill level of a junior trainee is highly desirable. This study aims to design an automated surgical skills evaluation system. Methods: We propose to use a deep neural network model that can analyze raw surgical motion data with minimal preprocessing. A platform with inertial measurement unit sensors was developed, and participants with different levels of surgical experience were recruited to perform core open surgical skills tasks. JIGSAWS, a publicly available robot-based surgical training dataset, was used to evaluate the generalization of our deep network model. 15 participants (4 experts, 4 intermediates and 7 novices) were recruited into the study. Results: The proposed deep model achieved an accuracy of 98.2%. In comparison on JIGSAWS, our method outperformed some existing approaches with accuracies of 98.4%, 98.4% and 94.7% for suturing, needle-passing, and knot-tying, respectively. The experimental results demonstrated the applicability of this method in both open surgery and robot-assisted minimally invasive surgery. Conclusions: This study demonstrated the potential ability of the proposed deep network model to learn the discriminative features between different surgical skill levels.

AB - Background and Objectives: Currently, the assessment of surgical skills relies primarily on the observations of expert surgeons. This may be time-consuming, non-scalable, inconsistent and subjective. Therefore, an automated system that can objectively identify the actual skill level of a junior trainee is highly desirable. This study aims to design an automated surgical skills evaluation system. Methods: We propose to use a deep neural network model that can analyze raw surgical motion data with minimal preprocessing. A platform with inertial measurement unit sensors was developed, and participants with different levels of surgical experience were recruited to perform core open surgical skills tasks. JIGSAWS, a publicly available robot-based surgical training dataset, was used to evaluate the generalization of our deep network model. 15 participants (4 experts, 4 intermediates and 7 novices) were recruited into the study. Results: The proposed deep model achieved an accuracy of 98.2%. In comparison on JIGSAWS, our method outperformed some existing approaches with accuracies of 98.4%, 98.4% and 94.7% for suturing, needle-passing, and knot-tying, respectively. The experimental results demonstrated the applicability of this method in both open surgery and robot-assisted minimally invasive surgery. Conclusions: This study demonstrated the potential ability of the proposed deep network model to learn the discriminative features between different surgical skill levels.

KW - Deep neural network

KW - Hand motion signals

KW - Surgical education

KW - Surgical skill assessment

UR - http://www.scopus.com/inward/record.url?scp=85065533786&partnerID=8YFLogxK

U2 - 10.1016/j.cmpb.2019.05.008

DO - 10.1016/j.cmpb.2019.05.008

M3 - Article

VL - 177

SP - 1

EP - 8

JO - Computer Methods and Programs in Biomedicine

JF - Computer Methods and Programs in Biomedicine

SN - 0169-2607

ER -