Automatic Detection and Segmentation of Robot-Assisted Surgical Motions
Henry C. Lin, Dr. Izhak Shafran, Todd E. Murphy, Dr. David D. Yuh, Dr. Allison M. Okamura, Dr. Gregory D. Hager
Presented by Henry C. Lin
Motivation
[Two videos shown side by side: expert surgeon vs. intermediate surgeon]
Can we quantitatively and objectively determine which video shows an expert surgeon and which shows an intermediate surgeon? Can we automatically detect and segment the surgical motions common to both videos?
Cartesian Position Plots - Left Manipulator
[Figure: expert surgeon (trial 4) vs. intermediate surgeon (trial 22), with segments annotated "Pull suture with left hand" and "Move to middle with needle"]
Previous Work • Darzi, et al. Imperial College Surgical Assessment Device (ICSAD) quantified motion information by tracking electromagnetic markers on a trainee’s hands. • Rosen, et al. Used force/torque data from laparoscopic trainers to create a hidden Markov model task decomposition specific to each surgeon.
Goals • Train LDA-based statistical models with labeled motion data of an expert surgeon and an intermediate surgeon. • Be able to accurately parse unlabeled raw motion data into a labeled sequence of surgical gestures in an automatic and efficient way. • Ultimately create evaluation metrics to benchmark surgical skill.
Corpus • 78 motion variables acquired at 10 Hz (we use 72 of them) • 4-throw suturing task • 15 expert trials, 12 intermediate trials • each trial roughly 60 seconds in length
Gesture Vocabulary 1. Reach for needle 2. Position needle 3. Insert and push needle through tissue 4. Move to middle with needle (left hand) 5. Move to middle with needle (right hand) 6. Pull suture with left hand 7. Pull suture with right hand 8. Orient needle with both hands
Probabilistic Models for Surgical Motions: System Approach
API signals X(t) = (X(1,t), …, X(78,t)), of which 72 variables are used → Local Feature Extraction → L(t) → Feature Normalization → N(t) → Linear Discriminant Analysis → Y(t) → Probabilistic (Bayes) Classifier, using P(Y(t)|C) → C(t)
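The final stage of the pipeline, the Probabilistic (Bayes) Classifier, assigns each reduced feature vector Y(t) to a gesture class C. The slides do not specify the class-conditional model, so the sketch below assumes full-covariance Gaussians per class with empirical priors; the class names and regularization constant are illustrative.

```python
import numpy as np

# Hedged sketch of a Gaussian Bayes classifier over LDA-reduced features Y(t).
# Assumption: one full-covariance Gaussian per gesture class, empirical priors.
class GaussianBayes:
    def fit(self, Y, labels):
        self.classes = np.unique(labels)
        self.means, self.covs, self.priors = {}, {}, {}
        for c in self.classes:
            Yc = Y[labels == c]
            self.means[c] = Yc.mean(axis=0)
            # small ridge keeps the covariance invertible
            self.covs[c] = np.cov(Yc, rowvar=False) + 1e-6 * np.eye(Y.shape[1])
            self.priors[c] = len(Yc) / len(Y)
        return self

    def log_posterior(self, y, c):
        d = y - self.means[c]
        cov = self.covs[c]
        return (-0.5 * d @ np.linalg.solve(cov, d)
                - 0.5 * np.linalg.slogdet(cov)[1]
                + np.log(self.priors[c]))

    def predict(self, Y):
        # pick the class with maximum posterior P(C|Y(t))
        return np.array([max(self.classes,
                             key=lambda c: self.log_posterior(y, c))
                         for y in Y])
```

Each sample is then labeled with the gesture class maximizing the posterior, which is what turns the continuous motion stream into a labeled gesture sequence.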
Local Feature Extraction
L(kt) = [ X(kt−m), …, X(kt−1), X(kt), X(kt+1), …, X(kt+m) ]
|L(kt)| = (2m+1) · |X(kt)|
Example: m = 5, |X(kt)| = 72, so |L| = 11 · 72 = 792
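The windowing step above can be sketched as follows. Assumptions not in the slides: samples arrive as a NumPy array of shape (T, 72), and trial boundaries are handled by edge-padding.

```python
import numpy as np

def local_features(X, m=5):
    """Concatenate each sample X(kt) with its m neighbors on each side,
    giving L(kt) of dimension (2m+1) * |X(kt)|."""
    T, d = X.shape
    # assumption: repeat the first/last sample at the trial boundaries
    Xp = np.pad(X, ((m, m), (0, 0)), mode="edge")
    return np.hstack([Xp[i:i + T] for i in range(2 * m + 1)])

# With m = 5 and 72 motion variables: |L| = 11 * 72 = 792
X = np.zeros((100, 72))
L = local_features(X, m=5)
print(L.shape)  # (100, 792)
```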
Probabilistic Models for Surgical Motions API signals API signals P(Y(t)|C) X(1,t) X(1,t) Local Feature Extraction Local Feature Extraction Feature Normalization Feature Normalization Linear Discriminant Analysis Linear Discriminant Analysis Probabilistic (Bayes) Classifier X(t) X(t) L(t) L(t) N(t) N(t) Y(t) C(t) X(72,t) X(78,t) System Approach
Linear Discriminant Analysis
[Figure: 2-D scatter of two classes in (x1, x2), projected onto a discriminating direction]
The objective of LDA is to perform dimensionality reduction while preserving as much of the class discriminatory information as possible.
Linear Discriminant Analysis
Class-labeled motion data → LDA → reduced-dimension motion data (at the expected reduced output dimension), where the linear transformation matrix W is estimated by maximizing the Fisher discriminant: the ratio of the distance between the classes to the average variance of each class.
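Maximizing the Fisher discriminant can be sketched as the standard scatter-matrix construction below. The ridge term and the choice of 3 output dimensions (matching the plots that follow) are assumptions; the slides do not give the estimation procedure in detail.

```python
import numpy as np

def fit_lda(N, labels, out_dim=3):
    """Estimate W maximizing the Fisher discriminant: the ratio of
    between-class scatter Sb to within-class scatter Sw."""
    classes = np.unique(labels)
    mu = N.mean(axis=0)
    d = N.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Nc = N[labels == c]
        mc = Nc.mean(axis=0)
        Sw += (Nc - mc).T @ (Nc - mc)
        diff = (mc - mu)[:, None]
        Sb += len(Nc) * diff @ diff.T
    # top eigenvectors of Sw^{-1} Sb span the discriminating subspace
    # (small ridge on Sw is an assumption, for numerical stability)
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order[:out_dim]].real

# Projection: Y(t) = W^T N(t), i.e. Y = N @ W for row-vector samples
```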
LDA Reduction (6 Labeled Classes, 3 Dimensions): Expert Surgeon
LDA Reduction (6 Labeled Classes, 3 Dimensions): Intermediate Surgeon
Storage Savings of LDA
For a 10-minute procedure (6000 input samples)
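The storage saving can be illustrated with back-of-the-envelope arithmetic. The sample count comes from the slides (10 minutes at 10 Hz); the 3-dimensional reduction matches the earlier plots, and 32-bit floats are an assumption.

```python
samples = 6000        # 10 min at 10 Hz, as stated on the slide
raw_dims = 72         # motion variables used per sample
lda_dims = 3          # assumption: the 3-D reduction shown in the plots
bytes_per = 4         # assumption: 32-bit floats

raw = samples * raw_dims * bytes_per
reduced = samples * lda_dims * bytes_per
print(raw, reduced, raw / reduced)  # 1728000 72000 24.0
```

Under these assumptions the reduced representation is 24x smaller than the raw stream.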
Results • A 'leave 2 out' cross-validation paradigm was used: 15 expert trials, 15 rounds, each with a training set of 13 trials and a test set of 2. • The output for the 2 test trials was compared against the manually labeled data. • The average across the 15 rounds was used to measure performance.
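One way to realize the 15 leave-2-out rounds described above is sketched below. The slides do not say which 2 trials form each test set, so the cyclic consecutive-pair scheme here is an assumption.

```python
def leave_two_out_rounds(n_trials=15, n_rounds=15):
    """Yield (train, test) trial-index lists; each round holds out 2 trials.
    Assumption: test pairs are consecutive trials, chosen cyclically."""
    trials = list(range(n_trials))
    for r in range(n_rounds):
        test = [trials[r % n_trials], trials[(r + 1) % n_trials]]
        train = [t for t in trials if t not in test]
        yield train, test

for train, test in leave_two_out_rounds():
    assert len(train) == 13 and len(test) == 2
```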
Contributions • An automated and space-efficient method to accurately parse raw motion data into a labeled sequence of surgical motions. • Linear discriminant analysis is a useful tool for separating surgical motions. • Results support previous findings that quantifiable differences exist between surgeons of varying skill levels.
Future Work • We are currently acquiring synchronized stereo video and API data, which will allow vision-based segmentation methods to complement our statistical methods. • Apply the method to a larger set of expert surgeons and to other representative surgical tasks. • Create performance metrics to be used as benchmarks for surgical skill evaluation.
Acknowledgements • Minimally Invasive Surgical Training Center at the Johns Hopkins Medical School (MISTC-JHU) - Dr. Randy Brown, Sue Eller • Intuitive Surgical Inc. - Chris Hasser, Rajesh Kumar • National Science Foundation
Thank you! Any questions? Automatic Detection and Segmentation of Robot-Assisted Surgical Motions Henry C. Lin, Dr. Izhak Shafran, Todd E. Murphy, Dr. David D. Yuh, Dr. Allison M. Okamura, Dr. Gregory D. Hager