
Identifying Fractures in BioSense Radiology Reports






Presentation Transcript


    1. Identifying Fractures in BioSense Radiology Reports
    Achintya N. Dey, MA; Haobo Ma, MD, MS; Armen Asatryan, MD, MPH; Roseanne English, BS; and Jerome Tokars, MD, MPH
    National Center for Public Health Informatics, Centers for Disease Control and Prevention, Department of Health and Human Services
    The findings and conclusions in this presentation are those of the author(s) and do not necessarily represent the views of the Centers for Disease Control and Prevention.
    Today I'm presenting our preliminary research on identifying fractures in radiology reports in the BioSense system. Dr. Haobo Ma, Dr. Armen Asatryan, Roseanne English, and Dr. Jerry Tokars are my co-authors.

    2. The BioSense System
    BioSense is a national program developed by the Centers for Disease Control and Prevention. It provides real-time biosurveillance and health situational awareness. The system receives data from >370 non-federal hospitals and >1,100 ambulatory care, Department of Defense (DoD), and Veterans Affairs (VA) medical facilities. Some hospitals send text radiology reports (n=42), microbiological laboratory reports (n=30), and pharmaceutical receipts (n=31).
    Data for this study came from BioSense, a national program developed by the Centers for Disease Control and Prevention. BioSense provides real-time biosurveillance and health situational awareness for public health through use of existing data from healthcare organizations. The system receives both chief complaint and diagnosis data from over 360 hospitals. In addition to chief complaint and diagnosis data, some hospitals send text radiology reports, microbiological laboratory reports, and pharmaceutical receipts.

    3. Background
    Since 2003, BioSense has tracked 11 syndromes potentially associated with infections and bioterrorism. Since 2007, BioSense has also tracked 78 sub-syndromes, 11 of which represent injuries, including fracture. Fractures can be identified from chief complaints, ICD-9 coded diagnoses, or text radiology reports.

    4. Background (cont.)
    Chief complaint data have limited accuracy. Final ICD-9 coded diagnoses are often not available for 1-3 weeks. Text radiology reports are available in some hospital systems within 1-2 days. In public health emergencies, a rapid and valid way to identify fractures has potential utility.
    Previous research has shown that chief complaint data have limited accuracy (ref: Wendy Chapman and John Dowling. Can Chief Complaint Identify Patients with Febrile Syndromes? Advances in Disease Surveillance 2007; 3:6). Final ICD-9 coded diagnoses often take 1-3 weeks, whereas text radiology reports, which are available in some hospitals within 1-2 days, may provide a rapid and valid way to identify fractures.

    5. Objectives
    Develop and evaluate a keyword-based text parsing algorithm for identifying fractures in radiology text reports. Compare fractures identified by radiology text reports, chief complaints, and final ICD-9 diagnoses. Determine the number of fractures identified in radiology text reports.
    The objectives are: first, to develop and evaluate a keyword-based text parsing algorithm for identifying fractures in radiology reports; then to compare the radiology text results with chief complaints and ICD-9 diagnoses; and finally to determine the number of fractures identified in radiology text reports.

    6. Methods
    Studied 16,940 visits with skeletal films and final ICD-9 coded diagnoses from 13 hospitals. Study period: March 1, 2006 to December 31, 2006. ICD-9 diagnosis codes 800-829 indicated fracture. Demographic characteristics of patients with and without fracture diagnoses were compared. Chief complaints and final diagnoses were compared with text parsing results.
    We studied 16,940 visits with skeletal films and final ICD-9 coded diagnoses from 13 hospitals during March 1, 2006 through December 31, 2006. ICD-9 codes 800 through 829 indicate fracture. Next, we compared the demographic characteristics of patients with and without fracture diagnoses. Finally, we compared the text parsing results with chief complaints and final diagnoses.
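
The case definition on slide 6 is a simple code-range rule: a final ICD-9 diagnosis in 800-829 counts as a fracture. A minimal sketch of that rule (not the authors' code; the assumption here is that diagnosis codes arrive as strings such as "813.42"):

```python
def is_fracture_dx(icd9_code: str) -> bool:
    """Return True if the ICD-9 category (the part before the decimal)
    falls in 800-829, the fracture range cited on the slide."""
    try:
        category = int(icd9_code.strip().split(".")[0])
    except ValueError:
        return False  # E-codes, V-codes, or malformed values are not in 800-829
    return 800 <= category <= 829

# Example: is_fracture_dx("813.42") -> True; is_fracture_dx("845.00") -> False
```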

    7. Methods (cont.)
    Developed a keyword-based text parsing program using SAS to identify fractures in radiology text reports. Used words such as "fracture" and "fx" to find fractures. Used negation words and phrases such as "no evidence," "absence," "fails to reveal," "negative," "healed," "old," and "lack of" to eliminate readings indicating no fracture.
    To identify fractures from text radiology reports, we developed a keyword-based text parsing program using SAS software. We used words such as "fracture" and "fx" to find FRACTURE, and used negation words such as "no evidence," "absence," "fails to reveal," "negative," "healed," and "lack of" to eliminate readings indicating NO FRACTURE.
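
The SAS program itself is not shown in the presentation, so the following Python sketch only illustrates the kind of keyword/negation matching described on this slide. The term lists come from the slide; the lower-casing, whole-word matching, and report-level negation are assumptions about the implementation.

```python
import re

# Keywords and negation phrases listed on slide 7.
FRACTURE_TERMS = [r"\bfracture\b", r"\bfx\b"]
NEGATION_TERMS = [
    r"\bno evidence\b", r"\babsence\b", r"\bfails to reveal\b",
    r"\bnegative\b", r"\bhealed\b", r"\bold\b", r"\black of\b",
]

def classify_report(text: str) -> str:
    """Code a radiology report as FRACTURE if it mentions a fracture term
    and contains none of the negation phrases; otherwise NO FRACTURE."""
    lowered = text.lower()
    mentions_fracture = any(re.search(p, lowered) for p in FRACTURE_TERMS)
    negated = any(re.search(p, lowered) for p in NEGATION_TERMS)
    return "FRACTURE" if mentions_fracture and not negated else "NO FRACTURE"
```

Note that applying negation to the whole report, as in this sketch, is crude: a word like "negative" attached to an unrelated finding would suppress a true fracture mention, which is one plausible source of the parsing errors discussed later.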

    8. Sample Radiology Text #1
    History: Trauma and right ankle pain.
    Findings: Frontal, oblique and lateral images of the right ankle demonstrate a complete oblique fracture of the distal fibula with 3 mm lateral displacement of the distal fragment. There is a tiny chip fracture of the anterior distal tibia seen on the lateral image.
    IMPRESSION: 1. Complete oblique fracture of the distal fibula with lateral displacement approximately 3 mm. 2. Tiny chip fracture of the anterior distal tibia seen on the lateral image.
    This is an example of a text radiology report indicating fracture.

    9. Sample Radiology Text #2
    History: Fall.
    Findings: Two views of the right forearm show normal alignment. Bone density is slightly diminished, suggesting osteoporosis. There are mild degenerative changes at the elbow and wrist. There is no evidence of fracture.
    IMPRESSION: Negative right forearm x-ray. There is no evidence of fracture or joint effusion.
    This is an example of a text radiology report indicating no fracture.
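
Applied to the two sample reports on slides 8 and 9, a rule like the sketch after slide 7 would label report #1 FRACTURE (it contains "fracture" with no negation phrase) and report #2 NO FRACTURE (it contains "no evidence"). A hypothetical usage example, with the report text abbreviated:

```python
report_1 = ("Frontal, oblique and lateral images of the right ankle demonstrate "
            "a complete oblique fracture of the distal fibula ...")
report_2 = ("Two views of the right forearm show normal alignment. ... "
            "There is no evidence of fracture.")

print(classify_report(report_1))  # FRACTURE
print(classify_report(report_2))  # NO FRACTURE
```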

    10. Methods (cont.)
    Final results were coded as fracture or no fracture. To validate the text parsing, 100 randomly selected radiology reports with fracture and 100 without fracture were reviewed by a human reader.
    Final results were coded as fracture or no fracture. To validate this algorithm, we randomly selected 100 radiology reports with fracture and 100 without fracture. These 200 radiology reports were then reviewed by a human reader.

    11. Sample Size Selection
    This chart shows how we selected our sample. In the next few slides I'll present our results.

    12. Results: Patient Class Among All Patients Having Skeletal X-Rays (N=16,940)
    Of all visits (17,647), 73% were emergency department, 19% were inpatients, and 8% were outpatients.

    13. Mean Age of Patients, by Final Diagnosis of Fracture and Sex
    This figure shows the mean age of patients by final diagnosis of fracture and sex. The average age for women with fracture was higher than the average age for men (55 years vs. 37 years).

    14. Most Common Fracture Sites Identified from Radiology Text Reports (N=3,891)
    This figure shows the common fracture sites. The five most common sites of fracture were hand (12%), wrist (11%), ankle (10%), foot (10%), and finger (9%). The "others" category includes: hip (7%), shoulder (4%), forearm (4%), spine (4%), elbow (4%), knee (4%), ribs (3%), tibia (3%), nasal (2%), toe (2%), clavicle (2%), humerus (2%), pelvis (2%), facial (1%), infant lower extremities (0.5%), abdomen (0.5%), coccyx (0.2%), heel (0.3%), sacrum (0.2%), scapula (0.1%), and skull (0.03%).

    15. Fracture Identification by ICD-9 Diagnosis and Chief Complaint in the Fracture and No Fracture Sample (N=200)
    This table compares ICD-9 coded diagnoses with chief complaints. In only 12 cases did both the ICD-9 diagnosis and the chief complaint indicate a fracture. In 81 cases the chief complaint data showed no fracture but the ICD-9 diagnosis showed a fracture. The kappa was 0.14.

    16. Validation of Text Parsing Method in the Fracture (N=100) and No Fracture (N=100) Sample
    This table compares the results of text parsing with human review. Among the 100 records with fracture identified by text parsing, human review found evidence of fracture in 91, no fracture in 2, and uncertain results in 7. Among the 100 records with no fracture identified by text parsing, human review showed fracture in 2 and no fracture in 98. The text parsing algorithm had 0.98 sensitivity and 0.98 specificity compared with the gold standard of human review.
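
The slide does not say how the 7 uncertain human readings were handled; one way to reproduce the reported 0.98/0.98 (an assumption) is to exclude them and treat human review as the gold standard:

```python
# Counts from slide 16, with the 7 uncertain readings excluded (assumption).
tp, fn = 91, 2   # human review fracture: parsing said fracture / no fracture
tn, fp = 98, 2   # human review no fracture: parsing said no fracture / fracture

sensitivity = tp / (tp + fn)   # 91 / 93 = 0.978
specificity = tn / (tn + fp)   # 98 / 100 = 0.98
print(round(sensitivity, 2), round(specificity, 2))  # 0.98 0.98
```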

    17. Fracture Identification by Text Parsing and ICD-9 Diagnosis in the Fracture (N=100) and No Fracture (N=100) Sample
    This table compares text parsing with ICD-9 coded diagnoses. Among the 100 records with fracture found by text parsing, 85 had a diagnosis of fracture; among the 100 with no fracture found by text parsing, 8 had a diagnosis of fracture (kappa=0.77). The ICD-9 codes for those 15 cases were mostly open wound (873, 883, 891), sprain and strain (841, 842, 845, 847), and dislocation (836).

    18. Fracture Identification by Text Parsing and Final ICD-9 Diagnosis Among All Records
    This table compares text parsing with ICD-9 coded diagnoses for all records. Among the 3,891 records with fracture found by text parsing, 3,132 had a diagnosis of fracture; among the 13,049 with no fracture found by text parsing, 850 had a diagnosis of fracture (kappa=0.73).
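
For reference, the kappa on this slide can be reconstructed from the counts given (3,891 parsing-positive and 13,049 parsing-negative records); a short check using the standard two-category Cohen's kappa formula:

```python
# 2x2 table implied by slide 18: text parsing vs. final ICD-9 diagnosis.
a = 3132            # parsing fracture,    ICD-9 fracture
b = 3891 - 3132     # parsing fracture,    ICD-9 no fracture (759)
c = 850             # parsing no fracture, ICD-9 fracture
d = 13049 - 850     # parsing no fracture, ICD-9 no fracture (12,199)
n = a + b + c + d   # 16,940

p_observed = (a + d) / n
p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 2))  # approximately 0.73, matching the slide
```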

    19. Discussion
    We developed a simple text parsing algorithm that can identify fractures from X-ray text reports with high accuracy. The method has potential usefulness for rapid identification of patients with fractures during public health emergencies. Radiology reports showed good agreement with discharge ICD-9 codes. Disagreement may have occurred because of inaccuracies in the text parsing algorithm, missing radiology or diagnosis data, or problems with accurate linkage of radiology reports with final diagnoses in our electronic system.
    Our results indicate that a simple text parsing algorithm can identify fractures from X-ray reports with acceptable accuracy and has potential usefulness for rapid identification of patients with fractures during public health emergencies. Our results also indicate that radiology reports showed poor agreement with chief complaint data but good agreement with ICD-9 diagnoses. There are several reasons why disagreement may have occurred: inaccuracies in the text parsing algorithm, missing radiology or ICD-9 diagnosis data, or inaccurate linkage of radiology reports with final ICD-9 diagnoses in our electronic system.

    20. Discussion (cont.)
    Earlier studies have shown disagreements between radiographic interpretation by clinicians (whose diagnoses are reflected in ICD-9 codes) and radiologists (whose readings are reviewed by our system). One reason may be that subtle findings of fracture are interpreted differently by clinicians and radiologists. We are currently assessing errors made by the text parsing method in an attempt to improve it. We also plan to assess natural language processing systems that may improve accuracy and extract additional information, such as findings other than fracture and degree of certainty.
    Earlier studies have shown disagreement between radiographic interpretation by clinicians and radiologists. Clinicians' diagnoses are reflected in ICD-9 codes, whereas radiologists' readings are reviewed by our system. One reason for this may be that subtle findings of fracture are interpreted differently by clinicians and radiologists. In order to improve our text parsing algorithm, we are assessing the errors made by our program. We are also planning to use natural language processing systems that may improve accuracy and extract additional information, such as findings other than fracture and degree of certainty.
