Augmented Reality Based Body Sound Simulation

Instructed by Dr. Chathura De Silva
PhD (NUS-Singapore), MEng (NTU-Singapore), BSc Eng. (Hons) (Moratuwa)
Department of Computer Science and Engineering, University of Moratuwa
Email: chathura@uom.lk

Presented by S.A.M. Samarawickrama (118229H)
Objective of the research
• Provide a teaching / practicing tool that turns the tedious and time-consuming process of getting used to body sound patterns into a simple and effective one

Proposed solution
• Generating a sound map of the human body
• Simulating the abnormalities that can occur during disease conditions
• Providing interaction between the simulated conditions
Work done so far
• Identification and implementation of the main software modules:
  • Recorder module
  • Trainer module (partial)
• Deciding the architecture of the sound map
• Implementing the sound map and simulating dynamic interaction
• Designing and implementing the case file structure (with maximum customizability)
Recorder module
• Records the sounds heard at the common auscultation points
• Collects the sound name and the associated disease-condition data
• Configures the sound map associated with that particular case
• Each case contains its own configurable sound map (a possible case-file layout is sketched below)
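The slides describe what the recorder module stores per case but not the concrete file format, so the following is only a minimal sketch of what a case file might contain; every field name (case_id, disease_condition, grid, auscultation_sites, and so on) is a hypothetical choice for illustration, not the project's actual structure.

```python
# Hypothetical case-file layout for the recorder module (illustrative only;
# field names and structure are assumptions, not the project's actual format).
import json

case = {
    "case_id": "case-001",
    "disease_condition": "example condition",       # placeholder label
    "grid": {"rows": 8, "cols": 6},                  # sound-map grid size, configurable per case
    "auscultation_sites": [
        # each common auscultation point carries its recorded sound
        {"name": "aortic",    "cell": [1, 4], "sound_file": "aortic.wav"},
        {"name": "pulmonic",  "cell": [1, 2], "sound_file": "pulmonic.wav"},
        {"name": "tricuspid", "cell": [4, 3], "sound_file": "tricuspid.wav"},
        {"name": "mitral",    "cell": [5, 4], "sound_file": "mitral.wav"},
    ],
}

# Persist the case so the trainer module can later load and replay it.
with open("case-001.json", "w") as f:
    json.dump(case, f, indent=2)
```

A flat JSON document like this keeps the case file easy to edit by hand, which matches the stated goal of maximum customizability per case.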
Architecture of the sound map
• Sounds are recorded at the common auscultation sites
• A grid is specified; within each grid cell the sound is assumed to have no variation
• The grid may be unique for each case
• Grid cells that contain a common auscultation site produce the exact recorded sound
• All other cells produce a linear combination of the sounds heard at the common auscultation sites (a sketch of this mixing is shown below)
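The slides state that non-site cells play a linear combination of the recorded sounds but do not say how the weights are chosen. The sketch below uses inverse-distance weighting from the cell to each auscultation site purely as an illustration of such a combination; the grid positions, signal lengths, and weighting scheme are all assumptions.

```python
# Illustrative sketch: a cell's sound as a linear combination of the recordings
# at the common auscultation sites. Inverse-distance weighting is an assumption;
# the actual weighting scheme is not specified in the slides.
import numpy as np

def cell_sound(cell, sites, recordings, eps=1e-6):
    """cell: (row, col); sites: {name: (row, col)}; recordings: {name: np.ndarray}."""
    # A cell that holds an auscultation site returns the exact recording.
    for name, pos in sites.items():
        if tuple(pos) == tuple(cell):
            return recordings[name]

    # Otherwise weight each recording by the inverse distance to its site.
    weights = {}
    for name, (r, c) in sites.items():
        d = np.hypot(cell[0] - r, cell[1] - c)
        weights[name] = 1.0 / (d + eps)
    total = sum(weights.values())

    # Linear combination of the (equal-length) recorded signals.
    return sum((w / total) * recordings[name] for name, w in weights.items())

# Example: two dummy one-second signals at 8 kHz standing in for recordings.
fs = 8000
t = np.arange(fs) / fs
recordings = {
    "aortic": np.sin(2 * np.pi * 100 * t),
    "mitral": np.sin(2 * np.pi * 60 * t),
}
sites = {"aortic": (1, 4), "mitral": (5, 4)}
print(cell_sound((3, 4), sites, recordings).shape)  # (8000,)
```

Because the combination is linear, cells adjacent to a site sound almost identical to the recording there and blend smoothly toward the neighbouring sites, which is the behaviour the grid assumption relies on.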
Architecture of the sound map: justification
• Doctors are concerned only with the sounds heard at the common auscultation points
• It is better to reproduce the actual recorded sounds at the common auscultation sites
• What the user hears between those points can likewise be expressed as a linear combination of the sounds heard at the common points
Tasks to be completed in the next two weeks
• Starting camera integration and visual-cue processing
• Recording sounds from actual patients:
  • Implementing the hardware
  • Recording
  • Configuring sound maps for the recorded cases