This research aims to provide multi-lingual and deviceless computer access for disabled users using hand gestures. By mapping gestures to characters, it offers a simple and cost-effective way for individuals with disabilities to communicate and control computers. The study explores a novel approach that captures user gestures, generates codes, and executes operations based on the gestures. By eliminating the need for glove-based sensing technologies, it enhances natural human-computer interaction for disabled users. This innovative system involves image capturing, pre-processing, edge detection, tracking, code generation, and action execution stages. The methodology focuses on mapping hand gestures to language characters to facilitate communication and computer access for disabled individuals.
MULTI-LINGUAL AND DEVICELESS COMPUTER ACCESS FOR DISABLED USERS C. Premnath and J. Ravikumar S.S.N. College of Engineering, Tamil Nadu
Abstract • The hand is one of the most effective interaction tools for HCI. • Currently, the only technology that satisfies the advanced requirements is glove-based sensing. • We present a new tool for gesture analysis, based on a simple gesture-to-symbol/character mapping especially suited to disabled users.
AIM: To use the hand gestures of the user to help them control/access the computer easily. OBJECTIVES: • To provide a simple and inexpensive system of communication to people with single or multiple disabilities. • To overcome language barriers in communication/computer access.
INTRODUCTION • Physical difficulties and impairments reduce computer use. • Direct use of the hand as an input device is an attractive method for providing natural human-computer interaction (HCI).
Currently, the only technology that satisfies the advanced requirements of hand-based input for HCI is glove-based sensing. • It hinders the ease and naturalness with which the user can interact with the computer-controlled environment. • It requires long calibration and setup procedures.
Glove-Based Approach • The basic operation is to sense the gesture by electric/magnetic contact or by monitoring threshold values in chemical/electrode-based sensors. • The sensors are connected to a control unit that identifies the gesture. • Drawbacks: cost, dexterity, and flexibility.
Computer vision has the potential to provide much more natural, non-contact solutions.
Gesture recognition methods • Model-based approach • Appearance-based approach
Model-Based Approach • Model-based approaches estimate the position of a hand by projecting a 3-D hand model into image space and comparing it with image features. • The steps involved are: • Extracting a set of features from the input images • Projecting the model onto the scene (or back-projecting the image features into 3-D) • Establishing a correspondence between groups of model and image features
Appearance-Based Approach • Appearance-based approaches estimate hand postures directly from the images after learning the mapping from image-feature space to hand-configuration space. • The feature vectors obtained are compared against user templates to determine the user whose hand photograph was taken.
• Appearance-based approaches require considerable research on the mapping and other relevant work. • However, they allow us to create simple and cost-effective systems (see the sketch below).
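A minimal sketch of the template comparison step described above, assuming a nearest-neighbour match over stored feature vectors with Euclidean distance; the metric, data layout, and names are illustrative assumptions, not the authors' implementation:

```java
public class TemplateMatcher {
    // Nearest-neighbour comparison of an extracted feature vector against
    // stored user templates (Euclidean distance is an assumed metric).
    static int bestMatch(double[] features, double[][] templates) {
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int t = 0; t < templates.length; t++) {
            double d = 0;
            for (int i = 0; i < features.length; i++) {
                double diff = features[i] - templates[t][i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = t; }
        }
        return best; // index of the closest stored template
    }
}
```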
Systems providing computer access for people with disabilities • JavaSpeak • Parses the program and "speaks" the program's structure to a blind user. • ViaVoice, which has a published API, is used as the speech reader.
Emacspeak • Provides functions geared directly to programming. • Suitable only for someone familiar with a UNIX environment.
METHODOLOGY • A novel approach that maps the character set of the language to the possible set of hand gestures and executes the action mapped to each particular gesture. • Capture the user's gesture • Manipulate it to create a 5-digit code • Execute the required system operation • User-friendliness, provided through audio prompts.
The phases involved • IMAGE CAPTURING • PRE-PROCESSING • EDGE DETECTION • EDGE TRACKING • CODE GENERATION • ACTION EXECUTION
Image Capturing • Setup and capture
Pre-processing • A synthetic image is produced. • An arithmetic operation is performed on the different colour channels (see the sketch below).
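The slides do not specify the channel arithmetic, so the following is a minimal sketch assuming a simple combination such as 2R - G - B, which emphasizes skin-coloured regions; the operation, method name, and class name are assumptions:

```java
import java.awt.image.BufferedImage;

public class PreProcess {
    // Builds a grey-level "synthetic" image from per-pixel channel arithmetic.
    // The 2R - G - B combination is an assumption; the deck only says an
    // arithmetic operation is performed with the different channels.
    static BufferedImage synthetic(BufferedImage src) {
        BufferedImage out = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                int v = Math.min(255, Math.max(0, 2 * r - g - b)); // clamp to [0,255]
                out.setRGB(x, y, (v << 16) | (v << 8) | v);
            }
        }
        return out;
    }
}
```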
a) Sample input image b) Synthetic image
Edge Detection • Need for edge detection • Edges • Edge detection methods • Fourier domain • Spatial domain
GradientMagnitude operation • A spatial-domain method. • Performs convolution operations on the source image using kernels (a sketch follows the sample output below).
Sample Output a) Synthetic image b) Edge detection output
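The GradientMagnitude name matches the Java Advanced Imaging operator, but the deck does not say which library was used. A self-contained sketch of the same idea, assuming 3x3 Sobel kernels:

```java
import java.awt.image.BufferedImage;

public class EdgeDetect {
    // Gradient magnitude via 3x3 Sobel convolution kernels (an assumption;
    // the deck only says kernels are convolved with the source image).
    static final int[][] KX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] KY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    static BufferedImage gradientMagnitude(BufferedImage grey) {
        int w = grey.getWidth(), h = grey.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                for (int j = -1; j <= 1; j++) {
                    for (int i = -1; i <= 1; i++) {
                        int v = grey.getRGB(x + i, y + j) & 0xFF; // grey level
                        gx += KX[j + 1][i + 1] * v;
                        gy += KY[j + 1][i + 1] * v;
                    }
                }
                int mag = Math.min(255, (int) Math.hypot(gx, gy));
                out.setRGB(x, y, (mag << 16) | (mag << 8) | mag);
            }
        }
        return out;
    }
}
```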
Edge Tracking • Find the critical points (fingertips and finger valleys).
Let us see in detail how we trace the fingertip, shown below. [Figure: in-depth fingertip image]
Tracing the finger valley, shown below, is done in the exact reverse manner as described for the fingertip. [Figure: in-depth finger-valley image]
Output after edge tracking: a) Critical points marked with red dots. b) Finger length computed using the Pythagorean theorem (see the sketch below).
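A minimal sketch of the Pythagorean finger-length computation, assuming the length is taken as the Euclidean distance from the fingertip to the midpoint of its two adjacent valleys; the midpoint choice and the names are assumptions:

```java
import java.awt.Point;

public class FingerLength {
    // Finger length as the Euclidean distance between the fingertip and the
    // midpoint of its two adjacent finger valleys (the midpoint choice is an
    // assumption; the deck only says the Pythagorean theorem is applied).
    static double length(Point tip, Point leftValley, Point rightValley) {
        double bx = (leftValley.x + rightValley.x) / 2.0;
        double by = (leftValley.y + rightValley.y) / 2.0;
        double dx = tip.x - bx, dy = tip.y - by;
        return Math.sqrt(dx * dx + dy * dy); // hypotenuse of the right triangle
    }
}
```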
CODE GENERATION • Using phalanx information
Values to be assigned • 1 if the finger is open. • 0 if the finger is half-closed, i.e., only the proximal phalanx is visible. • The system already has the full finger-length information of the user on record. • During code generation: • 1 is assigned when the measured length approximately matches the stored value. • 0 is assigned when the obtained finger length is about half that of the corresponding one in the database.
5 fingers, 2 values each • Overall 32 (2×2×2×2×2) action gestures (see the sketch below).
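A minimal sketch of the code-generation rule above; the 0.75 open/half-closed threshold and the names are assumptions:

```java
public class CodeGenerator {
    // Builds the 5-digit binary gesture code by comparing each measured
    // finger length against the user's stored full-length values.
    // The 0.75 threshold between "half" and "full" is an assumption.
    static String generate(double[] measured, double[] stored) {
        StringBuilder code = new StringBuilder();
        for (int i = 0; i < 5; i++) {
            // 1 = finger open (length ~ stored), 0 = half-closed (~ half)
            code.append(measured[i] >= 0.75 * stored[i] ? '1' : '0');
        }
        return code.toString(); // e.g. "10010" selects one of 32 gestures
    }
}
```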
Multilingualism • Map the gestures, currently associated only with English characters, to the characters of other languages by analyzing their phonetics and their translation to English.
For example, the phonetically equivalent characters in Tamil, Hindi, Telugu, and Malayalam can all be mapped to the English letter 'A'.
• In European languages the alphabets are largely similar, so voice-engine support is the key requirement. • Latin scripts are handled directly; non-Latin scripts (e.g., Hindi and Arabic) are supported by tailoring the run-time speech engine, FreeTTS (see the sketch below).
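The deck names FreeTTS as the run-time speech engine. A minimal usage sketch with the stock FreeTTS API; "kevin16" is the default English voice bundled with FreeTTS, and additional-language voices would be registered the same way (the spoken string is a placeholder):

```java
import com.sun.speech.freetts.Voice;
import com.sun.speech.freetts.VoiceManager;

public class Speak {
    public static void main(String[] args) {
        // "kevin16" is the stock 16 kHz English voice bundled with FreeTTS;
        // multilingual support would plug in additional voices the same way.
        Voice voice = VoiceManager.getInstance().getVoice("kevin16");
        if (voice == null) {
            System.err.println("Voice not found; check freetts.jar on the classpath");
            return;
        }
        voice.allocate();                 // load voice resources
        voice.speak("File selected: A");  // audio confirmation to the user
        voice.deallocate();               // release resources
    }
}
```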
ACTION EXECUTION • The tree panel acquires the path information of a file/folder whenever that file/folder is selected by the user's input. • The filename is passed to the speech-synthesizer unit and verified by the user.
The JMF player controls the browsing work. • For example, if the character 'A' is passed to the file manager, it passes the next file/folder name starting with the letter 'A' to the JMF player. • File operations depend on the type of file selected (media/text) and the user's input gesture: • Pass the file to the JMF player unit • Execute the appropriate operations (see the sketch below)
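A minimal sketch of media playback with the standard Java Media Framework API, assuming the selected path comes from the tree panel (the file name here is a placeholder):

```java
import java.io.File;
import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Player;

public class PlayMedia {
    public static void main(String[] args) throws Exception {
        // Placeholder path; in the system it would come from the tree panel.
        File selected = new File("sample.wav");
        // createRealizedPlayer blocks until the player is ready to start.
        Player player = Manager.createRealizedPlayer(
                new MediaLocator(selected.toURI().toURL()));
        player.start(); // e.g. triggered by the user's "play" gesture
    }
}
```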
Features • Minimized cost and user-friendliness. • Flexibility to change the gesture mapping based on the user's comfort. • Ambidexterity.
Limitations • Gesture mapping for languages with large character sets, such as Chinese and Japanese, is difficult. • Voice support depends on the speech engine.
Conclusion • A novel approach for providing multilingual computer access to disabled users. • Overcomes problems related to the user's age and physical characteristics. • Supports illiterate users.
FUTURE WORK • Use both hands as input, aided with touch-pad technology, for computer access. • 1024 gestures (2^10, taking 2 values for each of the ten fingers). • Assume the ten-bit code 10000 10010 is associated with the word "Pause"; the system would then type the word "Pause" if the environment is a text editor and PAUSE the current music track if the environment is a music player (see the sketch below).
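A minimal sketch of that context-dependent dispatch; the code/word pair comes from the example above, while the Environment type, method names, and printed actions are illustrative assumptions:

```java
import java.util.Map;

public class GestureDispatcher {
    // Dispatches a ten-bit two-hand gesture code to a context-dependent
    // action, as in the future-work example. The "1000010010" -> "Pause"
    // pair comes from the deck; everything else is illustrative.
    enum Environment { TEXT_EDITOR, MUSIC_PLAYER }

    static final Map<String, String> WORDS = Map.of("1000010010", "Pause");

    static void dispatch(String code, Environment env) {
        String word = WORDS.get(code);
        if (word == null) return; // unmapped gesture
        switch (env) {
            case TEXT_EDITOR -> System.out.println("Typing: " + word);
            case MUSIC_PLAYER -> System.out.println("Pausing current track");
        }
    }
}
```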
Map the gestures to system commands. • Extend support to other applications currently inaccessible to disabled users.
The project has proposed an ambidextrous system in which computer access is all within your five fingers, and the proposed enhancement has the potential to bring the world into your hands.
Thank you for listening patiently to our work. QUESTIONS ? ravikumar.jayaraman@gmail.com, prem86ssn@gmail.com