Text Extraction from an Image Using Neural Networks
Guided by: Miss. S. V. Ananthi, B.E.
Team Members:
S. Shenbagavalli (1104148)
T. Shyamala (1104150)
K. Poornapushkala (1104161)
Abstract: In this project we present a new text extraction method based on the discrete wavelet transform (DWT) and a neural network. The method extracts features of candidate text regions using the discrete wavelet transform, and a neural network trained with the back-propagation (BP) algorithm learns from these features, so that the final network output for real text regions differs from that for non-text regions.
Flow Chart of Proposed System
Image → Haar DWT → Feature Extraction → Text Extraction Using Neural Network → Text Regions → Extraction Results
Haar DWT
The result of a one-level 2-D DWT decomposition: four sub-bands LL, HL, LH, and HH.
Haar DWT (contd.)
The row operation of the 2-D Haar DWT transforms the original image

A B C D
E F G H
I J K L
M N O P

into

(A+B) (C+D) (A-B) (C-D)
(E+F) (G+H) (E-F) (G-H)
(I+J) (K+L) (I-J) (K-L)
(M+N) (O+P) (M-N) (O-P)

The column operation of the 2-D Haar DWT then combines adjacent row pairs, giving

(A+B)+(E+F) (C+D)+(G+H) (A-B)+(E-F) (C-D)+(G-H)
(I+J)+(M+N) (K+L)+(O+P) (I-J)+(M-N) (K-L)+(O-P)
(A+B)-(E+F) (C+D)-(G+H) (A-B)-(E-F) (C-D)-(G-H)
(I+J)-(M+N) (K+L)-(O+P) (I-J)-(M-N) (K-L)-(O-P)
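The row and column operations above can be sketched as a small NumPy function. This is a minimal illustration using unnormalised pair sums and differences, exactly matching the (A+B)/(A-B) layout on the slide; practical Haar DWT implementations usually scale the coefficients.

```python
import numpy as np

def haar_dwt_2d(img):
    """One level of the 2-D Haar DWT via a row pass then a column pass.

    Unnormalised: adjacent-pair sums go to one half, differences to the
    other, matching the slide's (A+B) / (A-B) layout."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    # Row pass: pair sums fill the left half, pair differences the right half.
    rows = np.empty_like(img)
    rows[:, : w // 2] = img[:, 0::2] + img[:, 1::2]
    rows[:, w // 2 :] = img[:, 0::2] - img[:, 1::2]
    # Column pass: sums of adjacent rows on top, differences on the bottom.
    out = np.empty_like(rows)
    out[: h // 2, :] = rows[0::2, :] + rows[1::2, :]
    out[h // 2 :, :] = rows[0::2, :] - rows[1::2, :]
    # Quadrants: LL top-left, HL top-right, LH bottom-left, HH bottom-right.
    return out

# 4x4 example: encode the slide's A..P as the values 1..16.
x = np.arange(1, 17, dtype=float).reshape(4, 4)
y = haar_dwt_2d(x)
```

For this smooth ramp image the HH quadrant comes out all zero, since the diagonal detail of a linear gradient vanishes.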
Back Propagation Preparation
• Training set: a collection of input-output patterns used to train the network.
• Testing set: a collection of input-output patterns used to assess network performance.
• Learning rate (η): a scalar parameter, analogous to the step size in numerical integration, used to set the rate of weight adjustments.
Back Propagation Preparation: Training Set
After applying the DWT, each pixel of the input image is converted into 3 features (one per detail sub-band: LH, HL, HH) using the formula.
Proposed Architecture of Neural Network
Input nodes: the three wavelet features LH, HL, and HH; one layer of hidden nodes; a single output node.
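A forward pass through this 3-input architecture can be sketched as follows. The hidden-layer size (4) and the random weights are illustrative assumptions, since the slide does not specify them:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W_hidden = rng.normal(size=(3, 4))  # 3 input nodes (LH, HL, HH) -> 4 hidden nodes (assumed size)
W_output = rng.normal(size=(4, 1))  # 4 hidden nodes -> 1 output node

features = np.array([0.2, 0.7, 0.4])           # (LH, HL, HH) features for one pixel
hidden = sigmoid(features @ W_hidden)          # hidden-layer activations
score = sigmoid(hidden @ W_output)[0]          # output near 1 => text region
```

With trained weights, thresholding `score` (e.g. at 0.5) would separate text pixels from non-text pixels.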
Back Propagation Algorithm
Initialize the weights
Repeat
    For each training pattern
        Train on that pattern
Until the error is acceptably low
Steps in BPN
1. Feed-forward computation
2. Back-propagation to the output layer
3. Back-propagation to the hidden layer
4. Weight updates
The algorithm stops when the value of the error function has become sufficiently small.
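The four steps above can be sketched as a full-batch training loop for the 3-input network. The training data here is a synthetic stand-in (random 3-feature vectors with an illustrative text/non-text rule), not the project's DWT features, and the layer sizes and learning rate are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in data: 3 features per pixel (LH, HL, HH) -> text (1) / non-text (0).
# The labelling rule below is illustrative, not from the project.
X = rng.random((200, 3))
y = (X.sum(axis=1) > 1.5).astype(float).reshape(-1, 1)

# Initialize the weights of an assumed 3-4-1 network.
W1 = rng.normal(0.0, 0.5, (3, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)
eta = 1.0  # learning rate

for epoch in range(5000):
    # Step 1: feed-forward computation.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    if float((err ** 2).mean()) < 1e-3:   # stop when the error function is small
        break
    # Step 2: back-propagation to the output layer.
    d_out = err * out * (1 - out)
    # Step 3: back-propagation to the hidden layer.
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Step 4: weight updates (gradient descent on the mean squared error).
    W2 -= eta * h.T @ d_out / len(X); b2 -= eta * d_out.mean(axis=0)
    W1 -= eta * X.T @ d_h / len(X);   b1 -= eta * d_h.mean(axis=0)

accuracy = float(((out > 0.5) == (y > 0.5)).mean())
```

Because the toy labels are nearly linearly separable in the three features, the network's training accuracy climbs well above chance within a few thousand epochs.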
Back Propagation Preparation: Testing Set
Results So Far
The original image and its DWT coefficients.