Deep Convolutional Neural Network for Hyperspectral Image Classification

Presentation Transcript


  1. Mengmeng Zhang, Beijing University of Chemical Technology. Deep Convolutional Neural Network for Hyperspectral Image Classification

  2. Hyperspectral Imagery Features • Hundreds of bands • Narrow spectral range • Continuous wavelength • Large amount of data • Redundant information Advantages • Analyze objects by their spectral features • Fine-grained classification and detection of objects

  3. Hyperspectral Imagery • Remote Sensing • Target detection • Urban classification • Resources investigation • Medical HSI • Disease diagnosis • Surgery guidance • Food Safety and Security • Pesticide residue • Pest detection

  4. Search in Web of Science

  5. Convolutional Neural Network Features • Local receptive field (sparse connectivity): enforcing a local connectivity pattern between neurons of adjacent layers • Shared weights: convolutional kernels are replicated with shared weights and biases • Pooling: reducing the number of neural units through down-sampling
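
A minimal PyTorch sketch (added for illustration, not from the slides) of the three properties listed above: a small convolution kernel gives local receptive fields, the same kernel weights are reused at every spatial position, and max pooling down-samples the feature maps. All sizes are arbitrary toy values.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3)  # local 3x3 receptive field
pool = nn.MaxPool2d(kernel_size=2)                              # down-sampling by 2

x = torch.randn(1, 1, 28, 28)          # dummy single-channel image
y = pool(torch.relu(conv(x)))          # -> (1, 8, 13, 13)

print(conv.weight.shape)  # torch.Size([8, 1, 3, 3]): one shared kernel per output map
print(y.shape)            # pooling reduced the 26x26 feature maps to 13x13
```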

  6. Apply CNN in HSI Classification • CNN is employed with multiple layers for HSI classification. • Each pixel is viewed as a 2D image whose height is equal to 1. • The shallow net contains an input layer, a convolutional layer, a max-pooling layer, and a fully connected layer. W. Hu, Y. Huang, W. Li, F. Zhang, and H. Li, “Convolutional Neural Networks for Hyperspectral Image Classification,” Journal of Sensors, vol. 2015, Article ID 258619, 12 pages, 2015. [130+ citations]
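
A hedged sketch of such a shallow per-pixel network: convolution, max pooling, and fully connected layers applied along the spectral axis, treating each pixel as a 1 x n_bands image. The layer sizes below are illustrative assumptions, not the exact configuration of Hu et al.

```python
import torch
import torch.nn as nn

n_bands, n_classes = 220, 16           # Indian Pines-like dimensions

shallow_net = nn.Sequential(
    nn.Conv1d(1, 20, kernel_size=11),  # each pixel is a 1 x n_bands "image"
    nn.ReLU(),
    nn.MaxPool1d(kernel_size=3),
    nn.Flatten(),
    nn.Linear(20 * ((n_bands - 11 + 1) // 3), 100),
    nn.ReLU(),
    nn.Linear(100, n_classes),
)

pixels = torch.randn(32, 1, n_bands)   # a batch of 32 spectral vectors
logits = shallow_net(pixels)
print(logits.shape)                    # (32, 16): class scores per pixel
```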

  7. Apply CNN in HSI Classification • CNN models are heavily parameterized, and enormous amounts of training data are required to ensure performance. • The acquisition of HSI with ground truth is very expensive and time-consuming. • In HSI, the data structure is complex. • In HSI, neighboring pixels tend to belong to the same class with high probability. For reference, AlexNet has about 60M parameters, while a typical labeled HSI dataset offers only about 10,000 samples.

  8. Hyperspectral Image Classification Using Deep Pixel-Pair Features

  9. Pixel-Pair Model of Training Samples In doing so, for each class with n labeled pixels, on the order of n² pixel pairs can be formed, so the total number of training samples is much larger than the original size. Benefit 1: the pixel-pair model can generate sufficient input data to learn the parameters in the CNN architecture.
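
A simplified NumPy sketch of the pair-generation idea (my reading of the slide; the exact pairing and labeling scheme in the paper may differ): pairing n labeled pixels of one class yields on the order of n² training samples.

```python
import itertools
import numpy as np

def make_pairs(class_pixels: np.ndarray) -> np.ndarray:
    """class_pixels: (n, n_bands) spectra of one class -> (n*(n-1), 2, n_bands) ordered pairs."""
    return np.stack([np.stack([a, b]) for a, b in itertools.permutations(class_pixels, 2)])

spectra = np.random.rand(50, 220)       # 50 labeled pixels of one class, 220 bands
pairs = make_pairs(spectra)
print(len(spectra), "->", len(pairs))   # 50 -> 2450 pair samples
```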

  10. Feature Extraction Using CNN Benefit 2: PPFs with multiple layers tend to be more discriminative and reliable.

  11. Joint Classification with Voting Strategy Benefit 3: neighboring pixels tend to belong to the same class with high probability, which inspires a joint classification with a voting strategy during the testing process.
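
A hedged sketch of the voting idea: pair the test pixel with each of its spatial neighbours, classify every pair, and take the majority label. `classify_pair` is a stand-in for the trained pair classifier, which is not defined here.

```python
from collections import Counter
import numpy as np

def vote_label(center, neighbours, classify_pair):
    """center: (n_bands,) spectrum; neighbours: iterable of (n_bands,) spectra."""
    votes = [classify_pair(center, nb) for nb in neighbours]
    return Counter(votes).most_common(1)[0][0]   # majority vote over pair predictions

# toy usage with a dummy pair classifier (sign of the spectral correlation)
dummy = lambda a, b: int(np.corrcoef(a, b)[0, 1] > 0)
center = np.random.rand(220)
neighbours = [np.random.rand(220) for _ in range(8)]
print(vote_label(center, neighbours, dummy))
```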

  12. Compare With Siamese Neural Network (SNN) [Figure: SNN vs. PPF architectures] G. Koch, R. Zemel, and R. Salakhutdinov, “Siamese Neural Networks for One-Shot Image Recognition,” Proceedings of the 32nd International Conference on Machine Learning, Lille, France, vol. 37, 2015.

  13. Indian Pines HSI Dataset • Sensor: AVIRIS • Location: Indian Pines, Indiana • Spatial size/resolution: 145×145 pixels, 20 m • Spectrum: 0.4-2.45 µm, 220 bands • Classes: 16 (8 used)

  14. University of Pavia HSI Dataset • Sensor: ROSIS • Location: Pavia, Italy • Spatial size/resolution: 610×340 pixels, 1.3 m • Spectrum: 0.43-0.86 µm, 103 bands

  15. Classification Performance

  16. Classification Performance (a) Indian Pines data (b) University of Pavia data W. Li, G. Wu, F. Zhang, and Q. Du, “Hyperspectral Image Classification Using Deep Pixel-Pair Features,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 2, pp. 844-853, Feb. 2017.

  17. Diverse Region-Based CNN for Hyperspectral Image Classification

  18. Diverse Region-Based Input Spatial-search-window-based methods are generally utilized to take advantage of the spatial information in HSI, but: What is an appropriate window size? Does it preserve enough spatial detail? Does it generalize well across environmental contexts? Benefit 1: diverse region-based input can integrate sufficient spatial information: pure appearance characteristics; distinct appearance of different regions; global context appearance; local context appearance.
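
A NumPy sketch of cropping diverse regions around one pixel, as I read the slide; the exact region set and window sizes used by DR-CNN are assumptions here (a 3x3 centre block, an 11x11 global window, and four half regions).

```python
import numpy as np

def diverse_regions(cube: np.ndarray, r: int, c: int, w: int = 11):
    """cube: (H, W, bands) HSI; returns a dict of regions centred on pixel (r, c)."""
    h = w // 2
    window = cube[r - h:r + h + 1, c - h:c + h + 1, :]   # global w x w context
    return {
        "center": cube[r - 1:r + 2, c - 1:c + 2, :],     # pure appearance (3x3)
        "global": window,
        "top":    window[:h + 1, :, :],                  # four half regions give
        "bottom": window[h:, :, :],                      # distinct local context
        "left":   window[:, :h + 1, :],
        "right":  window[:, h:, :],
    }

cube = np.random.rand(145, 145, 200)                     # toy HSI cube
print({k: v.shape for k, v in diverse_regions(cube, 72, 72).items()})
```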

  19. Diverse Region-Based CNN (DR-CNN) Benefit 2: diverse inputs with adapted CNN branches tend to be more discriminative and reliable.

  20. Feature Extraction Branch in DR-CNN • This branch, with a “multi-scale summation” module, can extract the contextual characteristics present in each half region. • This branch is designed to extract pure spectral features of each central pixel (3×3 block). • The six branches are trained individually; then they are combined and fed into this feature integration and classification branch.

  21. Data Augmentation in DR-CNN • Only a few labeled samples may be available in practice, restricting the learning effect of CNN. • A two-step data augmentation strategy is used: the first step involves data flipping; the second step adds small Gaussian noise to the flipped data. Benefit 3: the number of training samples can be increased by a factor of two, ensuring more accurate estimation of the parameters of DR-CNN.
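
A minimal sketch of the two-step augmentation described above: flip the input patch, then add small Gaussian noise to the flipped copy. The noise level is an assumed illustrative value.

```python
import numpy as np

def augment(patch: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """patch: (h, w, bands) training patch -> one additional training sample."""
    flipped = np.flip(patch, axis=1)                              # step 1: flip
    return flipped + np.random.normal(0.0, sigma, patch.shape)    # step 2: Gaussian noise

patch = np.random.rand(11, 11, 200)
extra = augment(patch)
print(patch.shape, extra.shape)   # one extra sample per original: training set doubled
```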

  22. Classification Performance • From the figure, the classification performance is satisfactory when the square-shaped region is 11×11. Thus, the diverse input regions of DR-CNN are set as shown in the table below.

  23. Classification Performance

  24. Classification Performance M. Zhang, W. Li, and Q. Du, “Diverse Region-Based CNN for Hyperspectral Image Classification,” IEEE Transactions on Image Processing, vol. 27, no. 6, pp. 2623-2634, 2018.

  25. Multi-Source Remote Sensing [Figure: spatial information network linking multi-source data: hyperspectral (HRS), multispectral (MS), panchromatic (PAN), visible, and infrared (IR) imagery]

  26. Multi-Source Remote Sensing Data Classification Based on Deep CNN

  27. Two-Branch CNN Architecture • Two different CNN branches are utilized to extract features from HSI and LiDAR data. • The spatial and spectral features of HSI are first integrated in a dual-tunnel branch, while features are extracted from the LiDAR data in a separate branch. • The features from the two sources are concatenated and fed into a fully connected layer. • The two branches are trained individually and then combined by fine-tuning.
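
A hedged PyTorch sketch of the fusion step described above: features from an HSI branch and a LiDAR branch are concatenated and passed through fully connected layers. The branch networks and feature sizes are stand-ins, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    def __init__(self, hsi_branch, lidar_branch, hsi_dim=128, lidar_dim=64, n_classes=15):
        super().__init__()
        self.hsi_branch, self.lidar_branch = hsi_branch, lidar_branch
        self.classifier = nn.Sequential(
            nn.Linear(hsi_dim + lidar_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, hsi_patch, lidar_patch):
        fused = torch.cat([self.hsi_branch(hsi_patch),
                           self.lidar_branch(lidar_patch)], dim=1)  # concatenate source features
        return self.classifier(fused)

# toy usage with placeholder branches that simply flatten and project their inputs
hsi_b = nn.Sequential(nn.Flatten(), nn.LazyLinear(128))
lidar_b = nn.Sequential(nn.Flatten(), nn.LazyLinear(64))
model = TwoBranchFusion(hsi_b, lidar_b)
print(model(torch.randn(4, 100, 11, 11), torch.randn(4, 1, 11, 11)).shape)  # (4, 15)
```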

  28. Dual-Tunnel CNN Branch in HSI • The 2D and 1D tunnels in the HSI branch use the same operations; • the 2D CNN focuses on spatial information and the 1D CNN on spectral information; • flattened spatial and spectral features are merged before the fully connected layer. Benefit 1: the spatial-spectral information in HSI is fully used.
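
A hedged sketch of the dual-tunnel idea: a 2D tunnel over the spatial patch and a 1D tunnel over the centre pixel's spectrum, flattened and merged before the fully connected layers. Channel counts and kernel sizes are assumed values.

```python
import torch
import torch.nn as nn

class DualTunnel(nn.Module):
    def __init__(self, n_bands=144):
        super().__init__()
        self.spatial = nn.Sequential(              # 2D tunnel: spatial context
            nn.Conv2d(n_bands, 32, 3), nn.ReLU(),
            nn.Conv2d(32, 32, 3), nn.ReLU(), nn.Flatten())
        self.spectral = nn.Sequential(             # 1D tunnel: spectral signature
            nn.Conv1d(1, 32, 7), nn.ReLU(), nn.MaxPool1d(2), nn.Flatten())

    def forward(self, patch):                      # patch: (batch, bands, h, w)
        centre = patch[:, :, patch.shape[2] // 2, patch.shape[3] // 2].unsqueeze(1)
        return torch.cat([self.spatial(patch), self.spectral(centre)], dim=1)

feats = DualTunnel()(torch.randn(4, 144, 11, 11))
print(feats.shape)   # merged spatial-spectral feature vector per pixel
```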

  29. Cascade-Block CNN Branch in LiDAR • Features are extracted from the LiDAR data through one convolution and two cascade blocks. • A cascade block is designed to merge features of different levels and enable feature reuse; it consists of convolution, batch normalization, and activation layers. • Two paths bridge the convolution and activation operations. Benefit 2: the cascade network passes multi-scale features to the output.
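
A plausible reading of the cascade block in code form (an assumption, not the paper's exact block): two convolution / batch-norm / ReLU stages whose inputs and outputs are bridged by skip paths, so features from several levels reach the block output.

```python
import torch
import torch.nn as nn

class CascadeBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                    nn.BatchNorm2d(channels), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                    nn.BatchNorm2d(channels), nn.ReLU())

    def forward(self, x):
        y1 = self.stage1(x)
        y2 = self.stage2(x + y1)   # bridge 1: reuse the block input
        return x + y1 + y2         # bridge 2: pass all feature levels onward

lidar_feats = torch.randn(4, 32, 11, 11)       # LiDAR features after an initial convolution
print(CascadeBlock()(lidar_feats).shape)       # shape preserved: (4, 32, 11, 11)
```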

  30. Cascade-Block CNN Branch in LiDAR *CIFAR-10: 60,000 32×32 colour images in 10 classes, 50,000 training / 10,000 test. *Cell Nuclei dataset: 22,444 36×36 colour images in 4 classes, 11,222 training / 11,222 test.

  31. Datasets: HSI and LiDAR [Figure: HSI, LiDAR, and ground-truth maps for the Houston (15 classes) and Trento (6 classes) datasets]

  32. Classification Performance

  33. Results - Houston Data

  34. Results - Houston Data 1) Ground truth 2) SVM: 80.49% 3) ELM: 81.92% 4) CNN-PPF: 83.33% 5) Proposed: 87.98%

  35. Results - Trento Data X. Xu, W. Li, Q. Ran, Q. Du, L. Gao, and B. Zhang, “Multi-source Remote Sensing Data Classification Based on Convolutional Neural Network,” IEEE Transactions on Geoscience and Remote Sensing, vol. PP, no. 99, pp. 1-13, 2017.

  36. Feature Extraction for Classification of Hyperspectral and LiDAR Data Using Patch-to-Patch CNN

  37. Whole Framework Based on PToP CNN The proposed framework includes: • A three-tower feature extractor called PToP CNN (Part I); • A hierarchical fusion module (Part II); • A classifier containing fully connected layers with softmax loss (Part III).

  38. Feature Integration Using PToP CNN Benefit 1: PToP CNN integrates two-domain translation during feature extraction process, thus enabling the seamless fusion of HSI and LiDAR data.

  39. Hierarchical Fusion Module Benefit 2: the hierarchical fusion module structures the integration hierarchy and extracts features of diverse hierarchies, including different convolutional filter scales and different convolutional layers.
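
A hedged sketch of the multi-scale part of the fusion idea: the same feature map is convolved with filters of several scales and the results are concatenated. Channel counts and the scale set (1, 3, 5) are assumptions, not the module's actual configuration.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    def __init__(self, in_ch=64, out_ch=32):
        super().__init__()
        self.scales = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5))

    def forward(self, x):
        # concatenate responses of different filter scales along the channel axis
        return torch.cat([torch.relu(conv(x)) for conv in self.scales], dim=1)

fused = MultiScaleFusion()(torch.randn(2, 64, 11, 11))
print(fused.shape)   # (2, 96, 11, 11): three filter scales fused
```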

  40. Datasets: HSI and LiDAR [Figure: HSI, LiDAR, and ground-truth maps for the Houston (15 classes) and Trento (6 classes) datasets]

  41. Classification Performance

  42. Classification Performance

  43. Classification Performance M. Zhang, W. Li, Q. Du, L. Gao, and B. Zhang, “Feature Extraction for Classification of Hyperspectral and LiDAR Data Using Patch-to-Patch CNN,” IEEE Transactions on Cybernetics, pp. 1-12, DOI: 10.1109/TCYB.2018.2864670, 2018.

  44. Conclusions • Deep CNNs have a great advantage in feature extraction: better designs for modeling and training yield more discriminative features. • For better application to HSI, open issues remain: • Few labeled samples (in quantity and quality): how can the training set be expanded? • Are the learned features transferable and do they generalize well to new tasks? • Is feature extraction from multi-source remote sensing data effective or not?

  45. Thanks!!
