
Cascaded Classifier for Automatic Crater Detection


Presentation Transcript


  1. Cascaded Classifier for Automatic Crater Detection Henry Z. Lo Advisor: Wei Ding Domain Scientist: Tomasz Stepinski Knowledge Discovery Lab University of Massachusetts Boston

  2. Overview • Introduction: • Cascading classifier. • Experimental road map. • Experiments: • Tests on feature sets. • Tests on positive example training set content. • Tests on negative example training set size. • Tests on negative example training set content.  • Discussion: • Implications of results. • Unresolved issues. • Future directions.

  3. Cascading Classifier • Architecture: • Layers of AdaBoost classifiers. • Each layer is trained on the false positives (FP) of the previous layer. • An input must be accepted by every layer, in sequence, to be considered a crater. • Rejection can happen at any stage.
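To illustrate the accept/reject flow described on this slide, here is a minimal Python sketch; the stage score functions and thresholds are hypothetical stand-ins, not OpenCV's internals.

```python
# Minimal sketch of the cascade's sequential accept/reject logic.
# The stage score functions and thresholds are hypothetical stand-ins.

def cascade_predict(window, stages):
    """Label a window as a crater only if every stage accepts it.

    `stages` is a list of (score_fn, threshold) pairs; each score_fn is a
    boosted (AdaBoost) stage trained on the false positives that survived
    the previous stages.
    """
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False  # rejection can happen at any stage
    return True           # accepted by all stages, sequentially
```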

  4. Cascading Classifier • Features: • Exclusively uses Haar-like features. • Can be calculated in constant time using an integral image. • Contrast based. • Scanned over the entire subwindow.
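To show why Haar-like features are constant-time, here is a small sketch using an integral image; the function names and the simple left/right contrast feature are illustrative, not OpenCV's implementation.

```python
import numpy as np

def integral_image(img):
    # Cumulative sums over rows and columns; one pass over the image.
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    # Sum of pixels in the rectangle [x, x+w) x [y, y+h) using 4 lookups.
    a = ii[y + h - 1, x + w - 1]
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    d = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def haar_two_rect_horizontal(ii, x, y, w, h):
    # Contrast between the left and right halves of a window: a simple
    # edge-type Haar-like feature, computed in constant time.
    half = w // 2
    left = rect_sum(ii, x, y, half, h)
    right = rect_sum(ii, x + half, y, half, h)
    return left - right
```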

  5. Cascading Classifier • Implementation: • Used the OpenCV implementation. • Free and open source. • Many variables: • Number of layers. • "Minimum hit rate" - per-stage detection (true positive) rate. • "Max false alarm" - per-stage false positive rate. • 3 feature sets.
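For reference, a hedged example of how the number of layers, minimum hit rate, and max false alarm map onto flags of OpenCV's opencv_traincascade tool (the presentation may have used the older opencv_haartraining, which takes analogous options); the paths and sample counts here are hypothetical.

```python
import subprocess

# Hypothetical paths and sample counts. The flags correspond to the
# parameters named on the slide: number of layers (-numStages), minimum
# hit rate (-minHitRate), and max false alarm (-maxFalseAlarmRate).
subprocess.run([
    "opencv_traincascade",
    "-data", "cascade_out/",       # output directory for the trained stages
    "-vec", "craters.vec",         # positive (crater) samples
    "-bg", "negatives.txt",        # list of negative (non-crater) images
    "-numPos", "900", "-numNeg", "300",
    "-numStages", "20",            # number of cascade layers
    "-minHitRate", "0.995",        # per-stage detection (hit) rate
    "-maxFalseAlarmRate", "0.5",   # per-stage false alarm rate
    "-featureType", "HAAR",
    "-mode", "ALL",                # Haar feature set: BASIC, CORE, or ALL
    "-w", "24", "-h", "24",
], check=True)
```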

  6. Experimental Road Map • Tweak for performance: • OpenCV parameters. • Features. • Training set. • The following OpenCV parameters improve performance: • Minimum hit rate. • Max false alarm. • Number of layers. • Still need to tweak features and training sets for: • Training time. • Generalizability.

  7. Experimental Road Map

  8. Experimental Road Map • Each of these factors will be tested individually for its effect on precision, recall, and F1. • We avoid studying interaction effects for simplicity. • In the future, we will investigate how to combine different features and training sets for optimal results.
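For clarity, the three reported metrics can be computed directly from detection counts; the counts in the usage line below are hypothetical, not results from these experiments.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from true positive, false positive,
    and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for one classifier run on a test tile:
print(precision_recall_f1(tp=80, fp=20, fn=40))  # (0.8, 0.666..., 0.727...)
```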

  9. Experimental Road Map • We use tile 3-24 for both training and testing. • This tile was chosen for its relatively smooth surface. • Future studies will test on other tiles as well.

  10. Feature Set Variation

  11. Feature Set Variation • OpenCV offers 3 different feature sets: • CORE: 1a, 1b, 2a, 2c. • BASIC: CORE + 2b, 2d, 3a. • ALL: all features. • Since ALL is a superset of CORE and BASIC, it should perform best.

  12. Feature Set Variation • In recall, CORE and BASIC outperformed ALL. • In precision and F1, the exact opposite was true.

  13. Haar Features

  14. Haar Features • Inclusion of tilted features is beneficial to performance. • More features than those provided may offer further benefit. • It is not obvious how to create custom Haar features in OpenCV. • Postponing creation of specialized Haar features.

  15. Ground Truth Windows

  16. Ground Truth Windows • Positive examples contained tightly cropped craters. • No crater rims or surrounding area. • Experimented with including the area around craters. • Range: 1x to 2x the crater radius, in steps of 0.1. • Example windows shown at scales 1.0, 1.2, 1.4, 1.6, 1.8, and 2.0.
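A minimal sketch of how a ground-truth crater (center and radius) could be cropped at the tested scales; the helper and its signature are hypothetical, not the project's actual extraction code.

```python
def crop_window(image, cx, cy, radius, scale):
    """Crop a square window around a crater center.

    scale = 1.0 crops tightly to the crater radius; 2.0 includes a margin
    of one extra radius around the rim (the range tested here).
    """
    half = int(round(radius * scale))
    y0, y1 = max(cy - half, 0), cy + half
    x0, x1 = max(cx - half, 0), cx + half
    return image[y0:y1, x0:x1]

# Windows at the tested scales 1.0, 1.1, ..., 2.0:
# windows = [crop_window(tile, cx, cy, r, 1.0 + 0.1 * i) for i in range(11)]
```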

  17. Ground Truth Windows • As the subwindow increased, precision and F1 increased. • However, recall suffered.

  18. Negative Example Set Size

  19. Negative Example Set Size • All classifiers tested so far were trained on 300 negative examples. • By providing the classifier with more negative examples, we give it more information. • Performance should increase with more negative examples. • Tested classifiers trained on 300, 400, 500, 600, and 700 negative examples.

  20. Negative Example Set Size • F1 and precision increase with more negative examples. • Recall decreases.

  21. Negative Example Manipulation

  22. Negative Example Manipulation • The idea is to put some false positives back into the training set. • This teaches the classifier from its own mistakes. • However, selecting the false positives is rather difficult, as we will see later.
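A sketch of the idea on this slide, mining false positives as new negative examples (hard-negative mining); the detector, per-tile catalog lookup, and overlap test are stand-ins for the project's own components, not OpenCV functions.

```python
def mine_hard_negatives(detect, tiles, craters_in, overlaps):
    """Collect detections that match no catalogued crater (hard negatives).

    `detect`, `craters_in`, and `overlaps` are stand-ins for the project's
    detector, per-tile crater catalog, and bounding-box overlap test.
    """
    hard_negatives = []
    for name, image in tiles.items():
        for x, y, w, h in detect(image):
            # A detection with no matching catalogued crater is a false
            # positive; keep its patch as a new negative training example.
            if not any(overlaps((x, y, w, h), c) for c in craters_in(name)):
                hard_negatives.append(image[y:y + h, x:x + w])
    return hard_negatives

# negatives += mine_hard_negatives(cascade_detect, training_tiles, catalog, box_overlap)
```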

  23. Result Implications • Window scaling has the most noticeable effect on F1, recall, and precision. • Next most important is the feature set used. • The number of negative training examples is the least important; however, this may be due to the small range of values being tested.

  24. Future Directions • Once optimal features and training sets are found, we can manipulate OpenCV variables.  • Recall that the classifier may be improved by the following: • More layers in the classifier. • Setting the minimum hit rate (recall). • Setting the max false alarm rate (precision). • Time complexity of classifier training requires further study.

  25. Future Directions • Further exploration of cascaded classification algorithm: • Testing classifier on other tiles. • Exploration of other object detection algorithms. • Neural networks.

  26. Questions?
