
Active Learning: Class Questions



Presentation Transcript


  1. Active Learning: Class Questions Meeting 10 — Feb 14, 2013 CSCE 6933 Rodney Nielsen

  2. Your Questions • The article states "The QBC algorithm assumes that the data is noise free, a perfect deterministic classifier exists, and all the classifiers in the committee agree with each other after full annotation. In a real world case, these assumptions are usually not true, and the effectiveness of QBC is not clear." • Are these two assumptions (the data is noise free, a perfect deterministic classifier exists) generally agreed upon?
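For reference, here is a minimal sketch of how committee disagreement is commonly scored in QBC (vote entropy). The committee, its classifiers' predict() interface, and the pool array are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def vote_entropy(committee, X_pool):
    """Score disagreement for each unlabeled example; higher means more disagreement."""
    # Each committee member votes a label for every example in the pool.
    votes = np.stack([clf.predict(X_pool) for clf in committee], axis=1)
    n_members = votes.shape[1]
    scores = np.empty(len(X_pool))
    for i, row in enumerate(votes):
        _, counts = np.unique(row, return_counts=True)
        p = counts / n_members
        scores[i] = -np.sum(p * np.log(p))   # entropy of the vote distribution
    return scores

# QBC queries the example the committee disagrees on most:
# query_idx = np.argmax(vote_entropy(committee, X_pool))
```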

  3. Your Questions • Also, is the effectiveness of QBC really not clear?

  4. Your Questions • The authors state that "This loop keeps going until the annotator stops or the database is fully annotated." Do they mean fully annotating the entire database? • I thought the point of active learning was that you didn't have to annotate everything.

  5. Your Questions • When the system is tested on the real database (the 3D model database), why does the average matching error rate initially increase with active learning compared to random sampling?

  6. Your Questions • Why is the annotation termed 'hidden'?

  7. Your Questions • Is testing on a synthetic database credible?

  8. Your Questions • Why is hidden annotation better than relevance feedback?

  9. Your Questions • How is uncertainty measure defined using multiple attributes?
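One possible reading, sketched below: if the system holds an estimated probability for each attribute applying to an object, a combined uncertainty can be the sum of per-attribute binary entropies. Treating attributes independently is an illustrative assumption here, not necessarily the paper's definition.

```python
import numpy as np

def multi_attribute_uncertainty(probs):
    """probs: array of shape (n_objects, n_attributes), each entry the estimated
    probability that the attribute applies. Returns one uncertainty score per object."""
    p = np.clip(probs, 1e-12, 1 - 1e-12)                      # avoid log(0)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))    # per-attribute binary entropy
    return entropy.sum(axis=1)                                # higher = more uncertain overall

# Example: the second object (probabilities near 0.5) is the most uncertain.
print(multi_attribute_uncertainty(np.array([[0.95, 0.10], [0.50, 0.60]])))
```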

  10. Your Questions • Are there better ways or possibilities where, instead of asking annotators to label, machines can obtain the information needed to label?

  11. Your Questions • I think in many cases the estimate is very hard. As in the paper, "For objects that have not been annotated, the learning algorithm estimates their probabilities." • I think these kinds of estimates are usually not reliable, and the same applies to the estimates used for selecting the most informative samples. • You ask annotators to label because a lot of information is lacking. Since that information is lacking, isn't it hard to get meaningful calculation results?

  12. Your Questions • How do we know which technique to use, i.e., "relevance feedback" or "hidden annotation"?

  13. Your Questions • Why did they use two distance measures (semantic distance and Euclidean distance) instead of only the semantic one?

  14. Your Questions • As I understand it, during the retrieval process they work with all objects (annotated and non-annotated), they use the semantic measure to compute the system's error, and they don't use the user's feedback. • If this is true, how can they know that this measure is correct?

  15. Your Questions • Knowledge Gain (p. 10) • Suppose we have two objects that have the same uncertainty: one is at a high probability region in the low-level feature space where many other objects’ feature vectors lie, and the other is at a very low probability region. Annotating these two objects will definitely give the system different amounts of information, which in turn leads to different retrieval performance. • How exactly does the system get different amounts of information? Can you help decipher the knowledge gain formula? Specifically, what does the probability density function look like? (Looks like it is explained more in section III.A, but I still can't wrap my head around it.)
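One way to unpack the quoted passage, sketched below: weight each object's uncertainty by the estimated probability density of its low-level feature vector, so that resolving uncertainty in a crowded region of feature space yields more knowledge gain. The Gaussian kernel density estimate here is a stand-in assumption for the paper's density model, not its actual formula.

```python
import numpy as np
from scipy.stats import gaussian_kde

def knowledge_gain(features, uncertainty):
    """features: (n_objects, n_dims) low-level feature vectors;
    uncertainty: (n_objects,) per-object uncertainty scores."""
    kde = gaussian_kde(features.T)   # estimate the feature-space density p(x)
    density = kde(features.T)        # evaluate p(x_i) at every object's feature vector
    return uncertainty * density     # uncertain AND in a dense region => high gain
```

Under this reading, two equally uncertain objects differ in knowledge gain exactly when their density values differ, which matches the quoted example of the high-probability versus low-probability regions.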

  16. Your Questions • Figure 7 (p. 23) • The figure is nearly undecipherable because of the overlapping symbols. Is there a better method for creating plots like this where the data points for several classes are so close together?

  17. Your Questions • Performance Graphs • I often see plots with performance on the y-axis and number of labeled instances on the x-axis, and there is a line for each method (active vs. random). Seeing these graphs, I often imagine a dual of the graph that might make it easier to read: performance on the y-axis and, on the x-axis, the number of additional labeled instances required by the worse model to obtain the same level of performance. Is this a good idea?
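The suggested dual can be computed directly from the two learning curves. Here is a rough sketch, assuming both curves are evaluated on the same label-count grid and are roughly monotone; the variable names are illustrative.

```python
import numpy as np

def extra_labels_needed(labels, perf_better, perf_worse):
    """For each point on the better curve, find how many additional labels the
    worse method needs to reach the same performance level."""
    gaps = []
    for n, p in zip(labels, perf_better):
        caught_up = np.where(perf_worse >= p)[0]           # first point where the worse curve catches up
        n_worse = labels[caught_up[0]] if len(caught_up) else np.inf
        gaps.append((p, n_worse - n))
    return gaps   # list of (performance level, additional labels required)
```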

  18. Your Questions • When using selective sampling, does the knowledge gain method evaluate all possible annotation candidates, or just the ones in the neighborhood?
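For comparison, a small sketch of both options: scoring every unannotated candidate versus restricting the search to a local neighborhood. The neighborhood rule used here (k nearest in feature space) is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def select_candidate(features, scores, center=None, k=None):
    """Return the index of the best candidate. When center and k are given, only
    the k candidates nearest to `center` in feature space are considered."""
    idx = np.arange(len(features))
    if center is not None and k is not None:
        dists = np.linalg.norm(features - center, axis=1)
        idx = np.argsort(dists)[:k]           # keep only the local neighborhood
    return idx[np.argmax(scores[idx])]        # otherwise every candidate is evaluated
```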

  19. Your Questions • How to deal with overfitting when applying selective sampling?

  20. Questions • ???
