Presentation- Week 9




  1. Presentation- Week 9 Maya Shoham

  2. Baselines • ETH-80 database • 30 train, 50 test • 70.25% accuracy • The paper that used the ETH-80 database used all 400 pictures for training + testing and reported a best classification rate of 83%. • Caltech101 database • 30 train, 1-50 test • Lambda = 20, iterations = 5000, 20% accuracy • Possibly not the optimal lambda, but the code takes about 9 hrs to run, so it's hard to check multiple lambdas.
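One way to sidestep repeated 9-hour runs when tuning lambda is a coarse grid search on a small held-out split. The sketch below is illustrative only: `train_softmax` and `accuracy` are hypothetical stand-ins for the project's softmax logistic regression code, run here on tiny synthetic data.

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax with max-subtraction for numerical stability."""
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def train_softmax(X, y, lam, iters=200, lr=0.1):
    """Gradient-descent softmax (multinomial logistic) regression with
    L2 penalty `lam`. A stand-in for the project's 5000-iteration code."""
    n, d = X.shape
    k = int(y.max()) + 1
    W = np.zeros((d, k))
    Y = np.eye(k)[y]                      # one-hot labels
    for _ in range(iters):
        P = softmax(X @ W)
        W -= lr * (X.T @ (P - Y) / n + lam * W)
    return W

def accuracy(W, X, y):
    return float((np.argmax(X @ W, axis=1) == y).mean())

# Coarse lambda grid evaluated on a held-out split of synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = (X[:, 0] > 0).astype(int)             # separable by feature 0
Xtr, ytr, Xva, yva = X[:200], y[:200], X[200:], y[200:]
best_acc, best_lam = max(
    (accuracy(train_softmax(Xtr, ytr, lam), Xva, yva), lam)
    for lam in (0.01, 0.1, 1.0, 10.0, 20.0))
```

Running the grid on a subset of the data (or fewer iterations) gives a rough ranking of lambdas cheaply before committing to one full run.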

  3. Other Changes • Split the feature vectors into training and testing sets before the kernel matrix is generated. • This allows for greater flexibility in generating different kernel matrices. • Sped up the softmax logistic regression.
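The split-before-kernel change might be sketched as below. Names and the linear kernel are illustrative assumptions, not the project's code; the point is that once the split happens first, any kernel can be dropped into the two marked lines.

```python
import numpy as np

def split_then_kernel(features, labels, n_train, seed=None):
    """Split feature vectors into train/test BEFORE building kernels.

    `features` is (n_features, n_images), matching the 4200 x #images
    layout from the slides. Hypothetical helper for illustration.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(features.shape[1])
    train_idx, test_idx = order[:n_train], order[n_train:]

    X_train = features[:, train_idx]
    X_test = features[:, test_idx]

    # Linear kernels as placeholders; any kernel function can be
    # swapped in here -- the flexibility the early split provides.
    K_train = X_train.T @ X_train          # (n_train, n_train)
    K_test = X_test.T @ X_train            # (n_test, n_train)
    return K_train, K_test, labels[train_idx], labels[test_idx]
```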

  4. Training the Level Weights • Feature vectors form a 4200 x (number of images) matrix. • 4200 corresponds to the 21 bins x 200 clusters. • The kernel matrix used to train the level weights is 4200 x 4200.
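With the feature vectors stored as a 4200 x #images matrix, the 4200 x 4200 kernel over feature dimensions can be formed directly. A linear kernel is used below as a placeholder for whatever kernel the actual code generates, and `bin_of` makes the 21-bins-of-200-clusters layout explicit; both names are assumptions for illustration.

```python
import numpy as np

N_BINS, N_CLUSTERS = 21, 200          # 21 pyramid bins x 200 clusters
N_FEATURES = N_BINS * N_CLUSTERS      # = 4200

def feature_kernel(X):
    """4200 x 4200 Gram matrix over feature dimensions.

    X is (4200, n_images), so each row is one (bin, cluster) feature
    across all images; X @ X.T compares features to features.
    Linear kernel as a placeholder.
    """
    assert X.shape[0] == N_FEATURES
    return X @ X.T

def bin_of(feature_index):
    """Map a row index of the 4200-d vector to its pyramid bin (0-20)."""
    return feature_index // N_CLUSTERS
```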

  5. What's Next? • Should the level weights be weighted by feature (4200 weights) or by bin (21 weights)? • How do we label the training kernel for learning the level weights? • Write code to convert the 4200 x 4200 kernel matrix into a #images x #images kernel matrix, so that we can alternately optimize the level weights and the other weights.
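The conversion step might look like the sketch below, assuming the 21-weight (per-bin) option and a linear per-bin kernel; `weighted_image_kernel` is a hypothetical helper, not the original code.

```python
import numpy as np

N_BINS, N_CLUSTERS = 21, 200

def weighted_image_kernel(X, bin_weights):
    """Collapse the 4200-d features into a #images x #images kernel,
    weighting each of the 21 pyramid bins by its learned level weight.

    X is (4200, n_images) with features grouped as 21 bins of 200
    clusters; `bin_weights` holds one weight per bin. Hypothetical
    helper with a linear per-bin kernel.
    """
    n_images = X.shape[1]
    K = np.zeros((n_images, n_images))
    for b in range(N_BINS):
        block = X[b * N_CLUSTERS:(b + 1) * N_CLUSTERS, :]
        K += bin_weights[b] * (block.T @ block)   # per-bin image kernel
    return K
```

As a sanity check, setting all 21 weights to 1 reduces this to the plain linear kernel X.T @ X; alternating between fitting `bin_weights` and refitting the classifier on the resulting image-level kernel is one way to realize the alternation described in the slide.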
