Model Classification
Model by Barry Hennessy
Model Basis My model was based on the Prototype model: previously seen examples are combined into a single example that characterises the set. It seemed more plausible than the exemplar model, chiefly because it doesn't require the person to memorise every example they have ever seen. Exhaustive memorisation ignores the idea of chunking, which has always appealed to me and is a cornerstone of much of cognitive psychology.
That being said, prototype theory doesn't seem completely plausible either: there is simply too much variation among the things that make up categories for the divisions between them to be simple and strict, although this is somewhat alleviated by conjunctive classification. So I chose Prototype as the foundation of my model...
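To make the storage contrast concrete, here is a minimal sketch (not part of the actual model; the class names and the string[] representation of an example are my own assumptions): the exemplar model keeps every instance it has ever seen, while the prototype model folds each instance into one running summary per category.

using System.Collections.Generic;

// Exemplar model: memorise every training instance, category by category.
class ExemplarStore
{
    private readonly Dictionary<string, List<string[]>> examples =
        new Dictionary<string, List<string[]>>();

    public void Learn(string category, string[] attributes)
    {
        if (!examples.ContainsKey(category))
            examples[category] = new List<string[]>();
        examples[category].Add(attributes); // storage grows with every example seen
    }
}

// Prototype model: keep only per-dimension attribute counts, one summary
// per category, so storage does not grow with the number of examples.
class PrototypeStore
{
    // category -> dimension index -> attribute value -> count
    private readonly Dictionary<string, Dictionary<int, Dictionary<string, int>>> counts =
        new Dictionary<string, Dictionary<int, Dictionary<string, int>>>();

    public void Learn(string category, string[] attributes)
    {
        if (!counts.ContainsKey(category))
            counts[category] = new Dictionary<int, Dictionary<string, int>>();
        for (int dim = 0; dim < attributes.Length; dim++)
        {
            if (!counts[category].ContainsKey(dim))
                counts[category][dim] = new Dictionary<string, int>();
            var byAttr = counts[category][dim];
            int seen;
            byAttr.TryGetValue(attributes[dim], out seen);
            byAttr[attributes[dim]] = seen + 1; // fold the new example into the summary
        }
    }
}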
Irregular Conjunction • I chose not to base my model's conjunction on any of the given mathematical combination methods (average, multiplicative, etc.). • None of them seemed particularly plausible.
Conjunction • I chose instead to base my conjunction on the differences between category weights. • If a category's weight is within a certain range (the confidence parameter, confusionParam in the code) of the maximum weight, the item is classified as a conjunction; see the worked example below. • Basis: if an item is definitely of one category and probably of another, then it is more accurate, and safer, to classify it as both. • Also, I felt that one possible classification shouldn't impinge on the possibilities of others.
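As a worked example (the weights are made up for illustration): with a confidence parameter of 0.1, if the maximum-weight category "flu" scores 0.80 and "cold" scores 0.75, then |0.80 - 0.75| = 0.05 < 0.1, so the item is labelled "flu&cold"; a third category scoring 0.60 differs by 0.20 and is left out.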
Performance • My model classified every item in the test data correctly!...
Problem • My model doesn't output a weight for the conjunction category. • That leaves only 5 data points to test against. • So although the model classifies all of them correctly, I can't really say how good it is without more test data.
Implementation • I implemented my model in C#. • It was very object-oriented and quite (over)complicated.
Conjunction

// Regular prototype classification happens up here; it leaves the winning
// category in mainCategory and its weight in mainCatWeight (see next slide).
foreach (string thisCategory in allPatients.listCategories())
{
    if (mainCategory != thisCategory)
    {
        // Test this category's proximity to the winner.
        double testCatWeight = weightsByCat[thisCategory];
        if (Math.Abs(mainCatWeight - testCatWeight) < confusionParam)
        {
            // Close enough: set the result to the conjunction of both.
            category = mainCategory + "&" + thisCategory;
        }
    }
}
Prototype

// Find weights for all categories from diagnoseReturningWeights.
Dictionary<string, double> weightsByCat = diagnoseReturningWeights(pat);

// Go through each category and select the maximum.
double max = 0.0;
foreach (string thisCategory in allPatients.listCategories())
{
    double thisCategoryWeight = weightsByCat[thisCategory];
    if (thisCategoryWeight > max)
    {
        max = thisCategoryWeight;
        mainCategory = thisCategory;
    }
}

diagnoseReturningWeights basically just runs categoryWeight() for each category. categoryWeight basically just adds up the weight of the patient's attributes on each dimension for the category in question; see getWeightByDimensionAndAttrAndCategory().
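The slide only summarises these two helpers, so here is a plausible sketch of them (assuming a Patient type that exposes its dimensions and the attribute it has on each; those member names are hypothetical, and getWeightByDimensionAndAttrAndCategory() is shown on the next slide):

// Runs categoryWeight() once for each known category and collects the results.
Dictionary<string, double> diagnoseReturningWeights(Patient pat)
{
    var weightsByCat = new Dictionary<string, double>();
    foreach (string category in allPatients.listCategories())
        weightsByCat[category] = categoryWeight(pat, category);
    return weightsByCat;
}

// Adds up the weight of the patient's attribute on each dimension
// for the category in question.
double categoryWeight(Patient pat, string category)
{
    double total = 0.0;
    foreach (int dimension in pat.Dimensions)
        total += getWeightByDimensionAndAttrAndCategory(
            dimension, pat.AttributeOn(dimension), category);
    return total;
}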
getWeightByDimensionAndAttrAndCategory()

// These calls count how many times each attribute occurs in the training set,
// within this category and overall.
attrCount = allPatients.countAttributesByDimensionAndCategory(dimension, category);
allAttrCount = allPatients.countAttributesByDimension(dimension);

// These ints are the actual counts for the attribute we want.
int attributeCount = attrCount.GetValue(attribute);
int allAttributeCount = allAttrCount.GetValue(attribute);

// This is the weight for the category on the dimension.
// (Cast to double: plain integer division would always round down to 0.)
double weight = (double)attributeCount / allAttributeCount;
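As a worked example (with made-up counts): if "fever" occurs 6 times on the symptom dimension within the "flu" category and 10 times on that dimension across the whole training set, the flu weight contributed by that dimension is 6 / 10 = 0.6, and categoryWeight() sums such values over all dimensions.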