
Master thesis: Using Machine Learning Methods for Evaluating the Performance



Presentation Transcript


  1. PERFORMANCE EVALUATION METRICS FOR MACHINE-LEARNING BASED DISSERTATION An Academic presentation by Dr. Nancy Agnes, Head, Technical Operations, Tutors India Group www.tutorsindia.com Email: info@tutorsindia.com

  2. Today's Discussion OUTLINE Abstract Introduction Evaluation of Machine Learning Performance measures of ML Bayesian Inference Recommended Algorithms Future Topics Conclusion

  3. Abstract The evaluation metric plays an important role in obtaining the best possible classifier during classification training. Thus, choosing an appropriate evaluation metric is essential for obtaining a selective and optimal classifier. The evaluation metrics that are specifically used as discriminators for optimizing a classifier have been reviewed systematically. In general, many classifiers use accuracy as the measure for selecting the optimal solution during classification evaluation. Contd...

  4. Thus, the evaluation metric is the measuring device that quantifies the performance of a classifier. Different metrics are used to evaluate different characteristics of the classifier induced by the classification method. Contd...

  5. Introduction Performance evaluation is an important aspect of the machine learning process. The right choice of performance metric is one of the most significant issues in evaluating performance; it is also a complex task, and should therefore be performed cautiously for the machine learning application to be reliable. Accuracy is used to assess the predictive capability of a model on the testing samples. Contd...

  6. Machine learning and data mining are the fields that make the heaviest use of this metric. Another metric used in pattern recognition and machine learning is the ROC curve. Thus, many performance metrics have been developed for assessing the performance of ML algorithms. [1]

  7. Evaluation of Machine Learning Classification tasks are usually evaluated by dividing the data set into a training set and a testing set. The machine learning method is trained on the first set, while the testing set is used to calculate performance indicators that assess the quality of the algorithm. A common issue for ML algorithms is the limited amount of training and testing data available, so overfitting can be a serious problem when assessing these programs. A common method to tackle this problem is to employ X-fold cross-validation. Contd...

  8. Cross-validation divides the entire data set into X parts and employs each part in turn as the test set while merging the remaining parts into the training set. The performance indicators are then averaged over all validation runs. There is no ideal performance indicator for every topic that concerns the evaluation of machine learning algorithms, since every method has its own flaws and advantages. [3] Contd...
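The X-fold procedure described above can be sketched in plain Python. This is a minimal illustration, not the method used in the thesis: the trivial "majority class" model and the accuracy score are illustrative assumptions, standing in for a real learner and metric.

```python
from statistics import mean

def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) for each of the k folds."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        stop = (i + 1) * fold_size if i < k - 1 else n_samples
        test = indices[i * fold_size:stop]
        train = indices[:i * fold_size] + indices[stop:]
        yield train, test

def cross_validate(X, y, k=5):
    """Train on k-1 folds, test on the held-out fold, average the scores."""
    scores = []
    for train_idx, test_idx in k_fold_splits(len(y), k):
        # "Training": this toy model just memorises the majority label.
        train_labels = [y[i] for i in train_idx]
        majority = max(set(train_labels), key=train_labels.count)
        # "Testing": accuracy of the majority prediction on the held-out fold.
        correct = sum(1 for i in test_idx if y[i] == majority)
        scores.append(correct / len(test_idx))
    # Performance indicator averaged over all validation runs.
    return mean(scores)
```

In practice the data should be shuffled (and often stratified by class) before splitting; a library such as scikit-learn provides hardened versions of this loop.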

  9. Image source: Evaluating Learning Algorithms [8]

  10. Performance measures of ML A. CONFUSION MATRIX This metric makes it easy to measure the performance of a classification problem, where the output can be of two or more classes. A confusion matrix is a table with two dimensions, "Actual" and "Predicted", whose cells count the "True Positives (TP)", "True Negatives (TN)", "False Positives (FP)", and "False Negatives (FN)". Contd...
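For the binary case, the four cells of the confusion matrix can be counted directly. A minimal sketch (the function name and label encoding are illustrative assumptions):

```python
def confusion_matrix(actual, predicted, positive=1):
    """Count TP, TN, FP, FN for a binary classification problem."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    tn = sum(1 for a, p in zip(actual, predicted) if a != positive and p != positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn}
```

Every metric discussed on the following slides (accuracy, precision, recall, the F measure) can be computed from these four counts.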

  11. Contd...

  12. B. ACCURACY Accuracy measures the fraction of predictions the model gets right: Accuracy = Correct Predictions / Total Predictions. It is the simplest performance metric. Contd...
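The formula above translates directly into code; equivalently, in confusion-matrix terms it is (TP + TN) / (TP + TN + FP + FN). A minimal sketch:

```python
def accuracy(actual, predicted):
    """Fraction of predictions that match the actual labels."""
    correct = sum(1 for a, p in zip(actual, predicted) if a == p)
    return correct / len(actual)
```

Note that accuracy can be misleading on imbalanced data: a classifier that always predicts the majority class of a 95/5 split scores 0.95 while never detecting the minority class.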

  13. C. PRECISION & RECALL Precision is the ratio of True Positives (TP) to the total number of positive predictions. Recall is the True Positive Rate: the fraction of actual positive points that are predicted positive. The harmonic mean of precision and recall is termed the F measure. Contd...
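These three definitions can be sketched from the confusion-matrix counts (TP, FP, FN); the helper names are illustrative assumptions:

```python
def precision(tp, fp):
    """Of all positive predictions, how many were actually positive."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of all actual positives, how many the model found (True Positive Rate)."""
    return tp / (tp + fn)

def f1_score(p, r):
    """F measure: the harmonic mean of precision p and recall r."""
    return 2 * p * r / (p + r)
```

The harmonic mean is used (rather than the arithmetic mean) so that the F measure is high only when precision and recall are *both* high; a single near-zero component drags the score toward zero.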

  14. D. ROC & AUC ROC is a plot of the True Positive Rate against the False Positive Rate, estimated by taking several threshold values of the probability scores from the reverse-sorted list given by a model. AUC, the area under this curve, summarizes the plot in a single number between 0 and 1.

  15. Bayesian Inference Recent developments in machine learning have led many IT professionals to focus mainly on accelerating the associated workloads. However, in the case of unsupervised learning with a limited or unlabelled data set, the Bayesian method often works better than standard machine learning: it can incorporate informative priors and yields interpretable approaches. The Bayesian inference model has become popular and widely accepted over the years, as it is a strong complement to machine learning. Contd...

  16. Some recent revolutionary research in machine learning adopts Bayesian techniques such as Bayesian neural networks (BNNs), generative adversarial networks (GANs), and variational autoencoders (VAEs).

  17. Recommended Algorithms Visual assessment indicated that naive Bayes was the most successful algorithm for evaluating programming performance. Detailed statistical analyses were carried out to find out whether there were any considerable differences between the estimated accuracies of the algorithms. This matters because the involved parties may prefer a particular algorithm and must know whether using it would result in a significantly lower performance evaluation. Contd...

  18. The analysis identified that, among all the ML algorithms, naive Bayes had the comparably best performance evaluation and thus could be used to assess the performance of an ML dissertation. Naive Bayes has been recommended as the best choice for predicting program performance. [5]
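To make the recommendation concrete, here is a minimal sketch of a naive Bayes classifier over binary features. This is a generic textbook version, not the exact model from the cited study; the Bernoulli feature encoding and Laplace smoothing parameter are illustrative assumptions.

```python
from math import log

def train_naive_bayes(X, y, alpha=1.0):
    """Estimate log class priors and Laplace-smoothed Bernoulli likelihoods."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = log(len(rows) / len(y))
        # P(feature_j = 1 | class c), with add-alpha smoothing to avoid log(0)
        likelihood = [
            (sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
            for j in range(len(X[0]))
        ]
        model[c] = (prior, likelihood)
    return model

def predict(model, x):
    """Pick the class with the highest posterior log-probability,
    treating features as conditionally independent (the 'naive' step)."""
    def score(c):
        prior, likelihood = model[c]
        return prior + sum(
            log(p) if xi == 1 else log(1 - p) for xi, p in zip(x, likelihood)
        )
    return max(model, key=score)
```

Working in log-space avoids numerical underflow when many feature likelihoods are multiplied together.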

  19. Future Topics 1. EVALUATING AND MODIFYING PERFORMANCE MEASUREMENT SYSTEMS. Performance measurement has become an emerging field during the last decades. Organizations have many motives for using performance measures, but the most crucial one is that they increase productivity when utilized properly. 2. PERFORMANCE ENHANCEMENT. A technique to support performance enhancement in industrial operations. Contd...

  20. The main aim of this research is to build and assess a method that supports performance enhancement in industrial operations. This is performed through many case studies and literature research. The outcome is a systematically evaluated method for performance improvement. 3. DETERMINING AND PRIORITIZING PERFORMANCE MEASURES OF THE SUPPLY CHAIN. The main aim is to decrease costs and boost the profitability of organizations so that they thrive in a competitive market. Contd...

  21. 4. A CURRENT STATE ANALYSIS TECHNIQUE FOR PERFORMANCE MEASUREMENT METHODS. Many organizations use performance measurement (PM) methods to support operational and strategic management processes. This is chiefly important as it leads to modifications in organization strategy and PM systems. 5. DYNAMIC PERFORMANCE MEASUREMENT METHODS: A FRAMEWORK FOR ORGANIZATIONS. Strategies are dynamic by nature, while current measurement systems are predictable and static. Merging dynamic strategies with static measurement methods is therefore difficult and has created issues for organizations as the strategic framework changes.

  22. Conclusion The most proficient way to improve the evaluation performance of an emerging workload is to make use of existing systems. Another important line of research is generic Bayesian frameworks for GPUs. As of now, Bayesian inference is considered the best combination of algorithm and hardware platform for performance evaluation. Performance evaluation aims to approximate the generalization accuracy of a model on future unknown data. Contd...

  23. In future research, work can be carried out to improve the evaluation metrics even further. It would be useful to test those metrics on various machine learning cloud services, to assess the services, to check how easy the metrics are to use, and to see what type of data can be obtained from them. Research should also be carried out to build a framework that helps prioritize the metrics and identifies a set of conditions for combining results from various metrics. [6] Contd...

  24. CONTACT US UNITED KINGDOM +44-1143520021 INDIA +91-4448137070 EMAIL info@tutorsindia.com
