
Reverse Data Synthesis : Infinite System Learning from a Finitely Large Provided Dataset


Presentation Transcript


  1. Reverse Data Synthesis: Infinite System Learning from a Finitely Large Provided Dataset

  2. Brief Summary of the Research
  An Artificial Intelligence (AI) framework was designed that takes a finite training data set as input and uses it to synthesize an infinite data set, in which each piece of data represents a unique complexity. A novel concept, the difficulty mesh, was used, such that every piece of data synthesized exhibited a unique complexity (one which the system did not yet know how to classify). Repeated infinitely, this iteration leads to infinite learning from a finite data set, something previously perceived as impossible to achieve in AI. Tests on universally recognized data sets yielded perfect accuracy in each case and would continue to do so indefinitely, hence revolutionizing AI.

  3. Problem
  • Inaccurate Artificial Intelligence classifications
  • The more training data we provide a system, the more it learns
  • No amount of training data is ever 'enough'
  • To achieve perfect accuracy, an infinitely large, unique training data set would be required
  • Infinitely large data sets are not available

  4. Background Research
  Data synthesis today cannot achieve perfect accuracy in AI because the data synthesized:
  • Is not unique, so it does not teach the system anything new
  • Is not correctly labeled

  5. Objective
  To design an AI framework that will:
  • Take a finite training data set as input and synthesize an infinitely large training data set
  • Ensure each piece of data synthesized is correctly labeled and teaches the system something new
  • Hence achieve perfect accuracy in all AI classifications

  6. Methodology
  • Use a model that plots a relationship between the amount of processing and the level of similarity to the counter class.
  • The level of similarity tells us how to synthesize the data (its composition), while the unique levels of processing ensure that the new data is unique and correctly labeled. A sketch of this model follows below.
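The presentation gives no formula for this model, so the sketch below assumes the simplest monotone mapping from a difficulty level D to a target similarity S, consistent with the D-vs-S relation argued on slides 8-10. The function name, the range of D, and the linear form are all illustrative assumptions.

```python
# Hypothetical sketch of the D-vs-S model described above; the deck gives no
# formula, so a simple linear monotone mapping is assumed for illustration.

def target_similarity(difficulty: float, d_max: float = 10.0) -> float:
    """Map a difficulty level D in [0, d_max] to a target similarity S in [0, 1].

    Assumes the relationship argued on slides 8-10: as D rises, the
    similarity S to the counter class rises uniformly.
    """
    if not 0.0 <= difficulty <= d_max:
        raise ValueError("difficulty outside the modelled range")
    return difficulty / d_max  # linear choice; any monotone map would fit

# Example: a sample placed at difficulty 7.5 on a mesh of height 10 would be
# synthesized with 75% similarity to the provided counter-class data.
print(target_similarity(7.5))  # 0.75
```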

  7. The Difficulty Mesh
  • The difficulty mesh was used as the model for data synthesis. It represents the minimum level of difficulty (processing) that would have been required to correctly classify a piece of data.
  • Thus, data that acquire unique locations on the mesh necessarily exhibit unique complexities (complexities that the system did not previously know how to classify). One concrete reading of the mesh is sketched below.
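The presentation never defines the mesh's data structure. One purely illustrative reading is a set of difficulty levels with an occupancy record, so that each synthesis step can claim a location no earlier piece of data occupies. The class below, its name, and the saturation error (which echoes the collapse noted on slide 12) are all assumptions.

```python
# A hedged, concrete reading of the difficulty mesh: a range of difficulty
# levels plus a record of which levels are already occupied by data.

class DifficultyMesh:
    def __init__(self, levels: int = 100):
        self.levels = levels
        self.occupied: set[int] = set()   # difficulty levels already exhibited

    def claim_unique_level(self) -> int:
        """Return the lowest difficulty level that no existing data occupies."""
        for level in range(self.levels):
            if level not in self.occupied:
                self.occupied.add(level)
                return level
        # Mirrors slide 12: a fully saturated mesh no longer works.
        raise RuntimeError("mesh saturated: every modelled level is occupied")

mesh = DifficultyMesh()
print(mesh.claim_unique_level())  # 0: the first unique complexity
print(mesh.claim_unique_level())  # 1: the next one
```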

  8. Data on the Difficulty Mesh
  Consider two pieces of data on the mesh, as perceived by the system. According to the system, as the level of difficulty (D) rises, the level of similarity (S) to the counter class also rises.

  9. Possible D (Difficulty) vs S (Similarity) Graphs
  [Figures not reproduced in this transcript]
  Fig. 1: The only non-contradictory graph is graph 4, in which D and S rise uniformly; graphs 1, 2, and 3 are ruled out.
  Fig. 2: Illustration of how difficulty rises on the mesh.

  10. Conclusion of the D vs S Relation
  • Thus, the angle of inclination at a point on the difficulty mesh indicates the level of similarity that the data must exhibit in order to acquire that location on the mesh (a numerical sketch follows below). But...
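As a numerical illustration of reading a similarity level off the inclination, the sketch below treats the mesh as a difficulty surface D(x, y) sampled on a grid and normalises the local slope angle into [0, 1]. The grid representation, the finite-difference estimate, and the normalisation are all assumptions, since the presentation leaves the mesh's geometry unspecified.

```python
# A minimal numerical sketch, assuming the mesh is a difficulty surface
# D(x, y) sampled on a regular grid.
import numpy as np

def inclination_to_similarity(mesh: np.ndarray, i: int, j: int) -> float:
    """Return a similarity level in [0, 1] from the mesh's local slope.

    The slope at grid cell (i, j) is estimated with finite differences and
    its inclination angle is normalised by the steepest possible angle, pi/2.
    """
    gy, gx = np.gradient(mesh)            # finite-difference slopes per axis
    slope = np.hypot(gx[i, j], gy[i, j])  # magnitude of the local gradient
    angle = np.arctan(slope)              # angle of inclination at (i, j)
    return angle / (np.pi / 2)            # map [0, pi/2] onto [0, 1]

# Example: a toy 5x5 mesh whose difficulty rises linearly along one axis.
mesh = np.tile(np.linspace(0.0, 4.0, 5), (5, 1))
print(inclination_to_similarity(mesh, 2, 2))  # 0.5 for a 45-degree slope
```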

  11. But, the level of similarity to which reference data set? That is, the level of similarity to which state of the counter class?

  12. The Non-Contradictory Data Set
  • The key insight is that, unlike in conventional systems, the angle of inclination does not indicate similarity to a fully saturated (complete) counter class.
  • Instead, it indicates similarity to the existing, provided data set of the counter class, since the difficulty mesh collapses when fully saturated with data: the system would then fall back on direct data matching, nullifying the amount of processing. The sketch after this list illustrates the synthesis step.
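A hedged sketch of the synthesis step this slide implies: a new sample is blended toward a reference drawn from the provided counter-class set, never from a hypothetical fully saturated one, with the blend weight set to the target similarity. Feature-vector data and linear interpolation are assumptions; the presentation specifies neither.

```python
# Sketch of synthesis against the provided (non-saturated) counter-class set.
import numpy as np

def synthesize(seed: np.ndarray,
               counter_class: np.ndarray,
               similarity: float,
               rng: np.random.Generator) -> np.ndarray:
    """Blend a seed sample toward a reference from the provided counter-class
    data set, with blend weight equal to the target similarity in [0, 1]."""
    reference = counter_class[rng.integers(len(counter_class))]
    return (1.0 - similarity) * seed + similarity * reference

# Example: a 75%-similarity blend of a seed toward the counter class.
rng = np.random.default_rng(0)
seed = np.array([1.0, 0.0])
counter = np.array([[0.0, 1.0], [0.2, 0.9]])
print(synthesize(seed, counter, 0.75, rng))
```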

  13. Characteristics of the Synthesized Data
  • It exhibits a unique complexity, since every piece of data on the mesh represents a unique complexity
  • It is correctly labelled, since correctly classifying it would have required exactly that level of difficulty
  • The iteration can be repeated an infinite number of times, achieving infinite system learning from a finite data set; the full loop is sketched below
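Putting the pieces together, the sketch below shows what the repeated iteration could look like, reusing the hypothetical target_similarity and synthesize helpers from the earlier sketches and a generic scikit-learn-style classifier. The presentation claims the loop can run forever; `rounds` bounds it here for practicality, and a two-class feature-vector setting is assumed throughout.

```python
# Schematic of the overall iteration; depends on the target_similarity and
# synthesize sketches above, both of which are illustrative assumptions.
import numpy as np

def infinite_learning_loop(model, X, y, rounds: int = 1000):
    """Grow the training set by one synthesized sample per round and retrain.

    `model` is any scikit-learn-style classifier; `X` (features) and `y`
    (labels) are NumPy arrays assumed to cover two classes.
    """
    rng = np.random.default_rng(0)
    for _ in range(rounds):
        idx = rng.integers(len(X))
        seed, label = X[idx], y[idx]
        counter = X[y != label]              # only the provided counter-class data
        difficulty = rng.uniform(0.0, 10.0)  # stand-in for a fresh mesh location
        s = target_similarity(difficulty)    # D -> S, as modelled earlier
        X = np.vstack([X, synthesize(seed, counter, s, rng)])
        y = np.append(y, label)              # label inherited from the seed class
        model.fit(X, y)                      # relearn on the grown data set
    return model
```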

  14. Contradictory Analysis of the Synthesized Data
  • "The system chose a reference data set which is otherwise used to design data of the same complexity. In spite of that, it ended up synthesizing data of a unique complexity."
  • This is justified because it is entirely possible for a random arrangement (noise) to assemble a unique complexity unknowingly. That is exactly what has happened in this case, and so it cannot be contradicted.

  15. Testing Results
  [While we provided the system with only 100 training samples in each case, it used the data synthesis procedure described above to generate thousands of pieces of unique data to train itself.]
  We know that the system will go on to achieve this perfect accuracy in all AI classifications, even when tested against infinite data sets, since the system learns infinitely. An illustrative run is sketched below.
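For completeness, a usage example mirroring the reported setup is sketched below: 100 provided samples, thousands synthesized. The data set, the classifier, and the round count are stand-ins (the presentation never names the classifiers it was tested with), and this sketch makes no claim of reproducing the reported perfect accuracy.

```python
# Illustrative run only, mirroring the reported 100-sample setup; depends on
# the infinite_learning_loop sketch above. LogisticRegression is a stand-in.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, random_state=0)
model = infinite_learning_loop(LogisticRegression(max_iter=1000), X, y,
                               rounds=2000)  # "thousands of pieces of unique data"
print(model.score(X, y))  # accuracy on the originally provided samples
```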

  16. Conclusion
  • In each of the tasks, the system correctly classified all test data provided to it, in contrast to the otherwise imperfect accuracies of these algorithms under other frameworks.
  • The testing results demonstrate achievement of the desired objectives, since the system was able to correctly classify any data whose classification features were exhibited in the data itself.
  • The system will go on to achieve this perfect accuracy in all AI classifications, even when tested against infinite data sets, since the system learns infinitely.
  • Thus, it marks a revolution in the field of AI.
