ATML on LM-STAR®
Alicia Helton 407-306-1592 Alicia.Helton@lmco.com
Michelle Harris 407-306-6693 Michelle.L.Harris@lmco.com
Steven O'Donnell 407-306-4325 Steven.J.O'Donnell@lmco.com
Introduction
• Implemented a set of ATML schemas on LM-STAR®
• Schemas used:
  • TestDescription ML (draft 5.0)
  • TestResults ML (version 0.15)
  • Diagnostic ML
  • Bayes Model
  • Common Element Model (CEM)
  • Dynamic Context Model (DCM, version 0.07)
Task Definition
• Convert a legacy CASS ATLAS TPS into an ATML TestDescription.
• Use the TestDescription as input to the SELEX TPS Wizard™ and generate TestStand™ sequences.
• Execute the TestStand™ sequences on LM-STAR®.
• Collect measured values using ATML TestResults.
• Interface with a diagnostic reasoner to isolate faults more quickly and more accurately.
Initial Approach
• Use an externally developed tool to convert the ATLAS to intermediate XML.
• Use XML tools to transform the intermediate XML into TestDescription.
• TestDescription provides the "what to do" information for the TPS.
• Use the TPS Wizard™ to generate TestStand™ sequence files capable of running on LM-STAR®.
Flow: ATLAS → Intermediate XML → ATML TestDescription → TestStand Sequence Files
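The "XML tools" step in the flow above can be pictured as an XSLT transform driven from a small script. The sketch below is illustrative only; the stylesheet, file names, and element mapping are assumptions rather than the actual tooling used on this task.

    # Hedged sketch of the intermediate-XML-to-TestDescription transform step.
    # File names and the stylesheet are placeholders, not the real project assets.
    from lxml import etree

    def transform_intermediate(intermediate_path, stylesheet_path, out_path):
        """Apply an XSLT that maps the flat intermediate XML onto TestDescription elements."""
        doc = etree.parse(intermediate_path)
        xslt = etree.XSLT(etree.parse(stylesheet_path))
        result = xslt(doc)
        result.write(out_path, pretty_print=True, xml_declaration=True, encoding="utf-8")

    transform_intermediate("intermediate.xml", "intermediate_to_td.xsl", "test_description.xml")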
Issues
• The legacy ATLAS TPS was not designed to maximize portability.
• The intermediate XML generated from the ATLAS was very flat, making it difficult to understand the test flow and translate it into TestDescription.
• The legacy ATLAS TPS did not adhere to a style guide that would have enforced specific design rules.
• Multiple fault-callout permutations, based on data evaluations made without test numbers, created problems in developing the diagnostic model.
Revised Approach
• An application was developed to extract the "what to do" information from the ATLAS and save it to a spreadsheet.
• Human intervention verified the information and added missing values.
• A second application was written to convert the spreadsheet into TestDescription.
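As a concrete illustration of that second application, a spreadsheet-to-TestDescription converter can be as simple as the sketch below. The column names ("TestNumber", "Description", "NextStep") and the element layout are assumptions for illustration, not the actual spreadsheet format or the full TestDescription schema.

    # Minimal sketch, assuming a CSV export of the verified spreadsheet.
    import csv
    import xml.etree.ElementTree as ET

    def spreadsheet_to_test_description(csv_path, out_path):
        root = ET.Element("TestDescription")
        for row in csv.DictReader(open(csv_path, newline="")):
            step = ET.SubElement(root, "Step")
            step.set("ID", "Step_" + row["TestNumber"])       # hypothetical column
            step.set("name", row["Description"])              # hypothetical column
            if row.get("NextStep"):
                result = ET.SubElement(step, "Result")
                ET.SubElement(result, "NextStep").set("stepId", "Step_" + row["NextStep"])
        ET.ElementTree(root).write(out_path, encoding="utf-8", xml_declaration=True)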
TestDescription Sample

    <Outcomes>
      <Outcome ID="0_1" value="Passed"/>
      <Outcome ID="0_2" value="Failed"/>
      <Outcome ID="DIAGN1" value="Failed">
        <ReplaceComponents>
          <ReplaceComponent uutComponentId="UUT-0"/>
        </ReplaceComponents>
      </Outcome>
      <Outcome ID="DIAGN1" value="Failed">
        <ReplaceComponents>
          <ReplaceComponent uutComponentId="UUT-1"/>
        </ReplaceComponents>
      </Outcome>
    ------snipped----------
    <Step xsi:type="Step_Test" ID="Step_2" testId="2000">
      <Results>
        <Result xsi:type="Result_Test" testOutcomeId="2000A">
          <NextStep stepId="Step_3"/> <!-- 2020 -->
        </Result>

• Using the information from TestDescription, the Selex TPS Wizard™ builds the frame of the new TPS with initialized variables, test criteria, simulation mode, pre- and post-conditions, and calls to "how-to" sequences.
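To show how a sample like the one above drives sequence generation, the sketch below walks the Step/Result/NextStep links to recover the test order. It is a simplified stand-in for what the TPS Wizard™ does internally, and it ignores XML namespaces and branching on multiple outcomes.

    # Sketch: derive a linear test flow from TestDescription step links.
    import xml.etree.ElementTree as ET

    def walk_test_flow(td_path, start_step="Step_1"):
        root = ET.parse(td_path).getroot()
        steps = {s.get("ID"): s for s in root.iter("Step")}
        order, current = [], start_step
        while current in steps and current not in order:   # stop on loops
            order.append(current)
            nxt = steps[current].find(".//NextStep")
            current = nxt.get("stepId") if nxt is not None else None
        return order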
TestDescription to LM-STAR®
• Needed to create the "how-to" TestStand™ sequences, a highly intensive manual task.
• The task was simplified through the use of Custom Steps, a graphical interface to the LM-STAR® system software.
Diagnostic Model
• Description
  • The model is based on the Bayesian and Common Element Models from the AI-ESTATE standard.
  • It is stored in an XML format derived from the AI-ESTATE models.
• Development of the model
  • Start with the fault tree of the TPS.
  • Use historical test results and maintenance data to add more intelligence to the model.
  • Learning algorithms continuously feed back newly discovered test results (in TestResults ML format) and maintenance data.
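The Bayesian idea behind the model can be illustrated with a toy single-fault update: each new test result re-weights the candidate components. The priors and detection probabilities below are made-up placeholders, not values from the fielded model, and the real model uses the AI-ESTATE Bayesian Model structure rather than this simplification.

    # Toy Bayes update over candidate components; numbers are illustrative only.
    def update_suspects(priors, detection, test_id, outcome):
        """priors: {component: P(faulty)}
           detection: {(test_id, component): P(test fails | that component is faulty)}"""
        posteriors = {}
        for comp, prior in priors.items():
            p_fail = detection.get((test_id, comp), 0.05)    # small default leak
            likelihood = p_fail if outcome == "Failed" else 1.0 - p_fail
            posteriors[comp] = prior * likelihood
        total = sum(posteriors.values()) or 1.0
        return {comp: p / total for comp, p in posteriors.items()}

    priors = {"UUT-0": 0.5, "UUT-1": 0.5}
    detection = {("2000", "UUT-0"): 0.9, ("2000", "UUT-1"): 0.2}
    print(update_suspects(priors, detection, "2000", "Failed"))  # UUT-0 becomes the leading suspect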
Diagnostic Reasoner
• Provides the run-time environment for using the diagnostic models.
• Implements the AI-ESTATE interface to the diagnostic models.
• Uses the Dynamic Context Model to track session information
  • Allows back-tracking through a session
  • Allows a session to be restarted from a previous stopping point
• Provides a set of "higher-order" interface functions to minimize the calls required to access model/reasoner data.
• Web-service-based interface (using WSDL).
• Utilizes a Bayesian network analyzer called SMILE, from the Decision Systems Laboratory at the University of Pittsburgh.
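The session-tracking behavior can be sketched independently of the AI-ESTATE schemas: observations are appended to a history that supports back-tracking and restart. This is a simplified stand-in for the Dynamic Context Model, not its actual structure or the reasoner's web-service interface.

    # Sketch of session tracking with back-track and restart support.
    class DiagnosticSession:
        def __init__(self):
            self.history = []                  # list of (test_id, outcome) pairs

        def record(self, test_id, outcome):
            self.history.append((test_id, outcome))

        def back_track(self, steps=1):
            """Undo the most recent observations."""
            for _ in range(min(steps, len(self.history))):
                self.history.pop()

        def save_point(self):
            return list(self.history)          # snapshot for a later restart

        def restart(self, snapshot):
            self.history = list(snapshot)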
Lessons Learned
• Our current process is still heavily dependent on manual intervention, which is very time-consuming.
• Current legacy TPSs are implemented with tight coupling, making it difficult to separate the "what" and "how" information.
• Other ATML schemas, such as UUT Description and TestAdapter, could aid the porting process, but they were not mature enough at the time the task started.
• It would be more cost effective to implement UUT test requirements on new systems than to re-host the application.
• There is not always a one-to-one test mapping from the TPS to the diagnostic model.
Conclusion
• Industry needs tools that can generate and consume ATML and export it to C, ATLAS, etc.
• Using IEEE-1641 for signal and test definition appears promising; further study by Lockheed Martin is planned.
• Lockheed Martin is embracing ATML.
  • TestResults ML is deployed on LM-STAR® systems supporting the JSF program.
  • As ATML matures, Lockheed Martin is prepared to implement this technology in our legacy and future programs.
Questions?
Alicia Helton 407-306-1592 Alicia.Helton@lmco.com
Michelle Harris 407-306-6693 Michelle.L.Harris@lmco.com
Steven O'Donnell 407-306-4325 Steven.J.O'Donnell@lmco.com