
Assured Autonomy: Strategies for Reliable Machine Learning Algorithms

Explore the requirements for robust and efficient ML algorithms in autonomous operations, including real-time execution models, interpretability methods, and robustness under adversarial settings.



Presentation Transcript


1. Assured Autonomy: Thoughts and Solution Directions
Saurabh Bagchi
School of Electrical and Computer Engineering
Department of Computer Science
Purdue University
Presentation at: engineering.purdue.edu/dcsl

2. Scenario
• Battlefields of the future will involve autonomous operations
  • Among multiple cyber, physical, and kinetic assets
  • Interactions with humans
• Such autonomous operations will rely on machine learning (ML) algorithms
  • Executing on a distributed set of execution platforms
  • Training is done offline as well as incrementally in the field
• Both training and inference must be robust to noisy data
  • During training and during runtime operation
  • Noise may be a natural artifact of the hazardous environment or may be maliciously and strategically introduced
• Autonomous algorithms must interface well with humans who may need to act on their decisions
  • Decisions must be interpretable and explainable
  • At tactical level in real time and at strategic level offline

3. Requirements from Assured Autonomy
• Requirements apply to the ML algorithms and their practical distributed instantiations
• They must be able to tolerate varying levels of noise
• They must provide explanations, either post-hoc about their decisions or in response to querying of the algorithms
• The inferencing part of the algorithms must be capable of real-time results, even under uncertain and contested environments
• They must be able to operate both with batch-mode training and incremental training
• They must provide (probabilistic) guarantees on their performance, with respect to accuracy and latency
  • These guarantees must hold under adversarial attacks, for a bounded set of adversarial actions
• They must be able to ingest heterogeneous sources of data, with different trust levels and different characteristics (streaming, intermittent, etc.)

4. Solution Directions (1 of 3)
• Real-time distributed ML algorithms
  • Scalable ML execution models on distributed and opportunistic platforms (client, edge, private and public cloud)
  • Algorithm adapts to available resources in benign and in adversarial situations
  • Performs data-parallel decomposition so that distributed software pieces can work on disjoint data elements (online training or inferencing)
  • Flexibly trades off the freshness of results to allow progress under adversarial or contested situations
  • We infer the delay bounds for each phase of the pipeline of autonomous operations and navigate to the right part of the accuracy-latency tradeoff (see the sketch after this slide)
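
To make the accuracy-latency tradeoff concrete, here is a minimal Python sketch of how an execution model might pick the most accurate model variant whose measured latency fits within the delay bound inferred for the current pipeline phase. The variant names, accuracies, and latency figures are hypothetical placeholders, not measurements from our system.

# Minimal sketch: choosing an operating point on the accuracy-latency
# tradeoff given a per-phase delay bound (hypothetical numbers).

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelVariant:
    name: str
    accuracy: float      # validation accuracy of this variant (0..1)
    latency_ms: float    # measured inference latency on the target platform

# Hypothetical variants, e.g., progressively compressed versions of one model.
VARIANTS = [
    ModelVariant("full",         accuracy=0.94, latency_ms=120.0),
    ModelVariant("pruned-50%",   accuracy=0.91, latency_ms=60.0),
    ModelVariant("quantized-8b", accuracy=0.88, latency_ms=25.0),
    ModelVariant("distilled",    accuracy=0.83, latency_ms=10.0),
]

def pick_variant(delay_bound_ms: float,
                 variants: List[ModelVariant]) -> Optional[ModelVariant]:
    """Return the most accurate variant that meets the delay bound,
    or None if even the fastest variant is too slow (degraded mode)."""
    feasible = [v for v in variants if v.latency_ms <= delay_bound_ms]
    if not feasible:
        return None
    return max(feasible, key=lambda v: v.accuracy)

if __name__ == "__main__":
    # In a contested situation the delay bound for a phase shrinks,
    # so the selector slides down the accuracy-latency curve.
    for bound in (150.0, 50.0, 5.0):
        choice = pick_variant(bound, VARIANTS)
        print(bound, "ms ->", choice.name if choice else "no feasible variant")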

5. Solution Directions (2 of 3)
• Interpretable ML algorithms
  • Currently a tension between richness and expressivity of a model versus interpretability of model results
  • Ex: Linear SVM → interpretable, versus a deep neural network → not interpretable but can handle a wider variety of function representations
  • [Work with collaborator David Inouye] We are developing interpretability methods via two main approaches: (a) intrinsic and (b) post-hoc
  • Intrinsic interpretability: The structure of the model is directly interpretable, e.g., a linear SVM
  • Post-hoc interpretability: The model is interpreted after training, e.g., estimating the most important features of a deep neural network (see the sketch after this slide)
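
A minimal sketch of the intrinsic vs. post-hoc distinction, assuming scikit-learn is available: the weights of a linear SVM are read off directly (intrinsic), while a neural network is interpreted after training via permutation importance, which is used here only as a generic post-hoc example rather than the specific method developed in this work.

# Minimal sketch: intrinsic vs. post-hoc interpretability on a toy task.

from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)

# Intrinsic: a linear SVM's learned weights are directly interpretable
# as per-feature contributions to the decision function.
svm = LinearSVC(dual=False).fit(X, y)
print("linear SVM weights:", svm.coef_.ravel().round(3))

# Post-hoc: a neural network is interpreted after training, here by
# permutation importance (one generic post-hoc technique).
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X, y)
result = permutation_importance(mlp, X, y, n_repeats=10, random_state=0)
print("estimated feature importances:", result.importances_mean.round(3))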

6. Solution Directions (2 of 3)
• Interpretable ML algorithms (continued)
  • Developed a post-hoc interpretability method based on model counterfactuals
  • Ex: Given a factual target point, what would the model predict if one or several features were altered? (see the toy sketch after this slide)
  • This natural post-hoc interpretability method could be combined with intrinsically interpretable generalized additive models
    • These assume most features are independent (as in linear models)
    • But allow the univariate feature functions to be arbitrarily complex (unlike in linear models)
  • The counterfactual post-hoc method could also be combined with our novel approach to deep models that builds them up from intrinsically interpretable models as components (e.g., logistic regression and principal component analysis)
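
The following toy sketch conveys the flavor of a counterfactual query, assuming scikit-learn: starting from a factual point, nudge one feature at a time until the model's prediction flips, and report the cheapest such change. This is a simplified greedy illustration, not the counterfactual method developed in this work.

# Toy sketch: single-feature counterfactual query against a trained
# scikit-learn classifier (greedy search; illustrative only).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=4, random_state=1)
clf = LogisticRegression().fit(X, y)

def one_feature_counterfactual(model, x, step=0.1, max_steps=100):
    """For each feature in turn, nudge it until the predicted class flips.
    Returns (feature_index, new_value) of the cheapest flip found, or None."""
    original = model.predict(x.reshape(1, -1))[0]
    best = None
    for j in range(x.size):
        for direction in (+1.0, -1.0):
            x_cf = x.copy()
            for k in range(1, max_steps + 1):
                x_cf[j] = x[j] + direction * step * k
                if model.predict(x_cf.reshape(1, -1))[0] != original:
                    if best is None or k < best[2]:
                        best = (j, x_cf[j], k)
                    break
    return None if best is None else (best[0], best[1])

factual = X[0]
print("factual prediction:", clf.predict(factual.reshape(1, -1))[0])
print("counterfactual (feature, value):", one_feature_counterfactual(clf, factual))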

7. Solution Directions (3 of 3)
• Robust algorithms under adversarial settings and uncertainty quantification
  • Provides uncertainty bounds around results [Work with collaborator Prateek Mittal @ Princeton]
  • Bounds are a function of (i) distance between training and test data; (ii) how far the inference is from the decision boundary; (iii) number of classifiers (see the sketch after this slide)
  • Demonstrated for streaming data analytics
  • Assumptions: (i) single data stream; (ii) no malicious modification of data
• Game-theoretic strategy to determine optimal placement of algorithms
  • Under different levels of adversarial presence
  • Under different levels of bounded rationality
  • Assumptions: no collusion among multiple adversaries
• New emerging topic: approximation of ML algorithms in the presence of adversaries
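
A rough sketch of the ensemble ingredient of the uncertainty bound (item (iii) above), assuming scikit-learn: several classifiers are trained on bootstrap resamples and their disagreement on a test point is used as a crude uncertainty proxy. The distance-between-train-and-test-data and distance-to-decision-boundary terms of the actual bound are omitted here.

# Rough sketch: ensemble disagreement as a crude uncertainty proxy
# (illustrative; not the full bound described in the slide above).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)

def train_ensemble(X, y, n_models=15):
    """Train n_models trees, each on a bootstrap resample of the training data."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))
        models.append(DecisionTreeClassifier(max_depth=5).fit(X[idx], y[idx]))
    return models

def predict_with_uncertainty(models, x):
    """Majority vote plus the fraction of models that disagree with it."""
    votes = np.array([m.predict(x.reshape(1, -1))[0] for m in models])
    majority = np.bincount(votes).argmax()
    disagreement = float(np.mean(votes != majority))
    return majority, disagreement

models = train_ensemble(X, y)
label, unc = predict_with_uncertainty(models, X[0])
print(f"predicted class {label}, disagreement {unc:.2f}")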

8. Technology Challenges, Policy Questions
• Technology to improve machine algorithms
  • Algorithms that take incremental data inputs
  • Algorithms that trade off accuracy, speed, interpretability
  • Algorithms that bound uncertainty in their decisions
• Autonomous decisions with human interaction
  • Decisions should be improved with human input
  • Possibility of correction for all critical decisions
  • Biases counteracted by crowdsourcing of algorithms and overseers

9. Relevant Publications
• Xu, Ran, Subrata Mitra (Adobe Research), Jason Rahman (Facebook), Peter Bai, Bowen Zhou (LinkedIn), Greg Bronevetsky (Google), and Saurabh Bagchi, "Pythia: Improving Datacenter Utilization via Precise Contention Prediction for Multiple Co-located Workloads," In Proceedings of the 19th ACM/IFIP International Middleware Conference (Middleware), pp. 146-160, 2018.
• Mitra, Subrata, Manish Gupta, Sasa Misailovic (U. of Illinois at Urbana-Champaign), and Saurabh Bagchi, "Phase-Aware Optimization in Approximate Computing," In Proceedings of the 2017 IEEE/ACM International Symposium on Code Generation and Optimization (CGO), pp. 185-196, 2017.
• Wood, Paul, Heng Zhang, Muhammad-Bilal Siddiqui, and Saurabh Bagchi, "Dependability in Edge Computing," Accepted to appear in Communications of the ACM (CACM), pp. 1-16, 2019.
• Inouye, David I., Liu Leqi, Joon-Sik Kim, Bryon Aragam, and Pradeep Ravikumar, "Counterfactual Curve Explanations," Under submission to NeurIPS, 2019.
• Inouye, David I. and Pradeep Ravikumar, "Deep Density Destructors," In International Conference on Machine Learning, pp. 2172-2180, 2018.
• Xu, Ran, Jinkyu Koo, Rakesh Kumar, Peter Bai, Subrata Mitra (Adobe Research), Sasa Misailovic (University of Illinois Urbana-Champaign), and Saurabh Bagchi, "VideoChef: Efficient Approximation for Streaming Video Processing Pipelines," In Proceedings of the 2018 USENIX Annual Technical Conference (USENIX ATC '18), pp. 43-56, 2018.
• Mitra, Subrata, Greg Bronevetsky, Suhas Javagal, and Saurabh Bagchi, "Dealing with the Unknown: Resilience to Prediction Errors," In Proceedings of the 24th International Conference on Parallel Architectures and Compilation Techniques (PACT), pp. 331-342, 2015.
• Gutierrez, Christopher, Mohammed Almeshekah, Eugene Spafford, and Saurabh Bagchi, "A Hypergame Analysis for ErsatzPasswords," In Proceedings of the 33rd IFIP TC-11 International Conference on Information Security and Privacy Protection (IFIP SEC), pp. 47-61, 2018.
• Abdallah, Mustafa, Daniel Woods, Parinaz Naghizadeh, Xingchen Wang, Bei Guan, Issa Khalil, Tim Cason, Shreyas Sundaram, and Saurabh Bagchi, "BASCPS: Modeling and Evaluating Impact of Behavioral Decision Making in Securing Cyber-Physical Systems," Under review at the 26th ACM Conference on Computer and Communications Security (CCS), pp. 1-18, 2019.
• Abdallah, Mustafa, Parinaz Naghizadeh, Ashish Hota, Timothy Cason, Saurabh Bagchi, and Shreyas Sundaram, "The Impacts of Behavioral Probability Weighting on Security Investments in Interdependent Systems," Accepted to appear at the American Control Conference (ACC), pp. 1-6, 2019.

10. Presentation available at: Dependable Computing Systems Lab (DCSL) web site, engineering.purdue.edu/dcsl
