
Run-Time Guarantees for Real-Time Systems. Reinhard Wilhelm, Saarbrücken








  1. Run-Time Guarantees for Real-Time Systems. Reinhard Wilhelm, Saarbrücken

  2. Time in Computer Science (Almost) all computer scientists abstract from physical time • (Execution) time is counted as the number of steps of an algorithm/program • Each step takes one time unit • Complexity classes group problems and algorithms whose running times are of the same order of magnitude - "Constants don't matter!" • Typical statement: Quicksort takes O(n log n) steps

  3. Hard Real Time Systems with hard real-time requirements, often in safety-critical applications, are found everywhere: in aircraft, cars, trains, and manufacturing control • Side airbag in a car: reaction within 10 ms • Wing vibrations: sensor period of 5 ms

  4. Hard Real Time • Embedded control: a computer system controls a technical process • Reaction times are dictated by the system under control • The developer must give run-time guarantees • This requires computing safe upper bounds on the execution times of all tasks in the system • Such a bound is often, somewhat incorrectly, called the Worst-Case Execution Time (WCET) • Analogously: Best-Case Execution Time (BCET)

  5. Basic Notions [Figure: a time axis t showing the best case and worst case of the actual execution-time distribution, with a lower bound below the best case and an upper bound above the worst case; the upper bound is the worst-case guarantee]

  6. Industrial Practice • Measurements: computing the maximum over some executions. This does not guarantee an upper bound on all executions • Measurement has acquired a bad reputation and is now called "observed worst-case execution time". Heavily used outside of Old Europe.
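Why the maximum over some executions is not an upper bound can be seen in a toy sketch. The task and its timing model below are invented for illustration; they only mimic a program with a rare worst-case input that testing is unlikely to exercise.

```python
import random

def task_time(x):
    """Hypothetical task whose execution time depends on its input:
    the rare worst-case input (x == 0) takes far longer than the rest."""
    return 1000 if x == 0 else 100 + x % 7

# "Observed worst-case execution time": maximum over some test runs.
random.seed(42)
observed_wcet = max(task_time(random.randrange(1, 10**6)) for _ in range(1000))

# The true worst case over the whole input space includes x == 0,
# which the random test runs never exercised.
true_wcet = max(task_time(x) for x in range(10**4))

print(observed_wcet)  # at most 106: the worst-case input was never sampled
print(true_wcet)      # 1000
```

The measured maximum looks plausible but underestimates the true worst case by almost a factor of ten, which is exactly why it cannot serve as a guarantee.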

  7. Modern Hardware Features • Modern processors increase performance by using caches, pipelines, and branch prediction • These features make WCET computation difficult: execution times of instructions vary widely • Best case - everything goes smoothly: no cache miss, operands ready, needed resources free, branch correctly predicted • Worst case - everything goes wrong: all loads miss the cache, needed resources are occupied, operands are not ready • The span may be several hundred cycles

  8. (Concrete) Instruction Execution [Figure: pipeline stages of a mul instruction - Fetch (I-cache miss?), Issue (unit occupied?), Execute (multicycle?), Retire (pending instructions?) - annotated with example cycle counts showing how widely the execution time can vary]

  9. Timing Accidents and Penalties Timing Accident – cause for an increase of the execution time of an instruction Timing Penalty – the associated increase • Types of timing accidents • Cache misses • Pipeline stalls • Branch mispredictions • Bus collisions • Memory refresh of DRAM • TLB miss

  10. Fighting Murphy's Law in WCET • A naïve but safe guarantee accepts Murphy's Law: any accident that may happen will happen • Example: A. Rosskopf, EADS Ottobrunn, measured the performance of a PPC with all caches switched off (corresponding to the assumption "all memory accesses miss the cache"). Result: a slowdown by a factor of 30!!! • Desirable: a method to exclude timing accidents • The more accidents excluded, the lower the computed WCET bound
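The effect of excluding timing accidents on the bound can be sketched with invented timings (the factor-30 figure on the slide refers to real measurements; the cycle counts below are illustrative only):

```python
# Hypothetical timing parameters, not from the talk's PPC measurements.
HIT_CYCLES, MISS_PENALTY = 1, 30

def wcet_bound(accesses, predicted_hits):
    """Upper bound in cycles for a straight-line sequence of memory
    accesses: every access not proven a hit by cache analysis must be
    charged the full miss penalty (Murphy's Law)."""
    total = 0
    for addr in accesses:
        total += HIT_CYCLES
        if addr not in predicted_hits:
            total += MISS_PENALTY  # assume the accident happens
    return total

block = ["a", "b", "a", "c", "a", "b"]
naive = wcet_bound(block, predicted_hits=set())         # all accesses may miss
refined = wcet_bound(block, predicted_hits={"a", "b"})  # hits proven by analysis

print(naive, refined)  # 186 36
```

Each access the analysis proves to be a hit removes one miss penalty from the bound, which is why excluding more accidents directly lowers the computed WCET.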

  11. Static Program Analysis • Determination of invariants about program execution at compile time • Most of the (interesting) properties are undecidable => approximations • An approximate program analysis is safe, if its results can always be depended on. Results are allowed to be imprecise as long as they are on the safe side • Quality of the results (precision) should be as good as possible

  12. Approximation [Figure: the exact set of true answers, partitioned into "yes" and "no"]

  13. Approximation [Figure: a safe approximation encloses the set of true answers: inside the approximation the answer is only "yes?", outside it is a definite "no!"; how tightly the approximation fits determines the precision]

  14. Safety and Liveness Properties • Safety: "something bad will not happen". Examples: • Evaluation of 1/x will never divide by 0 • Array index not out of bounds • Liveness: "something good will happen". Examples: • The program will react to input • A request will eventually be served

  15. Analogies • Rules-of-Sign Analysis σ: VAR -> {+, -, 0, ⊥, ⊤}. Safety properties derivable from the invariant σ(x) = + : • sqrt(x): no exception "sqrt of negative number" • a/x: no exception "division by 0" • Must-Cache Analysis mc: ADDR -> CS x CL. Derivable safety property: the memory access will always hit the cache
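The rules-of-sign analogy can be made concrete. A minimal sketch of such a sign domain follows; only multiplication is shown, the names and the encoding of the lattice elements are choices made here, not the talk's formalization:

```python
# A minimal rules-of-sign domain {+, -, 0, bot, top} (bot = unreachable,
# top = unknown sign). Sketch only, not a full abstract interpreter.
PLUS, MINUS, ZERO, BOT, TOP = "+", "-", "0", "bot", "top"

def abs_mul(a, b):
    """Abstract multiplication on signs."""
    if BOT in (a, b):
        return BOT
    if TOP in (a, b):
        return TOP
    if ZERO in (a, b):
        return ZERO
    return PLUS if a == b else MINUS

def sqrt_is_safe(sign):
    """sqrt(x) raises no 'sqrt of negative number' exception
    if the invariant sign(x) = + (or 0) holds."""
    return sign in (PLUS, ZERO)

def div_is_safe(sign):
    """a/x raises no division-by-zero if sign(x) = + or -."""
    return sign in (PLUS, MINUS)

print(sqrt_is_safe(PLUS), div_is_safe(PLUS))  # True True
# With unknown sign the analysis must refuse the guarantee to stay safe:
print(sqrt_is_safe(TOP), div_is_safe(TOP))    # False False
# Derived fact: if sign(x) = +, then sign(x*x) = +, so sqrt(x*x) is safe.
print(sqrt_is_safe(abs_mul(PLUS, PLUS)))      # True
```

The imprecision is on the safe side: a TOP answer forgoes the guarantee, it never asserts one that might be wrong, exactly like the must-cache analysis on the slide.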

  16. Natural Modularization • Processor-Behavior Prediction: • uses abstract interpretation • excludes as many timing accidents as possible • determines the WCET for basic blocks (in contexts) • Worst-Case Path Determination: • encodes the control-flow graph as an integer linear program • determines the upper bound and the associated path
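The ILP encoding of path determination can be illustrated on a toy control-flow graph. The block costs, block names, and loop bound below are invented for this sketch, and the tiny "ILP" is maximized by brute-force enumeration of the feasible execution counts rather than by an LP solver:

```python
# IPET sketch: maximize total cost subject to flow constraints.
# Per-block cycle costs (invented; in practice they come from the
# processor-behavior prediction for each basic block).
cost = {"entry": 5, "head": 2, "then": 20, "else": 3, "exit": 4}
LOOP_BOUND = 10  # from loop-bound analysis (invented here)

best = None
for head in range(LOOP_BOUND + 1):    # constraint: x_head <= LOOP_BOUND * x_entry
    for then in range(head + 1):      # constraint: x_then + x_else = x_head
        counts = {"entry": 1, "head": head,
                  "then": then, "else": head - then, "exit": 1}
        total = sum(cost[b] * counts[b] for b in counts)
        if best is None or total > best[0]:
            best = (total, counts)

wcet_bound, worst_counts = best
print(wcet_bound)     # 229: entry + 10 iterations always taking the then-branch + exit
print(worst_counts)
```

The solver's maximizing assignment is the worst-case path: here it executes the expensive then-branch on every iteration, and the objective value is the WCET bound.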

  17. Overall Structure [Figure: tool architecture connecting the components named on the slide: the executable program, CFG Builder (CRL file), Loop Trafo, static analyses - Value Analyzer and Cache/Pipeline Analyzer - for the processor-behavior prediction (AIP and PER files, loop bounds), ILP-Generator, LP-Solver, and Path Analysis for the worst-case path determination, and evaluation/visualization of the WCET]

  18. Analysis Results (Airbus Benchmark)

  19. Interpretation • Airbus' results were obtained with its legacy method: measurement for blocks, tree-based composition, and an added safety margin • ~30% overestimation • aiT's results lay between the real worst-case execution times and Airbus' results

  20. Caches: Fast Memory on Chip • Caches are used, because • Fast main memory is too expensive • The speed gap between CPU and memory is too large and increasing • Caches work well in the average case: • Programs access data locally (many hits) • Programs reuse items (instructions, data) • Access patterns are distributed evenly across the cache

  21. Speed [Figure, after P. Marwedel: CPU speed grows by a factor of 1.5-2 per year (2x every 2 years), DRAM speed by only about 1.07 per year; the speed gap between the processor and main RAM increases]

  22. Caches: How They Work The CPU wants to read/write at memory address a and sends a request for a to the bus. Cases: • Block m containing a is in the cache (hit): the request for a is served in the next cycle • Block m is not in the cache (miss): m is transferred from main memory to the cache, m may replace some block in the cache, and the request for a is served as soon as possible while the transfer still continues • Several replacement strategies - LRU, PLRU, FIFO, ... - determine which line to replace

  23. A-Way Set Associative Cache [Figure: the CPU address is split into address prefix, set number, and byte-in-line offset; the set number selects a cache set, the address prefix is compared against the stored prefixes of that set, and if none is equal the block is fetched from main memory; byte select & align produces the data output]
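The address decomposition on this slide can be sketched for an assumed geometry; the 32-byte line size and 256 sets below are illustrative choices, not values from the slide:

```python
# Splitting a CPU address into address prefix (tag), set number,
# and byte-in-line offset, for an assumed cache geometry.
LINE_SIZE = 32   # bytes per cache line (illustrative)
NUM_SETS  = 256  # number of cache sets (illustrative)

def split_address(addr):
    offset = addr % LINE_SIZE                    # byte in line
    set_no = (addr // LINE_SIZE) % NUM_SETS      # selects the cache set
    tag    = addr // (LINE_SIZE * NUM_SETS)      # compared against stored prefixes
    return tag, set_no, offset

tag, set_no, offset = split_address(0x12345)
print(tag, set_no, offset)  # 9 26 5
```

Because line size and set count are powers of two, the split is just a partition of the address bits, and tag, set number, and offset together reconstruct the address exactly.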

  24. LRU Strategy • Each cache set has its own replacement logic => cache sets are independent: everything is explained in terms of one set • LRU replacement strategy: replace the block that has been Least Recently Used • Modeled by ages • Example, 4-way set associative cache, initial contents [m0, m1, m2, m3]: • access m4 (miss) -> [m4, m0, m1, m2] • access m1 (hit) -> [m1, m4, m0, m2] • access m5 (miss) -> [m5, m1, m4, m0]
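The access sequence above can be simulated directly. A minimal sketch of one concrete LRU set, represented as a list with the youngest block first:

```python
def lru_access(cache, block):
    """One access to a concrete LRU cache set (youngest first).
    Returns ('hit' or 'miss', new set contents)."""
    ways = len(cache)
    if block in cache:
        # hit: the block becomes youngest, the rest keep their order
        return "hit", [block] + [b for b in cache if b != block]
    # miss: the block enters with age 0, the oldest block is evicted
    return "miss", ([block] + cache)[:ways]

# The slide's example: a 4-way set starting as [m0, m1, m2, m3]
cache = ["m0", "m1", "m2", "m3"]
for b in ["m4", "m1", "m5"]:
    kind, cache = lru_access(cache, b)
    print(kind, cache)
# miss ['m4', 'm0', 'm1', 'm2']
# hit  ['m1', 'm4', 'm0', 'm2']
# miss ['m5', 'm1', 'm4', 'm0']
```

The list position is exactly the "age" of the block, which is the notion the abstract cache analyses on the following slides track.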

  25. Cache Analysis How to statically precompute cache contents: • Must analysis: for each program point (and calling context), find out which blocks are in the cache • May analysis: for each program point (and calling context), find out which blocks may be in the cache. The complement says what is not in the cache

  26. Must-Cache and May-Cache Information • Must analysis determines safe information about cache hits: each predicted cache hit reduces the WCET bound • May analysis determines safe information about cache misses: each predicted cache miss increases the BCET bound
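The must-cache transfer and join illustrated on the following slides can be sketched as operations on a list of age-indexed sets. This is a simplified model with a fixed associativity of 4 and blocks named by strings, not the implementation in aiT:

```python
A = 4  # associativity (number of ways) - illustrative choice

def must_update(sets, block):
    """Must-cache transfer for one access; sets[i] holds the blocks
    with upper-bound age i. The accessed block gets age 0; blocks
    younger than its old age age by one; older blocks keep their age."""
    old_age = next((i for i, s in enumerate(sets) if block in s), A)
    new = [set() for _ in range(A)]
    new[0] = {block}
    for i, s in enumerate(sets):
        for b in s:
            if b == block:
                continue
            age = i + 1 if i < old_age else i
            if age < A:          # age A means evicted from the must-cache
                new[age].add(b)
    return new

def must_join(x, y):
    """Join for must-analysis: intersection of the blocks,
    each kept at its maximal age."""
    age_x = {b: i for i, s in enumerate(x) for b in s}
    age_y = {b: i for i, s in enumerate(y) for b in s}
    new = [set() for _ in range(A)]
    for b in age_x.keys() & age_y.keys():
        new[max(age_x[b], age_y[b])].add(b)
    return new

# Transfer example from the slides: access s ages x, keeps t and y
print(must_update([{"x"}, set(), {"s", "t"}, {"y"}], "s"))

# Join example: "intersection + maximal age"
print(must_join([{"a"}, set(), {"c", "f"}, {"d"}],
                [{"c"}, {"e"}, {"a"}, {"d"}]))
```

The first call reproduces the must-transfer slide ({s}, {x}, {t}, {y}), the second the must-join slide ({}, {}, {a, c}, {d}): only blocks present on both paths survive, at the more pessimistic age.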

  27. Example: Fully Associative Cache (2 Elements)

  28. Cache with LRU Replacement: Transfer for must [Figure: on the concrete side (ages "young" to "old"), accessing s turns [z, y, x, t] into [s, z, y, x] and [z, s, x, t] into [s, z, x, t]; on the abstract side, accessing s turns the must-cache {x}, {}, {s, t}, {y} into {s}, {x}, {t}, {y}]

  29. Cache Analysis: Join (must) [Figure: joining the must-caches {a}, {}, {c, f}, {d} and {c}, {e}, {a}, {d} by "intersection + maximal age" yields {}, {}, {a, c}, {d}] Interpretation: memory block a is definitely in the (concrete) cache => always hit

  30. Cache with LRU Replacement: Transfer for may [Figure: the concrete caches behave as on the must slide; on the abstract side, accessing s turns the may-cache {x}, {}, {s, t}, {y} into {s}, {x}, {}, {y, t}]

  31. Cache Analysis: Join (may) Interpretation: if memory block s is not in the abstract cache => s will definitely not be in the (concrete) cache => always miss
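The may-cache counterpart can be sketched the same way: the transfer ages every block at most as old as the accessed one, and the join takes the union of the blocks at their minimal age. Again a simplified model (fixed associativity of 4, blocks named by strings), not the tool's implementation:

```python
A = 4  # associativity (number of ways) - illustrative choice

def may_update(sets, block):
    """May-cache transfer; sets[i] holds the blocks with lower-bound
    age i. The accessed block gets age 0; blocks at most as old as its
    previous age age by one; strictly older blocks keep their age."""
    old_age = next((i for i, s in enumerate(sets) if block in s), A)
    new = [set() for _ in range(A)]
    new[0] = {block}
    for i, s in enumerate(sets):
        for b in s:
            if b == block:
                continue
            age = i + 1 if i <= old_age else i
            if age < A:          # age A means possibly evicted
                new[age].add(b)
    return new

def may_join(x, y):
    """Join for may-analysis: union of the blocks,
    each kept at its minimal age."""
    ages = {}
    for side in (x, y):
        for i, s in enumerate(side):
            for b in s:
                ages[b] = min(ages.get(b, A), i)
    new = [set() for _ in range(A)]
    for b, i in ages.items():
        new[i].add(b)
    return new

# Transfer example from the slides: s -> age 0, x -> 1, t and y -> 3
print(may_update([{"x"}, set(), {"s", "t"}, {"y"}], "s"))
```

The call reproduces the may-transfer slide ({s}, {x}, {}, {y, t}). Any block absent from the resulting may-cache is guaranteed absent from every concrete cache, which yields the always-miss classification above.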

  32. Acknowledgements • Christian Ferdinand, whose thesis started all this • Reinhold Heckmann, Mister Cache • Florian Martin, Mister PAG • Stephan Thesing, Mister Pipeline • Michael Schmidt, Value Analysis • Henrik Theiling, Mister Frontend + Path Analysis • Jörn Schneider, OSEK • Marc Langenbach, trying to automate

  33. Recent Publications • R. Heckmann et al.: The Influence of Processor Architecture on the Design and the Results of WCET Tools, IEEE Proc. on Real-Time Systems, July 2003 • C. Ferdinand et al.: Reliable and Precise WCET Determination of a Real-Life Processor, EMSOFT 2001 • H. Theiling: Extracting Safe and Precise Control Flow from Binaries, RTCSA 2000 • M. Langenbach et al.: Pipeline Analysis for the PowerPC 755, SAS 2002 • St. Thesing et al.: An Abstract Interpretation-based Timing Validation of Hard Real-Time Avionics Software, IPDS 2003 • R. Wilhelm: AI + ILP is good for WCET, MC is not, nor ILP alone, VMCAI 2004 • A. Rhakib et al.: Component-wise Data-cache Behavior Prediction, WCET 2004 • L. Thiele, R. Wilhelm: Design for Timing Predictability, submitted
