
International Space Station Passive Thermal Control System Analysis Top Ten Lessons-Learned

This paper presents the top ten lessons learned from analyzing the International Space Station's passive thermal control system. Key topics include verification, temperature limits, optics, model fidelity vs. computation time, uncertainty, telemetry, and operations. The lessons provide valuable insights for thermal design and analysis in space missions.





Presentation Transcript


  1. TFAWS Paper Session
International Space Station Passive Thermal Control System Analysis: Top Ten Lessons-Learned
Presented by John Iovine, NASA Johnson Space Center, ES3/Thermal Design Branch
Thermal & Fluids Analysis Workshop (TFAWS 2011), August 15-19, 2011, NASA Langley Research Center, Newport News, VA

  2. Top Ten Lessons-Learned
#1) Verification
#2) Temperature Limits
#3) Optics
#4) Fidelity vs. Computation Time
#5) Model All the Physics
#6) Model Nominal and Off-Nominal
#7) Thermal-Structural Analysis
#8) Uncertainty
#9) Telemetry
#10) Operations and Sustaining Engineering
[Image: ISS ~ Assembly Complete]

  3. #1) Verification
• Analysis Plan
  • Project (e.g. GFE) and Program-level consistency
  • Template requirements, including Concept of Operations or Design Reference Mission
  • Verification methods, analysis and test, including test program specifics
  • Specific system (hardware owner) unique requirements
    • Crew interface, thermal-structural, tracking systems, fluids, robotics, etc.
  • Comprehensive analysis plan
    • Team-level
    • Consider a program-level thermal control and analysis plan
• Temperature screening limits
  • Complete critical nodes list with appropriate certification limits
• Need more rigorous model peer review
  • Schedule independent model reviews (not just analysis reviews) in design cycles
  • Ensure models/documentation contain sufficient heritage design knowledge
  • Fully check out models delivered prior to integration
• Model delivery requirements
  • Consistency across prime contractor, partners, GFE, payloads
  • Consistency with Concept of Operations or Design Reference Mission, i.e. configurations
  • Detailed model for design verification closures, sustaining engineering, anomaly resolution
  • Reduced model for integration
  • Avoid proprietary limitations

  4. #2) Temperature Limits
• Correspondence to model critical nodes (a minimal screening sketch follows after this list)
  • Limits consistent with certification should have direct correspondence to model nodes
  • Model documentation
  • Ideally, all model surfaces and components should have limits identified
  • Correspondence to flight sensor locations, including definition of off-sets
• Need improved documentation, updates, and control
  • Certified limits for all operating and non-operating modes, survival, start-up
  • References to vendor specs, tests
  • Interpretation of limits, e.g. box-level vs. components
  • Explanation of margins, qualification, proto-flight, and exceptions
  • Structural aspects
  • Fluids
  • Life aspects
• Exceeding limits
  • Need to understand how hardware was certified/tested
  • Explanation of risk if a limit is exceeded
  • Risk trade, exceptions, waivers, program decision
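The critical-nodes screening described on this slide lends itself to simple automation. Below is a minimal Python sketch, with hypothetical node names and limit values, that flags predicted temperatures falling outside certified operational limits:

```python
# Minimal sketch: screen predicted node temperatures against certified limits.
# Node names and limit values are hypothetical placeholders.

CERTIFIED_LIMITS_C = {          # node: (op_min, op_max), degrees C
    "radiator_panel_101": (-120.0, 65.0),
    "avionics_box_A":     (-40.0, 50.0),
    "fluid_line_QD3":     (0.0, 45.0),   # fluids: avoid freezing
}

def screen_nodes(predictions_c):
    """Return (node, temperature, reason) for every out-of-limit node."""
    violations = []
    for node, temp in predictions_c.items():
        lo, hi = CERTIFIED_LIMITS_C[node]
        if temp < lo:
            violations.append((node, temp, f"below op min {lo}"))
        elif temp > hi:
            violations.append((node, temp, f"above op max {hi}"))
    return violations

print(screen_nodes({"radiator_panel_101": -131.2,
                    "avionics_box_A": 21.0,
                    "fluid_line_QD3": -2.5}))
```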

  5. #3) Optics
• Need robust management of surface treatment optical properties: solar absorptance and emittance
  • Beginning-of-life (BOL) values should be based on measurements, build tolerances, and additional bias as warranted for design verification (e.g. "cold bias")
  • Ideally, mission analysis should be based on measurement of flight hardware prior to launch ("nominal BOL")
  • End-of-life (EOL) values should be based on applicable test data and expected degradation sources (UV, AO, contamination), plus additional bias as warranted for design (e.g. "hot bias")
  • Ideally, degradation vs. time should be defined (see the degradation sketch after this list)
• Analysis plan standards for consistent usage
• Optics should be considered in specifications
  • May need to reconfirm via measurement/test after material and process changes
• Designs/models should also consider internal or covered surfaces
  • Surfaces may be exposed for maintenance or replacement
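To make "degradation vs. time should be defined" concrete, the sketch below assumes an exponential-approach degradation model in which solar absorptance rises from a measured BOL value toward an EOL asymptote. The coefficients are illustrative, not flight values:

```python
import math

# Hypothetical coefficients: alpha rises from a measured BOL value toward an
# assumed EOL asymptote with time constant tau (years). Not flight values.
ALPHA_BOL = 0.09      # measured beginning-of-life solar absorptance
ALPHA_EOL = 0.25      # assumed end-of-life asymptote (UV + AO + contamination)
TAU_YEARS = 4.0       # assumed degradation time constant

def alpha_solar(years_on_orbit):
    """Exponential-approach degradation model for solar absorptance."""
    return ALPHA_EOL - (ALPHA_EOL - ALPHA_BOL) * math.exp(-years_on_orbit / TAU_YEARS)

for t in (0.0, 1.0, 5.0, 10.0):
    print(f"t = {t:4.1f} yr  alpha = {alpha_solar(t):.3f}")
```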

  6. #4) Fidelity vs. Computation Time
• Detailed, design-verification models
  • Maximize the detail and optimize the computation time
  • Fidelity driven by requirements verification may be insufficient for sustaining engineering, e.g. failure response and anomaly resolution
  • Endeavor to model all the physics that impact thermal response
• Reduced, integration-level models
  • Needed to minimize computation time in higher-level models
  • Benchmark to detailed models as necessary (see the benchmarking sketch after this list)
  • Block/shade-only models may be appropriate
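One way to benchmark a reduced model against its detailed parent is to compare each reduced node against a capacitance-weighted average of the detailed nodes it lumps together. A minimal sketch, with a hypothetical node mapping and made-up temperatures:

```python
# Minimal sketch: benchmark a reduced model against its detailed parent.
# The node mapping, temperatures, and tolerance are hypothetical.

# Detailed-model results: node -> (temperature C, thermal capacitance J/K)
detailed = {"p1": (21.0, 500.0), "p2": (24.0, 300.0), "p3": (-10.0, 800.0)}

# Reduced-model results and the detailed nodes each reduced node lumps together
reduced = {"R1": 22.5, "R2": -9.0}
mapping = {"R1": ["p1", "p2"], "R2": ["p3"]}

def benchmark(tolerance_c=3.0):
    for rnode, temp_r in reduced.items():
        nodes = mapping[rnode]
        cap = sum(detailed[n][1] for n in nodes)
        # capacitance-weighted average of the detailed nodes
        temp_d = sum(detailed[n][0] * detailed[n][1] for n in nodes) / cap
        delta = temp_r - temp_d
        flag = "OK" if abs(delta) <= tolerance_c else "EXCEEDS TOLERANCE"
        print(f"{rnode}: reduced {temp_r:+.1f} C vs detailed {temp_d:+.1f} C "
              f"(delta {delta:+.1f} C) {flag}")

benchmark()
```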

  7. #5) Model All the Physics
• Many errors were discovered to be a result of over-simplification or missing detail
  • Some cases were a result of integration and computation limitations
• Consider thermal-structural and fluids analysis needs
• Electrical
  • All dissipation loads, including standby and parasitic loads
  • Sensor locations, with off-sets to the coldest and hottest locations of interest
  • Thermostat locations
  • Heater architecture (a thermostat/heater sketch follows after this list)
• Physical
  • Symmetry/asymmetry
  • Discrete heat transfer paths
  • Thermal cover configuration
• Often the model and/or hardware is not per drawing
  • Need inspections/walk-downs of flight hardware
• Account for external interfaces and fixtures (e.g. GFE-provided)
  • Exposed surfaces
  • Exposed connectors
  • Surfaces under temporary (launch) covers
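Thermostat deadbands and heater placement are examples of physics that is cheap to include but often omitted. A minimal single-node transient sketch with a deadband-controlled heater; all parameters are hypothetical placeholders:

```python
# Minimal sketch: single lumped node with a thermostat-controlled heater.
# All parameters are hypothetical placeholders.

C = 2000.0        # node capacitance, J/K
G = 1.5           # conductance to a cold sink, W/K
T_SINK = -60.0    # sink temperature, C
Q_HTR = 120.0     # heater power when on, W
SET_ON, SET_OFF = 0.0, 5.0   # thermostat deadband, C
DT = 10.0         # time step, s

temp, heater_on = 20.0, False
for step in range(int(6 * 3600 / DT)):        # six hours of simulated time
    if temp <= SET_ON:                        # thermostat closes below deadband
        heater_on = True
    elif temp >= SET_OFF:                     # and opens above it
        heater_on = False
    q_net = (Q_HTR if heater_on else 0.0) - G * (temp - T_SINK)
    temp += q_net / C * DT                    # explicit Euler update
print(f"final temperature: {temp:.1f} C, heater {'on' if heater_on else 'off'}")
```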

  8. #6) Model Nominal and Off-Nominal
• Requirements and analysis should address both (a case-matrix sketch follows after this list)
  • Concept of Operations or Design Reference Mission should help drive requirements
• Flight attitudes, off-nominal capability, "any attitude", drift
• Rotating elements: tracking, bias, parking/feathering, proximity operations
• Powerdowns, redundancy, separation of loads (requirements)
  • Fluid loop shutdown, loss of cooling, stagnant fluid, hydraulically locked lines
• Temporary configurations, e.g. launch
  • Removal and Replacement (R&R)
  • Temporary cover removal
  • Temporary surface exposure
  • Installation tolerances
• Identify potential constraints
• Guide early operations planning
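Coverage of nominal and off-nominal combinations is easier to audit when the case matrix is generated rather than hand-built. A small sketch; the attitude, power-mode, and beta-angle axes are illustrative stand-ins for Design Reference Mission content:

```python
from itertools import product

# Hypothetical case-matrix axes; real axes come from the Concept of
# Operations / Design Reference Mission.
attitudes   = ["+XVV nominal", "-XVV contingency", "free drift"]
power_modes = ["full power", "partial powerdown", "loss of cooling"]
beta_angles = [-75, 0, 75]   # solar beta angle, degrees

cases = [
    {"attitude": a, "power": p, "beta": b}
    for a, p, b in product(attitudes, power_modes, beta_angles)
]
print(f"{len(cases)} analysis cases")   # 3 x 3 x 3 = 27
print(cases[0])
```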

  9. #7) Thermal-Structural Analysis
• Thermal model mesh must consider mapping to the structural model (a mapping sketch follows after this list)
• Structural interfaces
  • ICD, mechanical and thermal loads
  • Temperature differential requirements
• Mechanisms
  • Bearings
  • Installation temperature tolerance, as-launched vs. on-orbit
• Need a better method to screen temperature data to identify worst-case thermal loads and trends
• Capability to generate/approximate as-flown temperature history for fatigue and life questions
• Analysis plan
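Mapping thermal-node temperatures onto a structural grid can be done many ways; the sketch below uses simple inverse-distance weighting, with hypothetical node coordinates and temperatures:

```python
import math

# Minimal sketch: inverse-distance mapping of thermal-node temperatures onto
# structural grid points. Coordinates and temperatures are hypothetical.

thermal_nodes = [            # (x, y, z) in m, temperature in C
    ((0.0, 0.0, 0.0), 35.0),
    ((1.0, 0.0, 0.0), 10.0),
    ((0.0, 1.0, 0.0), -20.0),
]

def map_temperature(point, power=2.0):
    """Inverse-distance-weighted temperature at a structural grid point."""
    num = den = 0.0
    for coords, temp in thermal_nodes:
        d = math.dist(point, coords)
        if d < 1e-9:                 # grid point coincides with a thermal node
            return temp
        w = 1.0 / d**power
        num += w * temp
        den += w
    return num / den

print(f"{map_temperature((0.5, 0.5, 0.0)):.1f} C")
```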

  10. #8) Uncertainty
• Uncertainty approach must be baselined early
  • ISS model validation consisted of bounding assumptions, peer review, some independent assessments, and limited test data
  • ISS attempted to introduce a formal uncertainty approach into sustaining engineering after the Columbia accident, but the cost was deemed prohibitive
• Engineering parameters as a practical approach (a sensitivity sketch follows after this list)
  • Critical parameters and interfaces
  • Ideally, validate critical heat transfer paths by test: engineering, qualification, acceptance/flight unit tests, component-level tests
  • Bounding assumptions
  • Parametric sensitivity
  • Nominal vs. worst-case prediction
  • Natural environment (OLR/albedo) specific application
  • Model usage limitations
• On-orbit model validation
  • Large model errors can be discovered
  • Can be limited by lack of design-environment exposure
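The engineering-parameters approach can be illustrated with a one-at-a-time sensitivity sweep on a simple steady-state radiator balance, eps*sigma*T^4 = alpha*q_env + q_int. All parameter values below are illustrative, not ISS values:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2-K^4

def radiator_temp_k(alpha, eps, q_env, q_int):
    """Steady-state flat-radiator temperature: eps*sigma*T^4 = alpha*q_env + q_int."""
    return ((alpha * q_env + q_int) / (eps * SIGMA)) ** 0.25

# Nominal parameters (hypothetical) and one-at-a-time worst-case perturbations
nominal = {"alpha": 0.14, "eps": 0.85, "q_env": 300.0, "q_int": 150.0}
perturb = {"alpha": 0.25, "eps": 0.78, "q_env": 450.0, "q_int": 200.0}

t_nom = radiator_temp_k(**nominal)
print(f"nominal: {t_nom - 273.15:6.1f} C")
for name, value in perturb.items():
    case = dict(nominal, **{name: value})   # perturb one parameter at a time
    t = radiator_temp_k(**case)
    print(f"{name:6s} -> {value:7.2f}: {t - 273.15:6.1f} C (delta {t - t_nom:+.1f} K)")
```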

  11. #9) Telemetry
• More is better
  • Consider both flight operation and model validation needs
  • Performance trending, including external surfaces (degradation insight)
• Sensor error management
  • Errors/off-sets to be bookkept in operational limits
  • Error vs. off-set: an off-set is not an error; it is a known delta between the sensor location and the area of interest
  • Off-sets should be derived from test/analysis
  • Consider testing to confirm/reduce errors
  • Need to reconcile large errors, which may be inconsistent with certification margin in design cases
• Avoid error "double-booking" (a limit-derivation sketch follows after this list)
  • Occurs when the operational limit and the analysis both account for sensor error
  • If the analysis already includes the error, an operational constraint that also subtracts it would "double-book" the error
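A minimal sketch of the single-booking rule: the known sensor off-set and the sensor error are each applied exactly once when deriving an operational limit from a certified limit. The function name and values are hypothetical:

```python
# Minimal sketch: derive an operational (telemetry) limit from a certified
# hardware limit, applying the sensor off-set and sensor error once each.
# All values are hypothetical.

def operational_upper_limit(cert_limit_c, offset_c, sensor_error_c,
                            error_in_analysis=False):
    """
    cert_limit_c      certified hardware upper limit at the area of interest
    offset_c          known delta (sensor reading minus hardware hot spot),
                      derived from test/analysis; negative if sensor reads cooler
    sensor_error_c    sensor measurement uncertainty (absolute)
    error_in_analysis if the verification analysis already carried the sensor
                      error, do NOT book it again in the operational limit
    """
    limit = cert_limit_c + offset_c
    if not error_in_analysis:          # avoid double-booking the error
        limit -= sensor_error_c
    return limit

print(operational_upper_limit(50.0, offset_c=-3.0, sensor_error_c=2.0))   # 45.0
print(operational_upper_limit(50.0, offset_c=-3.0, sensor_error_c=2.0,
                              error_in_analysis=True))                    # 47.0
```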

  12. #10) Operations & Sustaining Engineering
• Mission Analysis
  • Cover the launch window dates at a minimum
  • Need credible constraints
  • Consider the nominal timeline
  • Contingency capability for key events: powerdowns, hand-offs, parking, pre-install
  • Consider time-of-year solar heating as warranted
  • Launch date chits for updates/refinements
• Event-specific response
  • Chits: short turnaround time, configuration-specific, date-specific
  • Anomaly resolution and failure response
  • Model suitability
  • Near-term and long-term assessments
• Telemetry
  • Insight
  • Redundancy
  • Performance trending (a trending sketch follows after this list)
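Performance trending for sustaining engineering can start as simply as a least-squares trend on a telemetry channel. A minimal sketch with made-up data:

```python
# Minimal sketch: least-squares trend of a telemetry channel vs. time,
# to flag gradual degradation. Data values are made up.

days  = [0, 30, 60, 90, 120, 150, 180]
temps = [21.0, 21.4, 21.9, 22.1, 22.8, 23.0, 23.6]   # daily-mean sensor temps, C

n = len(days)
mean_x = sum(days) / n
mean_y = sum(temps) / n
# ordinary least-squares slope, C per day
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, temps))
         / sum((x - mean_x) ** 2 for x in days))
print(f"trend: {slope * 365:.2f} C/year")   # compare against expected degradation
```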

  13. It’s been 16 years since the first Shuttle/Mir docking, and nearly 13 years since the first ISS assembly flight 2A, but with ISS extension to at least 2020, this may be just the halfway point, so more lessons are coming…
[Image: Shuttle Docked to ISS, Assembly Complete]
