Software Benchmarking Results
V. Husson

Benchmark Report Card (Orbit Comparisons)
Benchmark Report Card (Residual and Correction Comparisons)
Benchmark Report Card (SINEX File Comparisons)
Orbit Definitions
• Orbit A - Nominal Model (Initial orbit, NOTHING adjusted during the run)
• Orbit B - Fixed EOP and Station Coordinates (Iterated orbit, ONLY orbit adjustment!)
• Orbit C - Final Orbit (ALL adjusted: Orbit, Station Positions, Biases, EOP)
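As an illustration only, a minimal Python sketch of how the three benchmark runs differ in which parameter groups are estimated; the names and structure are hypothetical, not any AC's actual software configuration:

```python
# Hypothetical run configurations for the three benchmark orbits.
# Keys and group names are illustrative only.
ORBIT_RUNS = {
    "A": {"adjust": []},  # nominal model: nothing adjusted
    "B": {"adjust": ["orbit"]},  # EOP and station coordinates held fixed
    "C": {"adjust": ["orbit", "stations", "biases", "eop"]},  # final: all adjusted
}

def parameters_to_estimate(run_id):
    """Return the parameter groups estimated in a given benchmark run."""
    return ORBIT_RUNS[run_id]["adjust"]
```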
Radial Comparisons (Orbits A and B)
[Figure: radial orbit differences relative to CSR for CRL-CSR, IAAK-CSR, JCET-CSR, NERC-CSR, AUSLIG-CSR, and NASDA-CSR]
GEODYN Residual Analysis (Orbits A, B, and C)
[Figure: residual comparisons JCET-ASI, GEOS-ASI, and JCET-GEOS]
NASDA Residuals (Orbit B)
Large residuals on the first 3 normal points on Nov 1 (a QC sketch for flagging such points follows).
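A minimal sketch of one way to flag anomalous normal points like these, assuming a simple n-sigma screen over the residuals of a pass; the function and thresholds are illustrative, not the benchmark's actual QC procedure:

```python
import statistics

def flag_outliers(residuals, n_sigma=3.0):
    """Flag normal-point residuals more than n_sigma from the pass mean.

    residuals: list of (epoch, residual_m) pairs.
    Returns the flagged (epoch, residual_m) pairs.
    """
    values = [r for _, r in residuals]
    mean = statistics.fmean(values)
    sigma = statistics.stdev(values)
    return [(t, r) for t, r in residuals if abs(r - mean) > n_sigma * sigma]
```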
Center of Mass Corrections
• Everyone is using 0.251 meters for LAGEOS
• JCET CoM corrections in their V4 .cor files are in error (the files state a CoM of 0.252 m vs. 0.251 m, but 0.251 m was actually used)
• Software changes may be necessary to accommodate system-dependent LAGEOS CoM corrections (see the lookup sketch below)
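A minimal sketch of what a system-dependent CoM lookup could look like. The 0.251 m nominal value is from the slide; the per-system table is an empty placeholder, since real values would have to come from an agreed published table:

```python
# Nominal value from the slide; per-system entries are placeholders only.
DEFAULT_LAGEOS_COM_M = 0.251

SYSTEM_COM_M = {
    # "station_id": com_in_meters,  # to be filled from an agreed table
}

def lageos_com_correction(station_id):
    """Return the LAGEOS CoM correction in meters for a station,
    falling back to the nominal 0.251 m when no system-dependent
    value is available."""
    return SYSTEM_COM_M.get(station_id, DEFAULT_LAGEOS_COM_M)
```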
Lessons learned
• Need to specify minimum resolution of parameters to be compared
• Need clearer definition of model standards
Future Software Modifications that may require benchmark testing
• Station-dependent CoM corrections
• Use Bias File for a priori Biases
• Multi-color data capability
• Weight data based on #obs/bin (a weighting sketch follows this list)
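One plausible reading of "weight data based on #obs/bin" is inverse-variance weighting with the normal-point sigma scaled by the square root of the number of raw observations in the bin. The sketch below assumes that convention; the actual scheme would be for the group to agree on:

```python
import math

def normal_point_weight(sigma_single_shot_m, n_obs):
    """Weight a normal point by inverse variance, taking its sigma as
    the single-shot sigma divided by sqrt(n_obs). This convention is
    an assumption, not an adopted standard."""
    if n_obs < 1:
        raise ValueError("a normal point needs at least one observation")
    sigma_np = sigma_single_shot_m / math.sqrt(n_obs)
    return 1.0 / sigma_np ** 2
```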
Recommendations
• QC your own files before you submit your solution
• Report all range corrections in the residual file to at least 0.000001 meter (i.e., 0.001 millimeters); a formatting sketch follows this list
• Verify whether any problems found in the benchmark will impact the corresponding POS/EOP solution(s)
• Put benchmarking presentations on-line ASAP
• Distribute findings to ACs not in attendance ASAP
• In the POS/EOP pilot project, submit at least 1 solution with the .orb and .res files to ensure problems identified in the benchmark do not "sneak" back in
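A minimal sketch of writing range corrections at the recommended micrometer resolution. The six decimal places follow the recommendation above; the field width is an assumption, not a requirement of any residual-file format:

```python
def format_range_correction(correction_m):
    """Format a range correction with 1e-6 m (0.001 mm) resolution.
    The field width of 14 is an assumption, not a format requirement."""
    return f"{correction_m:14.6f}"

print(format_range_correction(0.251))  # '      0.251000'
```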
What's next
• What analysis should be performed that has not yet been performed?
• Establish pass/fail criteria for the report card
• Test for time-of-flight and epoch rounding/truncation issues (see the sketch after this list)
• Do we need to modify our modeling requirements?
• Should we test and isolate any particular types of models (e.g. range bias estimation, along-track acceleration)?
• Should we expand the dataset to include LAGEOS-2 and/or Etalon, other satellites?
• SP3 format for Orbits (are we ready?)
• Separate Orbits and Software Benchmarking?
• Document and distribute results
• List action items
• Anything else?
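As a sketch of how a rounding/truncation test might work, assuming values stored at a fixed 0.1-microsecond resolution (the resolution and probe value here are illustrative): quantize the same time of flight both ways and see which one matches a package's output.

```python
def quantize(t_seconds, resolution_s=1e-7, mode="round"):
    """Quantize a time value to a fixed resolution, either rounding
    or truncating. Comparing both against a reference value can
    reveal which behavior a given package uses."""
    ticks = t_seconds / resolution_s
    n = round(ticks) if mode == "round" else int(ticks)
    return n * resolution_s

tof = 0.04700007  # illustrative time of flight; falls between two 0.1-us grid points
print(quantize(tof, mode="round") != quantize(tof, mode="truncate"))  # True
```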