
S-44 and USACE Standards

Presentation Transcript


  1. S-44 and USACE Standards – Depth Stuff Only

  2. USACE – Single Beam • QA CROSS LINE PERFORMANCE TEST: Note that the 1 Jan 02 version of EM 1110-2-1003 calls for the Resultant Elevation/Depth Accuracy. There is no way to compute this unless you have a lock chamber. We can compute the Resultant Repeatability, which is what most surveyors wind up using.

  3. S-44 – Single Beam & Multibeam • Formulae in meters:
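The formulae themselves are not reproduced in this transcript. For reference, IHO S-44 (5th edition) gives the maximum allowable TVU at depth $d$ (in meters) as

$$\mathrm{TVU}_{max}(d) = \sqrt{a^{2} + (b \cdot d)^{2}}$$

with the published coefficients: • Special Order: a = 0.25 m, b = 0.0075 • Order 1a/1b: a = 0.5 m, b = 0.013 • Order 2: a = 1.0 m, b = 0.023.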

  4. Some terms: • Accuracy: How close is your measurement to the true value? • Repeatability: How likely are you to measure the same value again if you go out and repeat the measurement? • Uncertainty: The range about a depth measurement that should contain the true value at a certain confidence level (usually 95%). • Confidence Level: The probability that the true value lies within a specified uncertainty of the measured value. • Measured Depth = 25.0’ • Uncertainty at a 95% Confidence Level = 1.0’ • We have a 95% chance that the true sounding is between 24.0’ and 26.0’. • The smaller the uncertainty, the more ‘confident’ you are that your measured value reflects the true value.

  5. Uncertainty: • How can you compute uncertainty? • Uncertainty can be based either on the limitations of the measuring instruments (think TPU) or on statistical fluctuations in the quantity being measured. • Limitations of the Measuring Instruments: • By determining the measurement errors in each sensor and factoring in how those errors propagate through the system, you can compute uncertainty values for your position and depth measurements. • TVU = Total Vertical Uncertainty • THU = Total Horizontal Uncertainty • TPU = Total Propagated Uncertainty • Statistical Fluctuations in the Quantity Being Measured: • Compare your survey Z-values to: • A known depth (Lock Chamber or Bar Check) • A surveyed test area (Performance Test) • A previously surveyed data set (Cross Check Statistics)
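As a minimal sketch of the first approach, assume the component depth errors are independent 1-sigma values that simply root-sum-square; the component names and magnitudes below are hypothetical, not HYPACK's actual TPU model:

```python
import math

def propagated_tvu_95(component_sigmas_m):
    """Root-sum-square of independent 1-sigma depth error components,
    scaled by 1.96 to a 95% confidence level (illustrative only)."""
    variance = sum(sigma ** 2 for sigma in component_sigmas_m.values())
    return 1.96 * math.sqrt(variance)

# Hypothetical 1-sigma error budget, in meters:
budget = {
    "sounder":     0.05,   # echosounder / bottom detection
    "tide":        0.05,   # water level measurement
    "draft_squat": 0.03,   # draft, squat and loading
    "heave":       0.05,   # residual heave
    "sound_speed": 0.04,   # sound velocity error
}

print(f"Estimated TVU at 95%: {propagated_tvu_95(budget):.2f} m")
```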

  6. S-44 vs USACE: Some Numbers

  7. Tests to Check Compliance

  8. Lock Chamber – Single Beam • Make a note of the depth in the lock chamber (or the depth of the bar). • SINGLE BEAM EDITOR: • Load your data file. • Edit spikes as you normally would. • Click ‘File – Offline Statistics’. • HYPACK 2012 will show the Average Z-value and the Std. Dev. for the Z-value.
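As a rough illustration of what those statistics amount to, here is a sketch that compares edited Z-values against the known lock-chamber (or bar) depth; the function name and sample soundings are hypothetical, and the computation is not taken from HYPACK:

```python
import statistics

def lock_chamber_check(z_values_ft, known_depth_ft):
    """Compare edited soundings against a known depth (lock chamber or bar)."""
    mean_z = statistics.mean(z_values_ft)
    std_z = statistics.stdev(z_values_ft)
    bias = mean_z - known_depth_ft   # accuracy at this one depth
    return mean_z, std_z, bias

# Hypothetical edited Z-values over a 25.0' lock chamber floor:
soundings = [25.1, 24.9, 25.0, 25.2, 24.8, 25.0, 25.1]
mean_z, std_z, bias = lock_chamber_check(soundings, 25.0)
print(f"Average Z = {mean_z:.2f} ft, Std. Dev. = {std_z:.2f} ft, Bias = {bias:+.2f} ft")
```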

  9. Lock Chamber – Single Beam • Comments: • You have tested your accuracy at a single depth. • The acoustic properties over the concrete bottom of the lock chamber will be different from those over a hard or soft bottom. • You have demonstrated how accurately you can determine the bottom of the lock chamber with your acoustic system. • I could argue you have not demonstrated the accuracy of your system over different types of bottom conditions.

  10. Lock Chamber – Multibeam: Bar Check Routine • Lower a bar to a fixed depth, or use a lock chamber. Then run MBMAX. • Setup • +/- Depth Gate: Soundings outside bar depth +/- the gate are ignored. • +/- Angle Limit: Soundings with beam angle outside limits are ignored. • Running The Test • Run Bar Check from the Tools menu. • Click “Reset Barcheck.txt” to clear the report. • Lower the bar and enter Bar Depth. • When Measured Depth stabilizes, click “Save Depth”. • Repeat for each bar depth. • When done, click “Barcheck.txt” to view or print the report. • Bar Check averages depths for three seconds, then saves and graphs the result.
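The depth-gate and angle-limit logic can be sketched as follows; this illustrates the idea only, not HYPACK's Bar Check implementation, and the function name and ping data are hypothetical:

```python
def bar_check_depth(soundings, bar_depth, depth_gate, angle_limit):
    """Average the soundings that survive the depth gate and angle limit.
    Each sounding is (beam_angle_deg, depth); in the real tool this average
    runs over roughly three seconds of pings."""
    kept = [d for angle, d in soundings
            if abs(angle) <= angle_limit and abs(d - bar_depth) <= depth_gate]
    if not kept:
        return None
    return sum(kept) / len(kept)

# Hypothetical ping data: (beam angle in degrees, measured depth in feet)
pings = [(-60, 9.6), (-30, 10.1), (0, 10.0), (30, 9.9), (60, 10.4), (70, 12.5)]
print(bar_check_depth(pings, bar_depth=10.0, depth_gate=0.5, angle_limit=45))
```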

  11. BarCheck.TXT Sample

  12. Lock Chamber – Multibeam: Bar Check • Comments: • If you use a bar check, it’s damn hard to hit the bar with enough beams at a typical survey depth. • Lower the bar to 10’, we get hits on beams +/- 60°, but we’re going to be surveying at 30’. • Lower the bar to 30’, we get occasional hits on beams +/- 20°. • In a lock chamber: • You may have to turn 90 degrees to keep the outside beams from hitting the walls of the chamber. Now you are assuming the lock chamber is level in that direction… • For both: • The acoustic conditions aren’t the same as the ‘real world’.

  13. CROSS CHECK STATISTICS – SINGLE BEAM • Edit your data in SBMAX. • Run the CROSS CHECK STATISTICS program (Utilities Menu). • Program Computes: • Arithmetic Mean (Average Difference) • Difference Mean (Average of Abs. Value of Differences) • Std. Deviation • Minimum Diff. • Maximum Diff.

  14. CROSS CHECK STATISTICS – Single Beam • Arithmetic Mean = Average of Difference Values. • Difference Mean = Average of the Absolute Values of Difference Values. • 95% Confidence = 1.96 * Std. Dev. (aw heck, just multiply by 2) • Arithmetic Mean is close to 0’ and Difference Mean is close to 0.1’: You are one tough surveyor! • Arithmetic Mean is close to 0 and Difference Mean is close to 2.0’: You probably need a heave-pitch-roll sensor. • Arithmetic Mean is close to 0.5’ and Difference Mean is close to 2.0’: Something is wrong.
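For illustration, a small sketch of the statistics the program reports, assuming you already have a list of crossing differences (main line minus cross line) in feet; the function name and sample values are hypothetical:

```python
import statistics

def cross_check_stats(differences_ft):
    """Statistics over crossing differences, mirroring the slide's list."""
    arithmetic_mean = statistics.mean(differences_ft)
    difference_mean = statistics.mean(abs(d) for d in differences_ft)
    std_dev = statistics.stdev(differences_ft)
    return {
        "Arithmetic Mean": arithmetic_mean,
        "Difference Mean": difference_mean,
        "Std. Deviation": std_dev,
        "Minimum Diff.": min(differences_ft),
        "Maximum Diff.": max(differences_ft),
        "95% Confidence": 1.96 * std_dev,
    }

# Hypothetical crossing differences in feet:
diffs = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, -0.3, 0.1]
for name, value in cross_check_stats(diffs).items():
    print(f"{name}: {value:+.2f} ft")
```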

  15. CROSS CHECK STATISTICS • Comments: • You should always run a couple of cross check lines across your survey lines so you can run CROSS CHECK STATISTICS. • This is not a measure of accuracy. It is a measure of repeatability: how confident are you that you can go out and get the same values time and time again? • You may find different results due to different weather conditions: • Flat as a pancake… • Light chop… • We probably shouldn’t be surveying today… • Can it show what % of my intersections met each standard? • Not yet, but we’re going to implement it in the next month or two. • Stay away from side slopes.

  16. Total Propagated Uncertainty • Computes the ‘uncertainty’ for each measurement of the multibeam and single beam system. • Use as a planning tool or in real time. • Display the uncertainty versus survey requirements: • IHO Orders 1A, 1B & Special • USACE Hard & Soft • During data collection, certain estimates are replaced by actual values
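A minimal sketch of the "uncertainty versus survey requirements" comparison, using the S-44 depth-dependent allowance from earlier; the USACE hard/soft limits are omitted because their exact numbers are not reproduced in this transcript, and the function names and example TVU value are illustrative:

```python
import math

# IHO S-44 (5th ed.) TVU coefficients (a, b).
IHO_TVU = {
    "Special Order": (0.25, 0.0075),
    "Order 1a/1b": (0.5, 0.013),
    "Order 2": (1.0, 0.023),
}

def allowable_tvu(order, depth_m):
    """Maximum allowable TVU at the given depth for an IHO order."""
    a, b = IHO_TVU[order]
    return math.sqrt(a ** 2 + (b * depth_m) ** 2)

def meets(order, computed_tvu_m, depth_m):
    """Does a computed TVU satisfy the order's allowance at this depth?"""
    return computed_tvu_m <= allowable_tvu(order, depth_m)

print(meets("Order 1a/1b", 0.30, depth_m=10.0))   # True: allowance is about 0.52 m
```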

  17. Sample Graph • 200 kHz Fansweep • Overall system (as defined only in this example) meets IHO Order 1 Positioning requirements out to 55° from nadir. • Depth Uncertainty = TVU • Position Uncertainty = THU

  18. General Parameters • Basic information concerning the multibeam system. • Frequency • Number of Beams • Pulse Length • Along/Across Track Beamwidth • Maximum Ping Rate • Estimated Depth of Bottom • Important when planning a survey. • Replaced by actual depth in real time. • Survey Requirement Selection: • IHO Order 1 and Special Order • USACE Hard and Soft Bottom

  19. Environment Tab • Contains items that describe the environment where the data will be collected. • Estimated Speed of Sound • Peak-to-Peak Swell • Maximum Depth at which the SV Measurement is Made • Other Stuff

  20. Sensor Info Tab • Contains: • Offset information for each sensor (read from HYSWEEP HARDWARE, if possible) • Info on how accurately you measured each sensor position. • Estimated accuracy of each sensor.

  21. TPU Comments • There are a lot of parameters in the TPU EDITOR. Most of you have no clue what to enter for 50% of them. • Surveyors just monkey with the parameters until they can justify using all their outside beams. • Some of the parameters cause BIG changes to the computed TVU and THU: • Pulse Length • Water Level Uncertainty • Spatial Tide Prediction • Draft, Squat and Loading Uncertainties • Fixed Heave Uncertainty • GPS RMS Error Estimate • IMHO: TPU is just one tool. I prefer the Performance Test, which uses real-world measurements to determine the repeatability of your system by beam angle.

  22. Performance Testing – Creating a Test Area • To Make a Reference Surface: • Pick a flat area. Small changes in position will not affect your overall repeatability. • Set up a grid pattern to survey with your multibeam vessel: • 5 * Depth x 5 * Depth for systems up to 65 degrees. • 7 * Depth x 7 * Depth for 65+ degree systems. • Create an MTX file with 1’ (30 cm) cells around the test area. • Do a velocity cast immediately before the survey, then collect your data set. • In MBMAX: • Process it, using only beams out to 45 deg. • Going into Stage 3, use the MTX file you created. • Do a great job of editing. • Save out an XYZ file of average values for each cell where you have 3 or more samples.
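The "average value per cell" step can be sketched as below, assuming plain (x, y, z) soundings and square cells; this mimics the idea of the MTX gridding, not the actual MBMAX implementation, and the function name is hypothetical:

```python
from collections import defaultdict

def reference_surface(soundings, cell_size=1.0, min_samples=3):
    """Average soundings into square cells; keep only cells with enough hits.
    soundings: iterable of (x, y, z) tuples."""
    cells = defaultdict(list)
    for x, y, z in soundings:
        key = (int(x // cell_size), int(y // cell_size))
        cells[key].append(z)
    return {key: sum(zs) / len(zs)
            for key, zs in cells.items() if len(zs) >= min_samples}

# Hypothetical usage with (x, y, z) soundings in feet:
surf = reference_surface([(10.2, 5.1, 30.0), (10.7, 5.3, 30.2),
                          (10.4, 5.8, 29.8), (12.0, 6.0, 31.0)])
print(surf)   # only the cell with 3+ samples survives: {(10, 5): 30.0}
```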

  23. Performance Testing – Multibeam • Run two lines through your test area. • Edit the data in MBMAX, using all of the beam data. • The better your editing job, the better your results. • However, you just can’t delete all the outside beam data, as you won’t have enough samples to compute statistics on them. • When finished editing in Phase 3, run the Beam Angle Test.

  24. Beam Angle Test Graph • The ‘X’ markers show the bias (absolute value) between the Reference Surface and your check lines. • The ‘O’ markers show the 95% confidence for each beam. • Typically pretty flat out to 45°… • …starts sloping up between 45° and 60°… • …starts zooming up beyond 60°.

  25. Beam Angle Test: Details Tab • The Details Tab shows the result at a specific angle. • Histogram shows the distribution. • Max Outlier = Max Difference. • Mean Difference shows the bias between the reference and the check lines. • 95% Confidence = 2 * Standard Deviation.
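Putting the pieces together, here is a sketch of the per-beam-angle statistics (bias, 95% confidence, max outlier) computed against a reference surface like the one built earlier; the data layout and function name are assumptions, not the actual Beam Angle Test code:

```python
import statistics
from collections import defaultdict

def beam_angle_test(check_soundings, reference, cell_size=1.0):
    """Group reference-minus-check differences by whole-degree beam angle.
    check_soundings: iterable of (beam_angle_deg, x, y, z).
    reference: dict of cell -> average z (see earlier sketch)."""
    by_angle = defaultdict(list)
    for angle, x, y, z in check_soundings:
        key = (int(x // cell_size), int(y // cell_size))
        if key in reference:
            by_angle[round(angle)].append(reference[key] - z)
    results = {}
    for angle, diffs in sorted(by_angle.items()):
        if len(diffs) < 3:          # not enough samples for statistics
            continue
        results[angle] = {
            "mean_difference": statistics.mean(diffs),      # bias
            "confidence_95": 2 * statistics.stdev(diffs),   # per the slide
            "max_outlier": max(diffs, key=abs),
        }
    return results
```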

  26. BeamAngle.TXT Report

  27. Sample Beam Angle Test Results Which system do you have more confidence in?

  28. Performance Test – Single Beam • Survey two single beam lines over the Performance Test area you created with your multibeam system. • Edit the data in SBMAX and save the data to Edited ALL Format. • Take the Edited ALL files into EXPORT and save them as a single XYZ file. • Load the XYZ file into MBMAX. It will take you directly to Stage 3. • Use the same MTX file you used when you created the Reference Surface. • Select the Single Beam Test from the menu.

  29. Performance Test – Single Beam • Running the Test • Select Single Beam Test from the Tools menu in MBMAX. Open the Reference XYZ file. • Results • Once again, the difference statistics are reference minus check. • Histogram shows the distribution and bias. • Max Outlier = Max Difference. • Mean Difference = Bias. This is the main point of the test! • Standard Deviation (Sigma): Spread of the difference distribution. • 95% Confidence = 2 * Sigma. 95% of the differences fall within 2 * Sigma.

  30. Performance Test • Comments: • Take a velocity cast immediately before you survey. • Run the test at low tide. • Chances are the velocity profile will remain constant during the test. • Your inability to accurately measure the tide will be factored out. • Run the test in the deepest part of the channel. • Stay away from vertical changes in your test area. • Sometimes, you get what you pay for…

  31. The End
