Adding detection codes to CSEP: An InSAR-entist's tale
Otherwise known as: What the heck do I do with this??
[Figure: GLRS (Salton Trough)]
Adding codes to CSEP
• Goals:
  • Have several operational detectors running by the end of SCEC III (AGU 2011 workshop)
• Steps:
  • Lohman:
    • Provide a "simplistic" version of a detector that has the necessary parts, inputs, and outputs
    • Make codes available to the community and identify the key hurdles faced with our approach
  • Liukis:
    • Worked with Lohman on ensuring that the appropriate files and types of tests are available
  • Community:
    • Do we have any means of verification yet? Wait and see?
Basic steps
• Get your code working with the provided input data (GPS, CMT solutions, etc.)
  • GPS is now in PBO format, as requested
  • Rowena's scripts can download/sort it into the same format used at the testing center
• Identify transients
  • Time-delayed, retrospective detections are fine!
  • Detection format is the same as before
    • Location, timespan, and spatial extent. You can interpret these however you want, but the file must be in the correct format (CSV)
• Rewrite your code so that the key paths and dates sit in a single input file (sketch below)
• Send to CSEP
  • They will install the code, change the input file to reflect their data structure, and test it.
  • All programming/scripting languages discussed so far are fine with them.
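As a concrete illustration of the single input file and the CSV results file, a minimal MATLAB sketch. The parameter names, file layout, and CSV column names are assumptions for illustration only; the slide specifies just that paths/dates live in one input file and that detections (location, timespan, spatial extent) go into a CSV.

% Minimal sketch (assumed layout): params.txt holds "key = value" pairs, e.g.
%   data_dir   = /path/to/pbo/data
%   end_date   = 2010-01-15
%   out_file   = detections.csv
fid = fopen('params.txt');
C = textscan(fid, '%s = %s');
fclose(fid);
params = containers.Map(C{1}, C{2});

% One hypothetical detection: location, timespan, and spatial extent.
% The column names below are placeholders, not the CSEP specification.
det = [-115.52, 33.27, datenum('2010-01-03'), datenum('2010-01-12'), 25.0];

fid = fopen(params('out_file'), 'w');
fprintf(fid, 'lon,lat,start_datenum,end_datenum,radius_km\n');
fprintf(fid, '%.4f,%.4f,%.2f,%.2f,%.1f\n', det);
fclose(fid);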
Unexpected (by me) issues
• Remove pop-up windows (figures, status bars)
  • Remember to reset the flag before sending codes to Masha
• Repeated testing
  • The center will repeatedly run your code on an earlier data set and expects it to produce the same results each time.
  • We have some say in what "same" means.
  • Issues with the Matlab "now" command (sketch below)
  • Potential problems with codes whose supporting data are updated each time; we might have to provide a copy of that data that is used for each retest.
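One way to sidestep the "now" problem, sketched below: take the evaluation date from the single input file rather than from the wall clock, so a rerun on an earlier data set reproduces the earlier results. The parameter name end_date is an assumption carried over from the sketch above.

% Fragile: the result changes every time the code is rerun on the same data set
% t_eval = now;

% Reproducible: the testing center controls the evaluation date via the input file
t_eval = datenum(params('end_date'), 'yyyy-mm-dd');

% Define the "recent" testing window relative to t_eval, not to now
recent_window = [t_eval - 14, t_eval];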
Example algorithm and utility scripts
• Will be made available at the end of the SCEC meeting: http://collaborate.scec.org/transient/Main_Page
• General use:
  • get_data_UNAVCO.pl
    • Downloads and untars the PBO SNARF solution from UNAVCO
    • CSEP does the same thing each day, and only reruns the algorithms if there is new data.
  • read_data_pbo.m
    • Extracts displacement time series, precision, etc.
  • Okada, utm2ll, etc. scripts
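For orientation, a sketch of the kind of output a reader like read_data_pbo.m produces. The file name and column layout below are assumptions for illustration; the real PBO/SNARF files have their own header and column conventions.

fid = fopen('GLRS.snarf.series');                 % hypothetical file name
C = textscan(fid, '%f %f %f %f %f %f %f', 'HeaderLines', 1);
fclose(fid);

t      = C{1};              % observation epochs (assumed datenum/decimal days)
dNEU   = [C{2} C{3} C{4}];  % N, E, U displacements (m)
sigNEU = [C{5} C{6} C{7}];  % 1-sigma precision for each component (m)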
My algorithm: get inputs/outputs right
• Read in data
• Identify possible coseismic offsets
• Cull sites
• Fit rates/EQ for the full series
• Identify sites that diverge from the long-term trend in "recent" time
• Produce results file
Read in data
• Example: GPS time series for site GLRS
Identify possible coseismic offsets
• Assumes a 100-bar stress drop and the CMT solution
• Flag a site if the forward-modeled offset is > 1/5 of the reported errors
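A sketch of the flagging rule. Only the 100-bar stress drop / CMT assumption and the "> 1/5 of the reported errors" threshold come from the slide; a crude isotropic point-source amplitude stands in here for the Okada forward model that the utility scripts imply (the stress drop would set the fault dimensions in that fuller calculation), and the numerical values are examples.

Mw  = 7.2;                        % CMT magnitude (example)
M0  = 10^(1.5*Mw + 9.05);         % seismic moment, N*m
mu  = 3e10;                       % shear modulus, Pa
r   = 80e3;                       % site-to-source distance, m (example)
sig = 0.002;                      % reported 1-sigma error, m (example)

% Order-of-magnitude far-field static offset for a point source
u_pred = M0 / (4*pi*mu*r^2);

% Flag the site/earthquake pair if the predicted offset could be visible
flag_coseismic = u_pred > sig/5;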
Cull sites
• < 1 year of data
• Too little data in the recent testing period (~2 weeks)
• Too few neighbors (a predefined number within a predefined distance in km)
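A sketch of the three culling criteria on a tiny synthetic network. The thresholds marked "assumed" are placeholders; the slide only says "predefined # within # km".

% Tiny synthetic network so the sketch runs stand-alone
t_eval = datenum('2010-01-15');                 % end of the testing period
site = struct('lat', {33.30, 33.50, 33.40}, ...
              'lon', {-115.50, -115.60, -115.40}, ...
              't',   {t_eval-(400:-1:0)', t_eval-(100:-1:0)', t_eval-(500:-1:0)'});

min_span_days   = 365;   % at least one year of data
recent_days     = 14;    % "recent" testing period (~2 weeks)
min_recent_obs  = 5;     % assumed: minimum epochs inside that window
min_neighbors   = 2;     % assumed: required neighbors ...
max_nbr_dist_km = 50;    % assumed: ... within this distance

lats = [site.lat];  lons = [site.lon];
keep = true(1, numel(site));
for k = 1:numel(site)
    t = site(k).t;
    span_ok   = (max(t) - min(t)) >= min_span_days;
    recent_ok = sum(t >= t_eval - recent_days) >= min_recent_obs;
    % flat-earth distances to the other sites, km
    d_km   = hypot((lats - lats(k))*111.19, (lons - lons(k))*111.19*cosd(lats(k)));
    nbr_ok = sum(d_km > 0 & d_km <= max_nbr_dist_km) >= min_neighbors;
    keep(k) = span_ok && recent_ok && nbr_ok;
end
site = site(keep);          % here, the station with only 100 days of data is culled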
Fit rates/EQ for the full series
• Fit for an offset and 2 logarithmic decays per earthquake
• Problems with closely spaced inter-EQ times
  • Now: iterate over earthquakes
  • Next: fit all earthquakes simultaneously
• Reduces detections at Sierra El Mayor sites, but does a bad job at Parkfield
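A sketch of the per-earthquake fit: secular rate, a coseismic offset, and two logarithmic decays with fixed time constants, solved by linear least squares. The decay time constants and the synthetic data are assumptions; the slide only says "offset and 2 logarithmic decays".

% Synthetic single-component series: rate + coseismic offset + one log decay
t    = (datenum('2010-01-01'):datenum('2010-12-31'))';
t_eq = datenum('2010-04-04');
d    = 0.010*(t - t(1))/365 + 0.030*(t >= t_eq) ...
       + 0.010*log(1 + max(t - t_eq, 0)/5) + 0.001*randn(size(t));

tau = [5 100];                          % assumed decay timescales, days
H   = double(t >= t_eq);                % step at the earthquake time
G   = [ones(size(t)), (t - t(1))/365, H, ...
       H .* log(1 + max(t - t_eq, 0)/tau(1)), ...
       H .* log(1 + max(t - t_eq, 0)/tau(2))];

m     = G \ d;                          % least-squares coefficients
resid = d - G*m;                        % residuals the transient search then uses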
Identify sites that diverge from the long-term trend in "recent" time
• Identify "triggered" sites
• Only save detections where a required number of spatial neighbors also trigger
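A sketch of the spatial-coherence filter on the triggered sites. The neighbor count, distance, and the synthetic trigger pattern are assumptions; the slide only says "trigger at # of spatial neighbors".

% Synthetic example: three clustered triggered sites plus one isolated one
lats = [33.30 33.35 33.40 34.20]';
lons = [-115.50 -115.55 -115.45 -116.80]';
trig = true(4,1);                        % sites flagged by the divergence test

min_trig_neighbors = 2;                  % assumed
max_nbr_dist_km    = 50;                 % assumed

keep_detection = false(size(trig));
for k = find(trig)'
    d_km = hypot((lats - lats(k))*111.19, (lons - lons(k))*111.19*cosd(lats(k)));
    nbr  = d_km > 0 & d_km <= max_nbr_dist_km;
    keep_detection(k) = sum(trig(nbr)) >= min_trig_neighbors;
end
% here keep_detection = [1 1 1 0]': the isolated trigger is discarded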
Conclusions
• Postseismic transients continue to be an issue
  • They are of interest in themselves
  • 1: Crop out regions affected by postseismic deformation (bad!)
  • 2: Model the functional form of the postseismic signal (one or more timescales) and remove it before fitting
  • 3: Forward model the expected postseismic signal based on a slip distribution and crustal constitutive laws
• Manpower/human input into detections
• Assessment?