
first a digression / The POC / Ranking the Methods


Presentation Transcript


  1. first a digression / The POC / Ranking the Methods. Jennie Watson-Lamprey, October 29, 2007

  2. A Digression

  3. Top 20 MIDR Values

  4. Top 20 MIDR Values • Not a lot of overlap. • We don’t know going into the problem which ground motions will produce high responses and which will produce low ones.

  5. Top 20 MIDR Values • If we’re after the median EDP, we use a GMSM method to get rid of the outliers • If we’re after the distribution of EDP, we don’t want to throw away the outliers.

  6. Top 20 MIDR Values • The point isn’t to identify the outliers. • The point is to figure out which GMSM method identifies them for you.

  7. The POC

  8. Calculating the Point of Comparison • Time series from a bin of M, R • Should work for a median • Too much variability! • Time series from a bin of M, R corrected for the difference between the recorded event and the design event • Not enough records to push the structure into the nonlinear range, so this is not a good estimator of rare response values

  9. Calculating the Point of Comparison • Current Method: • Run scaled and unscaled time series through a structural model • Perform a regression on a response parameter using time series properties (spectral acceleration) • Use predictive equations to define the joint distribution of the time series properties • Integrate the regression over the joint distribution • This gives a distribution of a response parameter
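As an illustration of the current method, here is a minimal Python sketch under assumed inputs: `ln_sa` (ln Sa at two periods for every scaled and unscaled run) and `ln_midr` (the corresponding structural-model responses) are hypothetical arrays, and the GMPE medians, sigmas, and inter-period correlation are placeholder numbers, not values from the study. Monte Carlo sampling stands in for the numerical integration over the joint distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_edp_model(ln_sa, ln_midr):
    """Least-squares fit of ln(MIDR) = b0 + b1*ln Sa(T1) + b2*ln Sa(2T1) + eps."""
    X = np.column_stack([np.ones(len(ln_sa)), ln_sa])
    coeffs, *_ = np.linalg.lstsq(X, ln_midr, rcond=None)
    resid = ln_midr - X @ coeffs
    sigma_eps = resid.std(ddof=X.shape[1])          # residual standard deviation
    return coeffs, sigma_eps

# Placeholder predictive-equation output for the design event (assumed values):
mu_ln_sa = np.array([-1.0, -1.8])       # median ln Sa(T1), ln Sa(2T1)
sig_ln_sa = np.array([0.60, 0.65])      # GMPE standard deviations
rho = 0.75                              # inter-period correlation
cov = np.array([[sig_ln_sa[0]**2,                rho*sig_ln_sa[0]*sig_ln_sa[1]],
                [rho*sig_ln_sa[0]*sig_ln_sa[1],  sig_ln_sa[1]**2]])

def midr_distribution(coeffs, sigma_eps, n=100_000):
    """Integrate the regression over the joint distribution by Monte Carlo."""
    sa = rng.multivariate_normal(mu_ln_sa, cov, size=n)
    ln_midr = (np.column_stack([np.ones(n), sa]) @ coeffs
               + rng.normal(0.0, sigma_eps, n))
    return np.exp(ln_midr)              # samples of the response-parameter distribution
```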

  10. Method for Estimating Point of Comparison • A suite of records from Mw 6.75-7.25, Rrup 0-20 km events was developed. A total of 98 records were distributed to the group on June 26. • The suite is run through each model using scale factors of 1, 2, 4 and 8. • A model of the desired structural response parameter using properties of the input time series (e.g. Sa(T1), Sa(2T1), duration, etc.) is developed.

  11. Method for Estimating Point of Comparison • The EDP model is checked to ensure that there is no bias with scale factor. This is only a test for the limited M, R range represented by the 98 selected recordings. • Models for the record properties that affect response are developed using the full PEER database and the correlations between properties. • Combining the record-property models from the previous step gives the joint pdf of record properties. • Using the joint pdf of record properties and the model for building response based on those record properties, the pdf of structural response for a M7, Rrup = 10 km earthquake is calculated.
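The scale-factor bias check in the first bullet can be sketched as a simple trend test; `resid` and `scale` are hypothetical arrays (one EDP-model residual and one scale factor per run), and a significance test on the slope is only one plausible way to formalize "no bias," not necessarily the test used by the group.

```python
import numpy as np
from scipy import stats

def check_scale_bias(resid, scale, alpha=0.05):
    """True if residuals show no significant trend with ln(scale factor)."""
    slope, intercept, r, p_value, stderr = stats.linregress(np.log(scale), resid)
    print(f"slope = {slope:.3f} +/- {stderr:.3f}, p = {p_value:.3f}")
    return p_value > alpha    # no detectable scale-factor bias at the chosen level
```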

  12. Building A - Results from 98

  13. Building A - Regression • Perform a regression to determine the probability of collapse (here it is zero). • Perform a regression to determine the median and variability of MIDR. • The regression is based on spectral values at a number of periods.
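A hedged sketch of the collapse part of this regression, assuming per-run collapse flags and ln Sa values at several periods are available as arrays; logistic regression is a common choice for a collapse model but is an assumption here. For Building A no collapses occurred, so the fitted probability is simply zero.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_collapse_model(ln_sa_periods, collapsed):
    """P(collapse) as a function of ln Sa at several periods (hypothetical inputs)."""
    if collapsed.sum() == 0:          # Building A case: no collapses observed
        return None
    return LogisticRegression().fit(ln_sa_periods, collapsed)

def collapse_probability(model, ln_sa_new):
    """Predicted collapse probability; zero when no collapse model could be fit."""
    return 0.0 if model is None else model.predict_proba(ln_sa_new)[:, 1]
```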

  14. Building A - Residuals

  15. Building A - Integration

  16. Building A - Integration

  17. Building A - Integration Really

  18. Building A - CDF

  19. Building A - POC

  20. POC Questions?

  21. Ranking the Methods

  22. Ranking of Methods • Focus on suites of 7 ground motions • 10 contributors provided 4 suites of 7 ground motions for buildings A, C and D. • Develop an estimated distribution of predictions • Use the estimated distribution and the goal of the analysis to develop a statistic for ranking
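One possible reading of "develop an estimated distribution of predictions," sketched under assumptions: each suite of 7 records yields a single MIDR estimate (here taken as the geometric mean of the 7 runs), and the estimates from all suites for a given method and building are summarized by a lognormal. Both choices are illustrative, not the group's definition.

```python
import numpy as np

def suite_estimates(midr_by_suite):
    """midr_by_suite: list of length-7 MIDR arrays, one per suite of ground motions."""
    return np.array([np.exp(np.mean(np.log(s))) for s in midr_by_suite])

def prediction_distribution(estimates):
    """Summarize a method's suite estimates as lognormal parameters (mean, std of logs)."""
    ln_est = np.log(estimates)
    return ln_est.mean(), ln_est.std(ddof=1)
```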

  23. GMSM Methods

  24. Compiling Results for Each Method and Building

  25. Compiling All Results by Method and Building

  26. Compiling All Results by Method and Building

  27. Consolidating Results for Each Method

  28. Distribution of Predictions

  29. Ranking Statistic • Goal: Accurate estimate of MIDR | M, R, Sa • Statistic: P(-0.1 < X < 0.1)

  30. Accurate Estimate

  31. Accurate Estimate Ranking

  32. Ranking Statistic • Goal: Minimize under-estimation of MIDR | M, R, Sa • Statistic: P(X > 0)
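Both ranking statistics can be evaluated from the estimated distribution of predictions. The sketch below assumes X is the normalized difference between a method's prediction and the POC, and that X is approximately normal with parameters mu_x and sigma_x estimated from the suites of 7; both the definition of X and the normal approximation are assumptions for illustration.

```python
from scipy.stats import norm

def accuracy_statistic(mu_x, sigma_x, tol=0.1):
    """P(-tol < X < tol): probability the prediction falls within tol of the POC."""
    return norm.cdf(tol, loc=mu_x, scale=sigma_x) - norm.cdf(-tol, loc=mu_x, scale=sigma_x)

def no_underestimation_statistic(mu_x, sigma_x):
    """P(X > 0): probability the method does not under-estimate the POC response."""
    return norm.sf(0.0, loc=mu_x, scale=sigma_x)
```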

  33. Minimize Under-Estimation

  34. Minimize Under-Estimation

  35. Ranking of Methods • Ranking is dependent on analysis goal.

  36. CMS Thoughts

  37. CMS Thoughts

  38. CMS Thoughts
