A FEW THOUGHTS on MONITORING STUDIES for COLORADO RIVER in THE GRAND CANYON
N. Scott Urquhart, STARMAP Program Director, Department of Statistics, Colorado State University
MY PLAN versus REALITY
• I had given a talk to GCMRC in Flagstaff entitled "DESIGN and POWER of VEGETATION MONITORING STUDIES for THE RIPARIAN ZONE NEAR THE COLORADO RIVER in THE GRAND CANYON"
MY PLAN versus REALITY, continued
• I agreed to give a management-oriented version of that technical talk here.
• Interrupting my vacation!
• That talk was about 85% done when, late Tuesday night, I received a message from Dennis Kubly encouraging me to address a VERY different set of questions.
• This response cannot be well prepared, given my lack of opportunity to prepare it. -- Sorry
QUESTIONS DENNIS ASKED (abbreviated versions)
(1) How do the roles of scientists and managers compare in adaptive management programs?
(2) Prioritizing monitoring and research: what compromises result from a declining budget and conflicting views?
(3) Evaluating the utility of gathered information: what are the measures of worth to managers of information gathered, analyzed, and interpreted by scientists?
(4) Does some information have higher value than other information?
(5) What are the tradeoffs between sampling designs that allow extrapolation to the entire Grand Canyon and those that do not? When is each most appropriate?
(6) Risk assessment: what are the potential effects of reduced sampling intensity and the consequent lower levels of detection of resource change necessitated by funding or logistical limitations?
(7) When should we worry about Type II errors more than Type I errors? How do they differ?
MONITORING IS NOT RESEARCH (impacts on answers to Questions 1-4)
• Research concerns how and why things happen.
  • It may need to be temporally intensive.
• Monitoring concerns "What has happened?"
• Major differences: measures and temporal intensity.
  • Example: Mike Kearsley's vegetation index versus detailed stem counts.
• Adaptive management may require some of both.
  • But managers need to look critically at the need for research.
  • Specifically: its linkage to manageable actions.
QUESTION 5: What are the tradeoffs between sampling designs that allow extrapolation to the entire Grand Canyon and those that do not? When is each most appropriate?
• When you need to make a statement about an entire area, sample it.
• Biological investigators know far less about where various resources reside than they think they do. All kinds of things turn up where they "aren't supposed to be."
• Model development: targeted site selection is appropriate, even necessary.
  • Pick a gradient of sites that will support estimation of model components (the two strategies are contrasted in the sketch below).
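The following is a minimal sketch, not from the talk, contrasting the two site-selection strategies raised in Question 5: a probability sample of river miles, whose known inclusion probabilities support design-based extrapolation to the whole canyon, versus a targeted gradient sample suited to model fitting. The river-mile frame, spacing, and targeted sites are invented purely for illustration.

```python
# Sketch: probability sampling vs. targeted (gradient) site selection.
# All numbers below are hypothetical illustrative choices.
import numpy as np

rng = np.random.default_rng(7)
river_miles = np.arange(0, 226)          # hypothetical sampling frame: every river mile

# Design 1: systematic probability sample with a random start.
# Every mile has a known inclusion probability, so summaries extrapolate
# (design-based inference) to the entire canyon.
start = int(rng.integers(0, 15))
probability_sample = river_miles[start::15]

# Design 2: targeted selection spanning a gradient of interest.
# Good for estimating model components, but it supports no design-based
# extrapolation beyond the chosen sites.
targeted_sample = np.array([3, 30, 61, 88, 120, 166, 205])

print("Probability sample of river miles:", probability_sample)
print("Targeted (gradient) sample:       ", targeted_sample)
```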
QUESTION 7: When should we worry about Type II errors more than Type I errors? How do they differ?
• Illustration: trial of an accused murderer.
  • Ho: the accused is innocent of murder.
  • Ha: the accused is the murderer.
QUESTION 7, continued
• Type I error = convict an innocent person.
• Type II error = free a murderer.
• Which is worse? It depends on your perspective.
  • From the view of the accused:
    • Type I error: I get jailed when I shouldn't be!
    • Type II error: I beat the system!
  • From the view of society:
    • Type I error: a regrettable consequence of protecting society.
    • Type II error: a travesty! The criminal is free to assault society again!
(A small simulation sketch of the two error rates follows.)
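As a minimal sketch (not part of the original talk), the simulation below estimates the Type I error rate and the power of a one-sample t-test by repeated sampling; the effect size, sample size, and alpha are arbitrary illustrative choices.

```python
# Sketch: simulated Type I error rate and power (1 - Type II error rate)
# for a one-sample t-test of Ho: mean = 0. All settings are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, alpha, n_sims = 20, 0.05, 5_000

def rejection_rate(true_mean):
    """Fraction of simulated studies that reject Ho: mean = 0 at level alpha."""
    rejections = 0
    for _ in range(n_sims):
        sample = rng.normal(loc=true_mean, scale=1.0, size=n)
        _, p = stats.ttest_1samp(sample, popmean=0.0)
        rejections += p < alpha
    return rejections / n_sims

type_1_rate = rejection_rate(true_mean=0.0)   # Ho true: any rejection is a Type I error
power = rejection_rate(true_mean=0.5)         # Ha true: failing to reject is a Type II error
print(f"Type I error rate ~ {type_1_rate:.3f} (target alpha = {alpha})")
print(f"Power ~ {power:.3f}, so Type II error rate ~ {1 - power:.3f}")
```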
AN ALTERNATIVE TO FIXED-LEVEL TESTING
• Use the test statistic (like the t-value) to determine the marginal α (the p-value) at which the test would just barely be rejected, then
• let the humans making the decision incorporate that result (a short sketch follows).
• FACT: virtually every study has real limitations or failures of assumptions.
• Most of the time investigators recognize these, and usually they can determine how these failures would influence the outcome.
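The short sketch below, not from the talk, computes that marginal α from a t statistic rather than forcing a fixed-level accept/reject decision; the t-value and degrees of freedom shown are hypothetical.

```python
# Sketch: report the marginal alpha (p-value) at which a two-sided t-test
# would just barely reject, instead of a fixed-level accept/reject verdict.
# The test statistic and degrees of freedom are hypothetical.
from scipy import stats

t_value, df = 2.1, 18                        # hypothetical t statistic and df
p_value = 2 * stats.t.sf(abs(t_value), df)   # two-sided marginal alpha
print(f"The test would be rejected at any fixed alpha >= {p_value:.3f}")
```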
QUESTIONS DENNIS ASKED (full versions)
(1) What are the different roles and functions of scientists and managers in adaptive management programs? Where are they different and where do they merge? This distinction may seem trivial and intuitive, but our experience working with program members suggests it is not.
(2) What are some relevant criteria for prioritizing monitoring and research in the face of a declining budget and conflicting views on management objectives?
(3) How is the utility of information gathered by research and monitoring to be evaluated, i.e., what are the measures of worth to managers of information gathered, analyzed, and interpreted by scientists?
(4) Are managers correct if they attribute a higher value to information gathered during core monitoring than to that gathered in research projects and experiments, i.e., is there an easy way to assess which is more or less important in any given situation? What is the relative value of the two approaches in assigning cause-and-effect relationships to changes in resource status?
(5) What are the tradeoffs between sampling designs that allow extrapolation to the entire Grand Canyon and those that do not? When is each most appropriate?
(6) How should managers assess the risk created by reduced sampling intensity and the consequent lower levels of detection of resource change necessitated by funding or logistical limitations?
(7) When should we worry about Type II errors more than Type I errors? How do they differ?