2. 250 randomly selected watersheds
Watersheds must contain > 25 % federal ownership
Based on in-channel, upslope, and riparian attributes.
55 watersheds done (instead of 100)
Pilot tests of the monitoring plan were conducted in 2000 and 2001 to test sampling protocols and to determine the funding level and crew structure needed to implement the plan. Implementation began in 2002, although at half the necessary funding level. As of fall 2003, 55 watersheds have been sampled. For the evaluation of the Forest Plan, status and trend data for the 250 watersheds are available only for vegetation and roads; in-channel data are available for status determinations in 55 watersheds.
3. AREMP Sample Design Slide should show current rate of sampling (?).
This is a mix of the theoretical HUCs to be sampled combined with the reality of what is actually sampled within each HUC.
Objective: evaluate the condition of watersheds across the domain of the NWFP.
AREMP samples 6th-field watersheds and characterizes HUCs using riparian, upslope, and in-channel attributes, evaluated with decision support models (DSMs).
A probabilistic sample was used to select HUCs, and stream reaches within them, to represent the channel network. Reach scores are averaged to represent the stream condition of each HUC (a sketch of this roll-up follows these notes). The whole point is to evaluate the distribution of HUC condition through time.
AREMP has not been able to meet our target of 50 HUCs a year because of funding constraints.
$1.2 million samples 25 HUCs (the sample design calls for 50 HUCs) using current protocols and attributes.
AREMP recognizes that we need to get to 50 HUCs/year (or explain the consequences of not doing so).
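As a concrete illustration of that roll-up, here is a minimal sketch with hypothetical reach scores (illustrative values only, not AREMP data): reach-level scores are averaged to one score per HUC, and the distribution of those HUC scores is the quantity tracked through time.

```python
import statistics

# Hypothetical reach scores on a -1 (poor) to +1 (good) scale, with a few
# sampled reaches per 6th-field HUC. Values are illustrative only.
reach_scores = {
    "HUC_A": [0.4, 0.1, -0.2],
    "HUC_B": [-0.6, -0.3],
    "HUC_C": [0.8, 0.7, 0.9, 0.6],
}

# Average the reach scores to represent the stream condition of each HUC.
huc_scores = {huc: statistics.mean(scores) for huc, scores in reach_scores.items()}

# The object of interest is the distribution of HUC condition; revisiting the
# same probabilistic sample over time shows whether that distribution shifts.
print(sorted(round(s, 2) for s in huc_scores.values()))
print("median HUC condition:", round(statistics.median(huc_scores.values()), 2))
```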
4. Field data collected
Channel morphology: bankfull width:depth ratio, sinuosity, gradient, entrenchment ratio
Habitat characteristics: wood and pool frequency, residual pool depth, substrate
Biological characteristics: fish, amphibians, benthic invertebrates, periphyton
Upslope and riparian attributes are collected at the watershed level using GIS; in-channel attributes are collected at the reach scale by field crews (a schematic sketch of these two scales follows).
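A minimal sketch of how the two scales of data fit together, using hypothetical field names rather than the actual AREMP schema: in-channel attributes are recorded per reach by field crews, while upslope and riparian attributes are derived per watershed from GIS layers.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ReachSurvey:
    """Collected in the field, one record per reach (hypothetical fields)."""
    bankfull_width_depth: float
    sinuosity: float
    gradient: float
    entrenchment_ratio: float
    wood_frequency: float
    pool_frequency: float
    residual_pool_depth: float

@dataclass
class WatershedAttributes:
    """Derived from GIS, one record per 6th-field HUC (hypothetical fields)."""
    road_density_km_per_km2: float
    road_stream_crossings: int
    riparian_vegetation_score: float
    reaches: List[ReachSurvey]
```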
5. Upslope and Riparian Vegetation Use the vegetation layer developed by the Interagency Vegetation Mapping Project in OR and WA and by CalVeg in CA.
Oregon done by end of 2008
Washington done by end of 2010
6. Upslope and Riparian Roads Measure road density and the number of road-stream crossings (a simple density calculation is sketched below). The roads layer is incomplete.
A 1:24,000 stream layer is the desired end result.
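A minimal sketch of the road-density metric named above, with made-up numbers rather than AREMP's actual GIS workflow; the road-stream crossing metric is simply a count of intersections between the roads layer and the stream layer, so it is not shown.

```python
# Minimal sketch (hypothetical values, not AREMP's GIS workflow).

def road_density(total_road_length_km: float, watershed_area_km2: float) -> float:
    """Road density in kilometres of road per square kilometre of watershed."""
    return total_road_length_km / watershed_area_km2

# Example for a single, made-up 6th-field watershed.
print(road_density(total_road_length_km=85.0, watershed_area_km2=42.0))  # ~2.02 km/km2
```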
7. Over 70 specialists participated in the development of 5 aquatic province decision support models.
9. Evaluation Curves
Represented by a curve such as this one.
There are many different curve shapes. I have chosen this one because of its simplicity. This is referred to as a two-point curve.
First click – The evaluation criteria bound the range of data values in which change in the evaluation score takes place.
What’s an evaluation score? Good question…
Second click – Data values that fall outside the evaluation criteria, in this case below the minimum value, are given an evaluation score of 1.
Third click – Data values that fall between the evaluation criteria are assigned evaluation scores that range between 1 and -1. The actual evaluation score depends on where the data fall between the evaluation criteria (a sketch of this mapping follows these notes).
Fourth click – Data values that fall outside the evaluation criteria, in this case above the maximum value, are given an evaluation score of -1.
Next slide…
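Read literally, these notes describe a piecewise-linear mapping. Here is a minimal sketch of such a two-point curve, assuming the decreasing case described above (values below the minimum criterion score +1, values above the maximum score -1) and hypothetical criteria; it is an illustration, not the EMDS implementation.

```python
def two_point_score(value: float, criterion_min: float, criterion_max: float) -> float:
    """Two-point evaluation curve as described in the notes: values at or below
    the minimum criterion score +1, values at or above the maximum criterion
    score -1, and values in between are interpolated linearly."""
    if value <= criterion_min:
        return 1.0
    if value >= criterion_max:
        return -1.0
    fraction = (value - criterion_min) / (criterion_max - criterion_min)
    return 1.0 - 2.0 * fraction

# Hypothetical example: an attribute where lower is better (say, road density
# in km/km2), with evaluation criteria of 1.0 and 3.0.
for density in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(density, round(two_point_score(density, 1.0, 3.0), 2))
# 0.5 -> 1.0, 2.0 -> 0.0, 4.0 -> -1.0
```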
10. Types & Sources of Evaluation Curves
11. Expert Workshops We used EMDS as an expert system, that is, to record and systematize the knowledge of experts.
Because of the variability of the NWFP area, we divided it into 7 aquatic provinces.
A separate model was constructed for each province.
Over 70 specialists participated in the development of the 7 aquatic province decision support models.
7 aquatic provinces
4 first-round workshops
6 follow-up workshops
12. Aquatic Conservation Strategy The goal is to maintain or improve the condition of watersheds.
The strategy does not describe the baseline distribution, nor does it identify a “desired” distribution.
We infer that the distribution should move toward improved condition.
13. This is a unique case where we looked at “all” the 6th-field watersheds on the Olympic NF and within the Olympic NP. The report will focus on the 250 randomly chosen watersheds. We will eventually be able to assess watershed condition – based on roads and vegetation – for all watersheds as data become available.
15. Decision support models can be applied to any spatial or temporal scale.
Incomplete datasets are the norm and can be evaluated with decision support models (one way of handling missing attributes is sketched after these notes).
The model structure influences the sensitivity of the model to change.
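Here is a minimal sketch, not the EMDS implementation, of one way a decision support model can tolerate an incomplete dataset: score whatever attributes are present, average them, and report how much of the evidence was available. The attribute names and criteria are hypothetical.

```python
import math
from typing import Tuple

# Two-point evaluation criteria (min, max) per attribute; lower raw values are
# assumed to score better, matching the curve sketched earlier. Hypothetical.
CRITERIA = {
    "road_density": (1.0, 3.0),
    "fine_sediment_pct": (10.0, 30.0),
    "pool_frequency_deficit": (0.0, 0.5),
}

def two_point_score(value: float, lo: float, hi: float) -> float:
    if value <= lo:
        return 1.0
    if value >= hi:
        return -1.0
    return 1.0 - 2.0 * (value - lo) / (hi - lo)

def evaluate(watershed: dict) -> Tuple[float, float]:
    """Return (mean score over available attributes, fraction of data present)."""
    scores = [two_point_score(watershed[name], *CRITERIA[name])
              for name in CRITERIA if watershed.get(name) is not None]
    if not scores:
        return math.nan, 0.0
    return sum(scores) / len(scores), len(scores) / len(CRITERIA)

# A watershed missing its in-channel data still gets a (less certain) score.
print(evaluate({"road_density": 2.5, "fine_sediment_pct": None}))
```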
16. Balancing sampling efforts with limited $$ We have some funding “sideboards.”
$1.0 million samples 25 HUCs.
17. How we’re responding to declining $$ Reduce field effort:
reduce the number of attributes being collected
simplify equipment
Reduce staffing
Reduce sites (?)
More emphasis on processes
Share data
Regardless of sample design, we are also looking for efficiencies in our ongoing programs. The lessons we learn will be applied to the AREMP-PIBO merger.
18. What’s it take to share data?
Common protocols
Use a probabilistic sample design
Common GIS layers
Three basic principles for sharing data.
19. This is an example of sample sites identified by EPA for monitoring by ODFW, OR DEQ, watershed councils, AREMP, and tribes over the past decade.
More and more groups are starting to use a design developed by EPA that ensures a uniform random distribution of sample points.
A single, up-front set of sample locations encourages sharing of data (and common protocols).
We can show others the linkage between proposed and ongoing monitoring.
We don’t have to keep developing new sample designs.
20. Pacific Northwest Aquatic Monitoring Partnership protocol comparison Identify the agencies shown on this slide.
An initial assessment suggested that several attributes were measured using the same protocols.
Everyone has since “improved” the common protocols, so that now very few groups measure them the same way.
21. Pacific Northwest Aquatic Monitoring Partnership protocol comparison We will try to develop “crosswalks” to allow use of data collected under the old protocols, but it may be that some (or much) of the old data will become unusable.
22. Instead of arguing about which agency’s methods are best, we decided to do a side-by-side protocol comparison test for a core set of in-channel attributes.
The test will occur in the John Day basin during 2005; 10 agencies and tribes are participating, as well as the Rocky Mountain Research Station (which will do intensive surveys to determine “truth”).
It will result in a recommendation for the “best” protocols for determining status and trend.
Thanks to the funding agencies shown at the bottom.
23. www.reo.gov/monitoring/ Program overview
Annual Reports
Draft 10-year assessment of watershed condition