Lecture 7 review
• Physical habitat variables are poor predictors of distributions, even of resting fish (trophic factors like predation risk are at least as important)
• Ontogenetic changes in habitat use are critical in assessing "essential fish habitat"
• Always use likelihood functions for parameter estimation (efficient, combine data, allow use of prior information)
• Never use information-theoretic criteria (AIC, BIC, etc.) to compare alternative models for estimating policy parameters
Lecture 8: estimation of absolute abundance in fish populations
• First ask whether you need to estimate abundance at all, given that successful management requires regulation of exploitation rate, which can often be measured and controlled without ever knowing stock size
• When you need to know stock size
• Options for obtaining stock size estimates
When do you need to know stock size in the first place?
• When you are forced to manage by "output controls" (TACs, quotas, ITQs) and must set those output levels, or manage by "proven production potential", PPP = F x (proven stock)
• When there is a legal requirement to do so (e.g. the Endangered Species Act)
• When there is no other way to estimate exploitation rate except U = Catch/Stock
• When you are providing advice on the potential size of a new fishery, for economic planning purposes
Options for estimating total stock size
• Direct census (visual, acoustic, etc.)
• Density expansion (time, area)
• Change-in-index methods (depletion, ratio)
• C/U methods (Gulland's old trick)
• Pcap methods using marked animals
• Bt/Bo methods using stock assessment models that estimate Bo as a leading parameter
Direct census (count them all)
• Used mainly where the stock is extremely concentrated, e.g. migrating salmon or herring spawning aggregations
• Typically involves a "visibility" or proportion-seen conversion factor (e.g. acoustic target count to numbers, egg count to spawner count using eggs/spawner)
• Can most often be replaced with a much cheaper density expansion method
Density expansion methods
• Decompose the estimate into two problems: Stock size = (number per area) x (total area)
• In sampling theory, this means defining a way to estimate mean density, and carefully defining the "sampling universe" or "sampling frame" to which the densities are assumed to apply
• DO NOT begin design of a density expansion method with the usual biologist's whining about how variable the sample densities are likely to be, calculating sample sizes accordingly
• Really big failures in density expansion methods (and sampling designs in general) are usually due to lack of care in the initial definition and careful description of the sampling universe; habitat mapping and habitat-association analysis are typically critical here (see the sketch below)
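A minimal sketch of the density-expansion arithmetic, assuming made-up strata, areas, and sample densities: total abundance is built stratum by stratum from mean density times stratum area, not from a single pooled mean.

```python
# Stratified density expansion: N = sum over strata of (mean density x stratum area).
# All numbers here are hypothetical illustrations, not real survey data.

strata = {
    # stratum name: (total area in km^2, list of sampled densities in fish/km^2)
    "shallow_mud":  (120.0, [4.1, 2.8, 5.3, 3.9]),
    "shallow_rock": (45.0,  [11.0, 9.4, 14.2]),
    "deep":         (300.0, [0.6, 0.0, 1.1, 0.4, 0.7]),
}

N_total = 0.0
for name, (area, densities) in strata.items():
    mean_density = sum(densities) / len(densities)  # mean fish/km^2 in this stratum
    N_stratum = mean_density * area                 # expand density to the whole stratum
    N_total += N_stratum
    print(f"{name}: mean density {mean_density:.2f}, N = {N_stratum:.0f}")

print(f"Total stock size estimate: {N_total:.0f}")
```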
Designing effective sampling programs
• DEFINE THE UNIVERSE OF POSSIBLE SAMPLES, NOT EXPECTING INTELLIGENT ADVICE FROM STATISTICIANS (WHO KNOW LITTLE ABOUT THE REAL UNIVERSE)
• STRATIFY the sampling universe (classify every possible sample unit), being careful to identify units that you know beforehand are almost certain to have zero abundance and those where your sampling gear won't work properly (you still must estimate mean density for such units, or else be content with a minimum N estimate)
• Consider your options for sample unit choice within each stratum: random vs. systematic, use of spatial statistical methods to interpolate unsampled densities
The sampling universe
• N is the sum over all units i of the abundance ni in each unit: N = Σ ni
• N is also the mean ni times the number of sampling units in the universe
[Figure: grid of sampling units; each little box is a sampling unit with abundance ni]
The sampling universe
• To estimate N, remember that you must somehow assign an abundance ni to EVERY unit i, whether or not you could or did sample it
• Your options include:
   • Assign the mean of the sampled ni to all unsampled units (assume your units are a random sample)
   • Sample units at regular spacing (grid) so as to uncover any spatial structure that may be present (take a systematic sample, whose mean will have lower variance than a random sample's if and only if there is large-scale structure)
   • Assume structure in how the ni vary over space (or time), and try to estimate that structure (assign ni values to unsampled i) using spatial statistics models (see the sketch below)
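A toy sketch of the "assign ni to every unit" bookkeeping, assuming a hypothetical one-dimensional transect of units: unsampled units are filled either with the sample mean or by nearest-neighbor interpolation (a crude stand-in for real spatial statistics models).

```python
# Assigning an abundance n_i to EVERY unit in the sampling universe.
# Units with None were never sampled; all numbers are hypothetical.

observed = [5.0, None, None, 8.0, None, 2.0, None, None, 1.0, None]

# Option 1: fill unsampled units with the mean of the sampled units
# (this is what you implicitly assume with a simple random-sample expansion).
sampled = [n for n in observed if n is not None]
mean_fill = [n if n is not None else sum(sampled) / len(sampled) for n in observed]

# Option 2: fill each unsampled unit from its nearest sampled neighbor
# (a crude one-dimensional stand-in for spatial interpolation models).
sampled_idx = [i for i, n in enumerate(observed) if n is not None]
nn_fill = [
    n if n is not None else observed[min(sampled_idx, key=lambda j: abs(j - i))]
    for i, n in enumerate(observed)
]

print("N (mean fill):            ", sum(mean_fill))
print("N (nearest-neighbor fill):", sum(nn_fill))
```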
What to do when the ni are estimated by fishers or biologists who don't know how to design spatial sampling programs (catch rates, expanded by estimates of area swept by each unit of effort)
• You still have to assign an ni to every sampling unit in the universe
• It is foolish to assume that fishers have sampled the units at random (but that is what you are assuming when you use average cpue)
• Filling in the missing ni is called the "folly and fantasy" problem; options for doing it include:
   • Spatial statistics methods (FishMap demo)
   • Backfilling for each i using data from other times (see the sketch below)
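A minimal sketch of the backfilling idea, assuming a hypothetical units x years table of cpue-derived densities: a unit not fished this year borrows its own value from the most recent year in which it was fished.

```python
# Backfilling missing n_i values using data from other times.
# Rows are sampling units, columns are years; None = unit not fished that year.
# All values are hypothetical.

density_by_year = {
    "unit_A": [3.2, None, 2.9, None],
    "unit_B": [None, 7.5, None, None],
    "unit_C": [1.0, 1.2, None, 0.8],
}

def backfill(series):
    """Replace each None with the most recent earlier observation, if any."""
    filled, last = [], None
    for v in series:
        if v is not None:
            last = v
        filled.append(last)
    return filled

for unit, series in density_by_year.items():
    print(unit, backfill(series))
```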
Change-in-index methods
• Here you estimate N by examining how much an index yt = qNt changes when known removal(s) Ct are taken
• yt is usually either a relative abundance or a sex ratio
• Multiple relative abundance yt and catch Ct observations lead to the "Leslie" and "DeLury" depletion models
Leslie depletion model
• State and observation dynamics:
   • Closed population: Nt+1 = Nt - Ct, i.e. Nt = No - Kt, where Kt is cumulative catch taken before t
   • Linear observation process: yt = qNt
   • Combining state and observation models gives yt = qNt = q(No - Kt) = qNo - qKt (linear), i.e. get q from the slope of y vs. K, and No from the intercept
• Depletion estimates of No are typically:
   • Biased downward by about 50% due to change in q over time (q higher at first as you get the stupid ones)
   • Sensitive to the closure assumption (immigration/emigration cause upward bias in No)
   • Usually used to estimate local ni in larger-area studies
• Note: "removal" need not mean "kill"; can use in combination with mark-recapture experiments to get cross-validation (see the sketch below)
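A minimal sketch of the Leslie estimator with hypothetical catch and index data: regress the index yt on cumulative prior catch Kt, read q from the (negated) slope and No from intercept/q.

```python
# Leslie depletion estimate: y_t = q*No - q*K_t, where K_t is cumulative
# catch taken BEFORE period t. Ordinary least squares on (K_t, y_t) gives
# slope = -q and intercept = q*No. Data below are hypothetical.

catches = [100, 90, 80, 70, 60]          # C_t, removals in each period
index   = [10.0, 8.9, 8.1, 7.2, 6.6]     # y_t, relative abundance index

# K_t = cumulative catch before period t (K_0 = 0)
K = [sum(catches[:t]) for t in range(len(catches))]

# Ordinary least squares, done by hand to keep the sketch dependency-free
n = len(K)
mean_K, mean_y = sum(K) / n, sum(index) / n
slope = sum((k - mean_K) * (y - mean_y) for k, y in zip(K, index)) / \
        sum((k - mean_K) ** 2 for k in K)
intercept = mean_y - slope * mean_K

q = -slope            # catchability
N0 = intercept / q    # initial abundance estimate

print(f"q = {q:.5f}, No = {N0:.0f}")
```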
C/u methods (exploited populations only)
• Get an estimate of total catch C, then assume C = uN and estimate N given u (three routes to u are sketched below)
• Methods for estimating u include:
   • Mark-recapture: mark M animals before C is taken, get the number r of these that are recovered during the fishery; u = r/M (N.B. a big M doesn't help if tag loss and tag reporting rates are unknown!)
   • Catch curves: get total mortality Z from the curve, natural mortality M from somewhere, then u ≈ F = Z - M
   • Swept area: measure effort E, area "a" swept by the average unit of effort, total area A, then u = aE/A
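A minimal sketch of the C/u bookkeeping with hypothetical numbers, showing the three routes to u from the slide feeding the same N = C/u expansion.

```python
# C/u estimation: N = C / u, with u obtained three different ways.
# All inputs are hypothetical illustrations.

C = 50_000.0  # total catch

# Route 1: mark-recapture exploitation rate, u = r/M
M_marked, r = 1_000, 240
u_mark = r / M_marked

# Route 2: catch curve, u approximated by F = Z - M_nat
Z, M_nat = 0.9, 0.6
u_catch_curve = Z - M_nat

# Route 3: swept area, u = aE/A
a, E, A = 0.02, 4_000, 500.0          # km^2 per tow, number of tows, total km^2
u_swept = a * E / A

for label, u in [("mark-recapture", u_mark),
                 ("catch curve", u_catch_curve),
                 ("swept area", u_swept)]:
    print(f"{label:15s} u = {u:.2f}  ->  N = {C / u:,.0f}")
```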
Mark-recapture experiments
• Mark M animals, recover n total animals of which r are marked ones
• The Pcap estimate is then r/M, and the total population estimate is N = n/Pcap = nM/r, i.e. you assume that n is the proportion Pcap of total N (see the sketch below)
• Critical rules for mark-recapture methods:
   • NEVER use the same method for both marking and recapture (marking always changes behavior)
   • Try to ensure the same probability of capture and recapture for all individuals in N (spread marking and recapture effort out over the population)
   • Watch out for tag loss/tag-induced mortality, especially with spaghetti tags (use PIT or CWT tags when possible)
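A minimal sketch of the Petersen calculation from the slide, with hypothetical numbers; the Chapman-corrected form (not on the slide) is included only because the raw nM/r estimate is biased when r is small.

```python
# Petersen mark-recapture estimate: N = n * M / r
# (Pcap = r/M; assume the total recovery n is the proportion Pcap of N).
# Numbers are hypothetical.

M = 500    # animals marked
n = 800    # total animals recovered later
r = 40     # marked animals among the recoveries

p_cap = r / M
N_petersen = n / p_cap              # = n * M / r

# Chapman's bias-corrected variant (an addition, not on the slide);
# it behaves better when r is small.
N_chapman = (M + 1) * (n + 1) / (r + 1) - 1

print(f"Pcap = {p_cap:.3f}")
print(f"Petersen N = {N_petersen:,.0f}")
print(f"Chapman  N = {N_chapman:,.0f}")
```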
Open population mark-recapture experiments (Jolly-Seber models)
• Mark Mi animals on several occasions i, assuming the number still alive declines as Mit = Mi x St, where St is the survival rate from marking to the t-th recapture occasion; recover rit animals from marking occasion i at each later occasion t
• Estimate the total marked animals at risk of capture at occasion t as TMt = Σ(over i<t) Mit, giving the Pcap estimate Pcapt = Σ(over i<t) rit / TMt
• The total population estimate Nt at occasion t is then just Nt = TNt/Pcapt, where TNt is the total catch at t
• Estimate recruitment as Rt = Nt - S x Nt-1, or use other more elaborate assumptions (see the sketch below)
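A toy sketch of the Jolly-Seber-style bookkeeping on the slide, with hypothetical release and recovery numbers and an assumed known, constant per-occasion survival S (a full Jolly-Seber fit would estimate S too).

```python
# Jolly-Seber-style abundance estimates, simplified per the slide:
# marks decay by an assumed per-occasion survival S, and Pcap_t pools
# recoveries over surviving marks. All numbers are hypothetical.

S = 0.8                      # assumed per-occasion survival
releases = [400, 300, 250]   # M_i, marks released at occasions 0, 1, 2
# r[i][t] = recoveries at occasion t of marks released at occasion i
r = {0: {1: 30, 2: 18},
     1: {2: 25}}
total_catch = {1: 900, 2: 850}   # TN_t, total catch at each occasion

N = {}
for t in (1, 2):
    # surviving marks at risk at occasion t: TM_t = sum_{i<t} M_i * S^(t-i)
    TM = sum(releases[i] * S ** (t - i) for i in range(t))
    recovered = sum(r[i].get(t, 0) for i in range(t))
    p_cap = recovered / TM
    N[t] = total_catch[t] / p_cap
    print(f"occasion {t}: TM = {TM:.0f}, Pcap = {p_cap:.3f}, N = {N[t]:,.0f}")

# recruitment between occasions: R_t = N_t - S * N_{t-1}
print(f"R_2 = {N[2] - S * N[1]:,.0f}")
```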
Integrated stock assessment models: depletion models with recruitment and mortality dynamics, multiple data types
• Here is what you don't want to happen: