On-Farm Demonstrations: Getting Started/Avoiding Pitfalls • John H. Grove • Plant and Soil Sciences Dept., University of Kentucky
On-Farm Research • Got the tools? (grain cart with load cell, yield monitor, GPS) • Got a question? • What about that new stuff? • Should I do that? • How would it work on my land? • Important decisions need good information • On-farm research can help
Organization • Setting Up A Valid Comparison • Replicating A Valid Comparison • Repeating A Valid Comparison • Using Regression/Correlation
Time and Management • Starts at planting, ends at harvest • Avoid lost time, plan well • Question --> Objective • Trying to pick a winner? • Trying to figure out why? • Avoid confounding --> stay focused • Don’t mix things together (seeding rates and varieties)
Setting the Objective • Describe the comparison you want to make • Treatments • Size of field area needed • Materials to be used • Measurements to be made • Simple comparison – no replication • No replication, no randomization • Valid comparison is all that is required
A Valid Comparison • As much as possible, everything should be the same • Uniform in space (field, soil type, previous crop, tillage, etc.) • Uniform in time (same season, planting date, harvest date, etc.) • Uniform in management (variety, fertilization, seeding rate, pest control, etc.)
A Valid Comparison • Avoids systematic bias in layout of treatments • What is systematic bias – layout favors one treatment over another • Go up and down the slope, not along the contour
A Particularly Rough Example • Predicting topsoil depth: Depth = 12 + 2.3Z + 29Kp − 8.5Kc (Thompson, Pena-Yewtukhiw and Grove, 2002)
Putting in the Treatments • [Figure: elevation map of the field (m.a.s.l., roughly 145–159 m) showing the treatment strip locations for Year 2000 and Year 2001]
A Valid Comparison • Avoids systematic bias in layout of treatments • What is systematic bias – layout favors one treatment over another • Go up and down the slope, not along the contour • Avoids confounding • Treatment difference not entirely due to the treatments
Confounding?! • Treatment A in Field 1 and Treatment B in Field 2 • Planting with different planters • Split planting with unequal unit behavior • Point rows resulting in unequal row length • Planting unequal areas at field edges or in compacted, trafficked areas • Unequal pest (weed, insect, disease) control
The Control Treatment • What you are already doing – not adding the product • May seem uninteresting, but it reveals the value of the alternative treatment • Sometimes not needed – e.g., when comparing one product against another • Depends on the “eye of the investigator”
Valid Comparison Limitations • Difference of 20 bu/A! • Same in other parts of the field, in other fields, on other farms, with other varieties, etc.? • “Not knowing” dealt with by replication • Repeating the comparison in other parts of the field, in other fields, on other farms, with other varieties will give you greater confidence in the results
A Valid Comparison • Excellent first step • Each year, hundreds of extension and industry demonstrations based on this approach • Not an “experiment” • Experiments provide info on consistency/variation in the response/difference
Each Comparison: • Gives one yield for each treatment • Gives one yield difference between any two treatments • A = your usual corn N rate; B = usual rate – 40 lb N/A • Difference (A-B) = 8 bu/A; • Do you know enough from this one comparison?
Have Enough Information? • Maybe yes, maybe no. • Consider your on-farm research objective • Product validation? Lots of other data? • Confidence/uncertainty in your data? • Consider the purpose and value of replication • Is 8 bu/A “real”, or just “noise”?
A Single Observation • One yield difference is part of a “population” of differences • Is your 8 bu/A difference near the population average, or out near one edge of the population? • What if the “true mean difference” is only +2 bu/A and single differences range from −8 to +12 bu/A? Then your 8 bu/A difference lies close to one edge of that range (see the simulation sketch below).
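To make the “population of differences” idea concrete, here is a minimal Python sketch, assuming a hypothetical true mean difference of +2 bu/A and about 5 bu/A of strip-to-strip noise (both numbers are made up for illustration). It simulates many single-strip comparisons and shows how often a lone comparison lands at +8 bu/A or more.

```python
import random
import statistics

random.seed(1)

TRUE_MEAN_DIFF = 2.0   # hypothetical true treatment difference, bu/A
STRIP_NOISE_SD = 5.0   # hypothetical strip-to-strip variation, bu/A

# Draw many single-comparison differences from this hypothetical population.
single_diffs = [random.gauss(TRUE_MEAN_DIFF, STRIP_NOISE_SD) for _ in range(10_000)]

print("mean of simulated differences:",
      round(statistics.mean(single_diffs), 1), "bu/A")
print("share of single comparisons at or above +8 bu/A:",
      round(sum(d >= 8 for d in single_diffs) / len(single_diffs), 2))
```

Under those assumed numbers, a noticeable share of single comparisons reach +8 bu/A even though the true difference is only +2 bu/A – one more reason a single unreplicated difference is hard to interpret.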
Replication = More Information • Variation in yield differences • More confidence in the management decision to be made • Need more information on spatial variability in the yield difference.
The Issue of “Inference Space” • Is the comparison valid over the whole field, with several soil map units in it? • Can the difference be expected on other fields/farms that you manage? • Replication in the field – more info on other soils in the field • Replication in other fields – more info on other fields/farms • Where you replicate matters: gain info at large scale, lose info at small scale
Replication = Randomization • You must randomly allocate your treatments within each “valid comparison” (replicate). • Avoid systematic bias • A sad story when you don’t pay attention to the possibility of systematic bias
A Jackson Purchase Strip Trial • Five reps of 2 treatments – absence/presence of product XYZ • Corn planted on the contour • Treatments not randomized, but alternated …
A Jackson Purchase Strip TrialSome Initial Observations • Yields declined as we went downslope – depth to fragipan dropped as we went downslope • XYZ generally looks good • We think we know something! • But, with further reflection:
A Jackson Purchase Strip TrialSome Lessons Learned • Randomization would have improved our confidence in any “results” • Should have planted strips perpendicular to change in depth to fragipan.
So, How Do You Randomize? • For each replicate: • If two treatments, flip a coin • If more than two treatments, draw coded tags from a hat • Can randomize across all reps, but most often done within each rep.
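A minimal sketch of within-replicate randomization, assuming a two-treatment strip trial with four reps (the treatment names and rep count are placeholders). Shuffling the treatment list inside each rep is the software equivalent of flipping a coin or drawing coded tags from a hat, and it extends naturally to more than two treatments.

```python
import random

# Hypothetical treatments and rep count for illustration.
treatments = ["A: usual N rate", "B: usual rate - 40 lb N/A"]
n_reps = 4

for rep in range(1, n_reps + 1):
    order = list(treatments)   # copy so the master list is untouched
    random.shuffle(order)      # independent "coin flip" for this rep
    strips = ", ".join(f"strip {i + 1}: {trt}" for i, trt in enumerate(order))
    print(f"Rep {rep} -> {strips}")
```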
How Many Reps Do You Need? • It depends: • Each comparison takes time and effort • What will you learn for the extra effort? • The “Law of Unintended Consequences” vs. The “Law of Diminishing Returns” • More replication = more information on consistency • More replication = a smaller yield difference is needed before you can say, with a chosen “level of confidence”, that the difference is “true” (statistically significant) – see the sketch below
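To put numbers on the diminishing returns, the sketch below estimates the smallest mean yield difference a paired strip trial could declare “significant” at a 90% confidence level, assuming (hypothetically) that the per-rep yield differences have a standard deviation of 5 bu/A; SciPy supplies the t quantile.

```python
from scipy import stats

DIFF_SD = 5.0  # assumed standard deviation of per-rep yield differences, bu/A

print("reps | smallest mean difference declared significant (90% confidence)")
for reps in range(2, 7):
    df = reps - 1
    t_crit = stats.t.ppf(0.95, df)            # two-sided 10% test -> upper 5% quantile
    detectable = t_crit * DIFF_SD / reps ** 0.5
    print(f"{reps:>4} | {detectable:5.1f} bu/A")
```

With these assumed numbers, moving from 2 to 4 reps shrinks the required difference dramatically, while each rep beyond 4 or 5 buys comparatively little – the “Law of Diminishing Returns” in action.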
Statistical Significance • Statistical significance ≠ practical importance of the observed difference. • Statistics “disciplines” the validity of our conclusions (our claims). • The number of replicates depends on your desired “level of confidence” – the chance you are willing to take of drawing the wrong conclusion from the experiment. • The number of replicates also depends on the expected variation – more variation means more replicates are needed.
The Number of Replicates • Most on-farm experiments have at least 3 reps, and many have 4. • Even then, natural variability means the pros, who typically work at a 90% level of confidence, often need a fairly large yield difference before declaring it “significant”.
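For a concrete, made-up example of how such a call is made, the sketch below runs a paired t-test on hypothetical per-rep yields from a four-rep, two-treatment strip trial; scipy.stats.ttest_rel provides the p-value, and a 90% level of confidence corresponds to declaring the difference significant when the p-value falls below 0.10.

```python
from scipy import stats

# Hypothetical per-rep yields (bu/A) from a 4-rep, 2-treatment strip trial.
yield_usual_n   = [172.0, 165.0, 181.0, 158.0]   # A: usual N rate
yield_reduced_n = [166.0, 161.0, 170.0, 155.0]   # B: usual rate - 40 lb N/A

diffs = [a - b for a, b in zip(yield_usual_n, yield_reduced_n)]
print("per-rep differences (A - B):", diffs, "bu/A")

result = stats.ttest_rel(yield_usual_n, yield_reduced_n)
mean_diff = sum(diffs) / len(diffs)
print(f"mean difference = {mean_diff:.1f} bu/A, p-value = {result.pvalue:.3f}")

# 90% confidence level: call the difference "significant" if p < 0.10.
print("significant at 90% confidence?", result.pvalue < 0.10)
```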
Repeating It Again, And Again, And Again… • “I don’t think it would have worked that way in a better (worse) season”. • “It was not a good test this year because …” • So, should you repeat your on-farm experiment again next year? • Return to your research objective(s). • Consider the purpose and value of repetition
Yield difference due to tillage and time – corn fertilized at 0 lb N/acre/year (1970 to 2006)
Yield difference due to tillage and time – corn fertilized at 150 lb N/acre/year (1970 to 2006)
Tillage for Wheat, but No-Tillage for All the Other Crops in the Rotation • [Figure: wheat and corn yield results]
Do The Same Thing For A Long Time: Then Average It All Together
Continuous corn yield response to fertilizer N in two tillage systems (1992 to 2006 – last 15 years out of sod)
Repetition • Validating current management or product use – one year, every so often, might be enough. • A single year of information is often adequate to validate product claims. • Reasonable conclusion: one year is enough
But … • I don’t like how my experiment “looked”. Could have been weedy, diseased, suffered from poor stands, etc. • The weather wasn’t right. The season was not right for a “fair test”. • Reasonable conclusion: one year is not enough • Need more information on temporal variability in the yield difference.
Repetition = Confounding? • Changes in fields, varieties, weed and pest control can “confound” the comparison of differences across seasons. • A simple interaction between one of the treatments and time can confound your interpretation.