What Have We Learned from 30 Years of Global Environmental Assessments? Bill Clark, Harvard University AMS Summer Policy Colloquium June 2004
The Problem… • > 700 international environmental treaties • Most require periodic science assessments... • Through complex processes engaging thousands • What should we learn from this experience? • Why do most have little influence, while some have more? • Carnegie Commission, OECD MegaScience • Global Environmental Assessment Project
Global Environmental Assessment Project http://environment.harvard.edu/gea • multi-year research and training program • international team of faculty, fellows • workshops for scholars, practitioners • research papers (>40), books (3 in press) • Global climate change and ENSO variability • Stratospheric ozone depletion • Transboundary tropospheric air pollution • Biological, chemical hazards, others
Finding: What’s an ‘Assessment’? • Usually a discrete product (e.g., a report, though some of the most effective are report-less models or scenarios) • Intimately bound to the social process that produces it • Goal of linking knowledge and action … in public policy/decision contexts... • and doing so within an institutional framework of rules, norms, expectations (e.g., FCCC, LRTAP) • Effective assessments are political power, and thus born suspect in the eyes of those they might influence.
A Conceptual Framework for thinking about Effective Assessments • Ultimate determinants: historical context, user characteristics, assessment characteristics • Proximate pathways: saliency, credibility, legitimacy • Outcome: effectiveness
What do Assessments Influence?
What do assessments influence? • Environmental pressures, states, impacts • IIASA RAINS for LRTAP SOx-II • Actors’ agendas, strategies or decisions • Ozone Trends Panel (DuPont) • R&D priorities, standards for monitoring • IPCC Special Report on Forest Sinks • Above all: Issue framing, terms of debate • WMO/UNEP Villach ’86 Climate assessment
What properties of assessments make them more or less influential?
What properties of assessments make them more influential? • Credibility (Is it technically believable?) • of technical arguments to relevant communities • US CIAP-Impacts vs. WMO ‘Blue Books’ • Saliency (Is it relevant to decision making?) • to changing needs of specific users, producers • US NAPAP vs. European RAINS • Legitimacy (Is it politically fair and respectful?) • or fairness of the process to stakeholders. • WRI GWP vs. German Enquete I
S, C, L… What’s going on? • Tight tradeoffs among S, C, L, due to the potential power of findings for stakeholders • Consequences of maximizing any one… • Stakeholders treat assessments as “games” that they (strategically) choose… • to shun (if they think they can only lose), or • to play (for gain, while maintaining exit options), or • to bind themselves to (as a good gamble…). • Challenge: get multiple parties to play & stay
On what do S, C, L most depend?
On what do salience, credibility, legitimacy most depend? • Context of the assessment (cf. Kingdon) • issue characteristics, • linkage to other policy issues, • issue attention cycles …
[Figure] Issue Attention to Global Environmental Risks (SLG, 2001)
On what do saliency, credibility, legitimacy most depend (cont.)? • Context of the assessment • issue characteristics, linkage, attention cycles • Characteristics of user, target audiences • concern, openness, capacity • Characteristics of the assessment process • focus for the rest of this talk…
Characteristics of the Assessment Process that determine S,C,L • Participation (co-production); • Treatment of scope (limit liability; nested scale); • Treatment of uncertainty (embracing outliers) • Institutionalization (boundary construction; embeddedness); • Provision for iteration, evaluation, learning
How do participation decisions influence effectiveness? • Dilemma: legitimacy vs. saliency, credibility • identify, attract, retain relevant participants • “great expectations” vs. great numbers • Finding: the importance of co-production • differentiate roles in the process (e.g., scoping vs. fact-finding vs. policy advice) • match expectations to institutional capacity • value of national re-assessments (IPCC/NAS)
How does the treatment of scope influence effectiveness? • Scope defined…(integrated or partial) • Dilemma: saliency vs. credibility • Finding: virtues of strategic dis-integration • integrated assessments suffer from bounded rationality, vulnerability to deconstruction; • Cause/effect / impacts / policy options • Ozone vs IPCC (Trends Panel + TAPs) • Multi-scale assessments via loose coupling • US National Assessment of Climate Impacts
How does the treatment of uncertainty and dissent influence the effectiveness of assessments? • Dilemma: value vs credibility / legitimacy • Finding: need to embrace inconclusiveness • insight oriented vs decision oriented assessment • strategies for treating thresholds, outliers • strategies for using, not hiding dissent
How does the institutionalization of assessment influence effectiveness? • Dilemma: salience vs. credibility • enhance communication between science and policy • protect scientists, policy makers from contagion • Finding: importance of boundary orgs • knowledge/action is not a static gulf to be bridged, but a dynamic boundary to be negotiated • importance of “boundary” organizations (NAS) • dual accountability… vulnerability (NOAA) • “embeddedness” (EPA)
Provision for iteration, evaluation, and social learning • There exists a huge variety of experiments in how to do good assessments…. • But the target is moving (changing political context, issue framing, knowledge) … • … and the institutional frameworks tend to be “sticky,” locked into early forms (IPCC); • We don’t learn because it’s hard to, but also because we don’t try (a few exceptions…).
Practical implications…. • Adjust the design of scientific assessments depending on case and context … • One size does not fit all (attn. IPCC, MEA…) • Virtues of multiple nested assessments, dis-integration • Reconceptualize assessment as a process of co-production through which interactions of experts and users define, shape, and validate a shared body of usable knowledge (science and politics both central) • Build, nurture, and protect boundary organizations with dual accountability and independence • Create forums for learning from the assessment experience of other producers, users, and issues.
For further information on… • Global Environmental Assessment Project • http://environment.harvard.edu/gea • The broader issue of the role of science in managing global environmental risks • http://www.ksg.harvard.edu/sl • An ongoing effort to better harness science and technology for sustainable development • http://sustainabilityscience.org