High Throughput Testing--The NRC Vision, The Challenge of Elucidating Real Changes in Biological Systems, and the Reality of Low Throughput Environmental Health Decision-Making
Dale Hattis, George Perkins Marsh Institute, Clark University
Outline • The Older NRC Vision Based on High-Throughput Testing and Safety-Type Risk Analysis • One Goal of This Talk is to Illuminate Alternatives • An Alternative Vision for Toxicology Based on • Quantitative Modeling of Homeostatic Biological Systems, and Problems in Their Early Life Setup and Late Life Breakdown • Interactions of Toxicant Actions and “Normal Background” Chronic Pathological Processes • Variability/Uncertainty Analysis Transforming Current Risk Analyses Used for Standard Setting
The Older NRC Vision Based on High-Throughput Testing Results • Decades-long period for future development. • Ensemble of high-throughput assays to represent a large number (100+) of toxicity pathways. • Well adapted to rapid screening of new chemicals and of old chemicals with no prior testing. • Supports decisions about the concentrations at which agents perturb specific pathways enough to be of concern. • Relates concentrations causing those perturbations to in vivo exposures using physiologically based pharmacokinetic (PBPK) modeling (see the sketch below). • No quantitative assessment of health risks, or of the benefits of exposure reductions, for existing agents.
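To make the reverse-dosimetry bullet concrete, here is a minimal sketch in Python. It uses a one-compartment, steady-state kinetic model as a stand-in for full PBPK modeling; the function name, the clearance value, and the assay concentration are all illustrative assumptions, not values from the NRC report.

```python
# Minimal reverse-dosimetry sketch: translate an in vitro assay concentration
# that perturbs a pathway into an equivalent steady-state oral dose rate.
# A one-compartment, linear-kinetics stand-in for a full PBPK model;
# all parameter values below are illustrative assumptions.

def oral_equivalent_dose(assay_conc_uM, mol_weight_g_per_mol,
                         clearance_L_per_h_per_kg, fraction_absorbed=1.0):
    """Dose rate (mg/kg/day) giving a steady-state plasma concentration
    equal to the in vitro concentration that perturbs the pathway."""
    conc_mg_per_L = assay_conc_uM * 1e-6 * mol_weight_g_per_mol * 1000.0
    # At steady state: dose rate = Css * CL / F
    return conc_mg_per_L * clearance_L_per_h_per_kg * 24.0 / fraction_absorbed

# Example with made-up values: a pathway perturbed at 10 uM in vitro,
# molecular weight 300 g/mol, clearance 0.5 L/h/kg.
print(oral_equivalent_dose(10.0, 300.0, 0.5))  # mg/kg/day
```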
Traditional Toxicological Model Leading To General Expectations of Thresholds in Dose Response Relationships for Toxicants • Biological systems have layers on layers of homeostatic processes (think of a thermostat) • Any perturbation automatically gives rise to offsetting processes that, up to a point, keep the system functioning without long term damage. • After any perturbation that does not lead to serious effects, the system returns completely to the status quo before the perturbation.
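A toy numerical version of the thermostat analogy may help: a controlled variable is pushed away from its set point by a sustained input and pulled back by a corrective response that is proportional to the departure but capped at a maximum capacity. Below that capacity the system settles near the set point (an apparent threshold); above it, the departure keeps growing. All parameter values are illustrative, not drawn from any particular biological system.

```python
# Toy homeostasis model (thermostat analogy): a controlled variable is
# pushed away from its set point by a sustained perturbing input and pulled
# back by a corrective response proportional to the departure, capped at a
# maximum capacity.  Below that capacity the system holds near the set point;
# above it, the departure grows without limit.  Parameters are illustrative.

def steady_departure(input_rate, gain=0.5, max_correction=1.0,
                     steps=5000, dt=0.01):
    x = 0.0                               # departure from the set point
    for _ in range(steps):
        correction = min(max_correction, gain * abs(x))
        dxdt = input_rate - correction    # input pushes out, correction pulls back
        x += dxdt * dt
    return x

for rate in (0.2, 0.6, 1.2):              # perturbing inputs below and above capacity
    print(rate, round(steady_departure(rate), 2))
```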
Caveats--No-adverse-effect thresholds may not exist for: • Tasks that challenge the maximum capacities of the organism to perform (e.g., the 100-yard dash; perhaps learning to read) • Circumstances where some pathological process has already used up all the reserve capacity of the organism to respond to an additional challenge without additional damage (e.g., infarction causes heart muscle cell death that may be marginally worsened by incremental exposure to carbon monoxide)
Other Caveats • The ground state of the system is not a stable equilibrium, but a series of cyclic changes on different time scales (e.g. cell cycle; diurnal, monthly). • Sometimes continuous vs pulsatile patterns of change carry important signaling information in biological systems (e.g. growth hormone signaling for the sex-dependent pattern of P450 expression) • Therefore there must be resonance systems that respond to cyclic fluctuations of the right period. • (Think of the timing needed to push a child on a swing) • Therefore the effective “dose” may need to be modified by the periodicity to model dose response relationships.
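The swing analogy can be made quantitative with the standard driven, damped oscillator: the same forcing amplitude produces a far larger steady-state response when the driving frequency is near the system's natural frequency. This is only a physics-style analogy for why periodicity can change the effective "dose"; the parameters are illustrative.

```python
# Resonance analogy for the swing example: steady-state amplitude of a
# damped oscillator driven at angular frequency w.  The same forcing
# amplitude gives a much larger response near the natural frequency,
# which is why the timing (periodicity) of a pulsatile signal can matter
# as much as its magnitude.  Parameter values are illustrative only.
import math

def steady_state_amplitude(drive_freq, natural_freq=1.0, damping=0.1, force=1.0):
    return force / math.sqrt((natural_freq**2 - drive_freq**2)**2
                             + (2.0 * damping * drive_freq)**2)

for w in (0.2, 0.5, 1.0, 2.0, 5.0):
    print(w, round(steady_state_amplitude(w), 2))
```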
Potential Paradigm Change for Applications to Therapeutics • Historical Paradigm--”The Magic Bullet” • Find a molecule that will kill the nasty bacteria • Find a spot in the brain that, if electrically stimulated, will control Parkinson’s disease symptoms • New Paradigm--Understand and exploit natural resonances to enhance or damp system oscillations • Potential for pulsatile systems for drug release • Potential for sensor-based systems for drug release (e.g., smart pumps that release insulin in response to real time measurements of blood glucose) • Potential for timed or sensor-based electrical stimulation of target tissues (e.g., heart pacemakers)
Key Idea for Transforming Risk Assessment for Traditional Toxic Effects • Quantitatively characterize each uncertainty (including those currently represented by “uncertainty factors”) by reducing it to an observable variability among putatively analogous cases. • Human interindividual variability--kinetic and dynamic • Variation in sensitivity between humans and test species • Adjustment for short- vs. longer periods of dosing and observation • Adjustment for database deficiencies (e.g. missing repro/developmental studies) • This general approach is not without difficulty—need rules for making the analogies (defining the reference groups to derive uncertainty distributions for particular cases). • However it does provide a way forward for health scientists to learn to reason quantitatively from available evidence relevant to specific uncertainties.
The “Straw Man” Quantitative Probabilistic Framework for Traditional “Individual Threshold” Modes of Action • It is ultimately hopeless to try to fairly and accurately represent the compounding effects of multiple sources of uncertainty as a series of point estimate “uncertainty” factors. • Distributional treatments are possible in part by creating reference data sets to represent the prior experience in evaluating each type of uncertainty.
Interpretation of Dose Response Information for Quantal Effects in Terms of a Lognormal Distribution of Individual Threshold Doses
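The lognormal-threshold interpretation named in this slide can be written out directly: if individual log10 threshold doses are normally distributed around the ED50 with spread sigma (the reciprocal of the classical probit slope), the expected fraction responding at a given dose is a cumulative normal in log dose. A minimal sketch, with an illustrative ED50 and sigma:

```python
# Lognormal individual-threshold interpretation of quantal dose response:
# the fraction of a population responding at dose d is the cumulative
# normal of (log10(d) - log10(ED50)) / sigma, where sigma is the log10
# standard deviation of individual thresholds (1 / probit slope).
# The ED50 and sigma values here are illustrative only.
import math
from scipy.stats import norm

def fraction_responding(dose, ed50, log10_sigma):
    z = (math.log10(dose) - math.log10(ed50)) / log10_sigma
    return norm.cdf(z)

ed50, log10_sigma = 10.0, 0.5        # mg/kg-day; log10 spread of thresholds
for d in (0.1, 1.0, 10.0, 100.0):
    print(d, round(fraction_responding(d, ed50, log10_sigma), 4))
```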
Analytical Approach for Putative Individual Threshold-Type Effects • Select Point of Departure (ED50), then define needed distributional adjustments: • LOAEL to ED50 or NOAEL to ED50 • Acute/chronic • Animal to human • Human variability, taking into account the organ/body system affected and the severity of the response • Incompleteness of the data base
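One way to operationalize the adjustments listed above is a simple Monte Carlo in which each adjustment is drawn from a distribution fitted to a reference data set rather than applied as a point estimate. The sketch below follows that idea, but the geometric means and geometric standard deviations are placeholders, not the distributions actually derived in the published Straw Man analysis.

```python
# Monte Carlo sketch of the distributional adjustments listed above:
# start from an animal ED50 point of departure and draw each adjustment
# factor from its own distribution instead of a fixed uncertainty factor.
# The geometric means / geometric standard deviations are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
animal_ed50 = 50.0                          # mg/kg-day, illustrative point of departure

def lognormal(gm, gsd, size):
    """Samples with a given geometric mean and geometric standard deviation."""
    return rng.lognormal(np.log(gm), np.log(gsd), size)

human_ed50 = (animal_ed50
              / lognormal(gm=4.0, gsd=2.0, size=n)     # animal-to-human adjustment
              / lognormal(gm=2.0, gsd=1.8, size=n))    # subchronic-to-chronic adjustment

# Uncertainty distribution of the median human ED50 after the adjustments:
print(np.percentile(human_ed50, [5, 50, 95]).round(2))
```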
Elements of the “Straw Man” Proposal--Tentatively it is suggested that the RfD be the lower (more restrictive) value of: • (A) The daily dose rate that is expected (with 95% confidence) to produce less than 1/100,000 excess incidence over background of a minimally adverse response in a standard general population of mixed ages and genders, or • (B) The daily dose rate that is expected (with 95% confidence) to produce less than a 1/1,000 excess incidence over background of a minimally adverse response in a definable sensitive subpopulation.
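Continuing the Monte Carlo sketch, criterion (A) can be evaluated by combining the uncertainty distribution for the median human ED50 with an assumed lognormal distribution of human interindividual variability, and then taking the 5th percentile of the dose that produces a 1-in-100,000 incidence. Criterion (B) would be handled the same way with the 1-in-1,000 target and a sensitive-subpopulation variability distribution. All distributional parameters below are placeholders.

```python
# Sketch of criterion (A): find the dose at which, with 95% confidence,
# fewer than 1 in 100,000 of a general population exceed their individual
# threshold.  Uses a stand-in for the human_ed50 samples from the previous
# sketch plus an assumed lognormal interindividual variability.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 100_000
human_ed50 = rng.lognormal(np.log(6.0), np.log(3.0), n)   # stand-in for the prior sketch

log10_sigma_human = 0.4          # placeholder log10 spread of interindividual variability
z_1e5 = norm.ppf(1e-5)           # z at the 1e-5 quantile of individual thresholds

# For each Monte Carlo realization, the dose giving a 1-in-100,000 incidence:
dose_1e5 = human_ed50 * 10.0 ** (z_1e5 * log10_sigma_human)

# Criterion (A): the 5th percentile of that dose distribution
# (i.e., 95% confidence that incidence stays below 1e-5).
rfd_candidate_A = np.percentile(dose_1e5, 5)
print(round(rfd_candidate_A, 4))
```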
[Figure: Results of application of the Straw Man analysis to 18 randomly selected RfDs from IRIS. X-axis: overall uncertainty factor (100 to 10,000); Y-axis: factor reduction needed to reach the "Straw Man" criterion of <1E-5 risk with 95% confidence (0.1 to 100); separate series for continuous and quantal endpoints.]
Recent Results from Application of the "Straw Man" Approach to "Value of Information" Testing of the IPCS Data-Derived Uncertainty Factor Formulae • The split of PD/PK variability should be closer to 5:2, rather than the 3.16:3.16 implied by the IPCS proposal (if one wishes the product to multiply out to the traditional 10-fold uncertainty factor assigned for interindividual variability). • Approximately equal protectiveness would be achieved by substituting the following values for the interindividual variability factor of 10, for RfDs with the following characteristics:
                        Quantal Endpoint    Continuous Endpoint
  Overall UF = 100            17                    43
  Overall UF = 1000           7.4                   19
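The kind of substitution shown in the table can be sketched as follows: given a lognormal description of human interindividual variability, the single-point factor that protects a chosen fraction of the population is 10 raised to (the z-score of that fraction times the log10 spread). The spread used below is an illustrative assumption, not the fitted value behind the numbers above.

```python
# Sketch of deriving a single-point interindividual variability factor that
# covers a chosen fraction of the population, given a lognormal description
# of human variability.  The log10 spread is an illustrative assumption.
from scipy.stats import norm

def equivalent_factor(covered_fraction, log10_sigma_human):
    """Factor below the median-person dose needed so that only
    (1 - covered_fraction) of the population would still respond."""
    z = norm.ppf(covered_fraction)
    return 10.0 ** (z * log10_sigma_human)

for frac in (0.99, 0.999, 0.99999):
    print(frac, round(equivalent_factor(frac, 0.4), 1))
```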
More Details of Our Analysis are Available in: • Hattis, D., Baird, S., and Goble, R. "A Straw Man Proposal for a Quantitative Definition of the RfD," Drug and Chemical Toxicology, 25: 403-436, 2002. • Hattis, D. and Lynch, M. K. "Empirically Observed Distributions of Pharmacokinetic and Pharmacodynamic Variability in Humans--Implications for the Derivation of Single Point Component Uncertainty Factors Providing Equivalent Protection as Existing RfDs." In Toxicokinetics in Risk Assessment, J. C. Lipscomb and E. V. Ohanian, eds., Informa Healthcare USA, Inc., 2007, pp. 69-93. • Detailed Databases and Distributional Analysis Spreadsheets: http://www2.clarku.edu/faculty/dhattis
Implications for Information Inputs to Risk Management Decision-Making • Increasingly, cases such as airborne particles, ozone, and lead are forcing the recognition that even for non-cancer effects, some finite rates of adverse effects will remain after implementation of reasonably feasible control measures. • Societal reverence for life and health means "doing the very best we can" with available resources to reduce these effects. • This means that responsible social decision-making requires estimates of how many people are likely to incur how much risk (for effects of specific degrees of severity), with what degree of confidence--in cases where highly resource-intensive protective measures are among the policy options. • The traditional multiple-single-point uncertainty factor system cannot yield estimates of health protection benefits that can be juxtaposed with the costs of health protection measures.
Alternative Vision #2--Multiple Directions for Improvement for Toxicology and Risk Assessment • New Pharmacodynamic Taxonomies and Approaches to Quantification • Taxonomy based on what the agent is doing to the organism • Taxonomy based on what the organism is trying to accomplish and how agents can help screw it up • Quantitative Probabilistic Framework for Traditional "Individual Threshold" Modes of Action
Taxonomy Based on What Organisms Need to Accomplish to Develop and Maintain Functioning, and What Can Go Wrong • Establishment and Maintenance of Homeostatic Systems at Different Scales of Distance, Time, Biological Functions, Involving • Sensors of Key Parameters to Be Controlled • Criteria (E.g. “Set Points”) for Evaluating Desirability of Current State • Effector Systems That Act to Restore Desirable State With Graded Responses to Departures Detected by the Sensors • Some Examples of Perturbations: • Hormonal “Imprinting” by Early-Life Exposure to Hormone Agonists • The “Tax” Theory of General Toxicant Influences on Fetal Growth, and Possible Consequences
Key Challenges for Biology in the 21st Century • How exactly are the set points set? • How does the system determine how, and how vigorously, to respond to various degrees of departure from specific set points? • Could all this possibly be directly coded in the genome? • Or, more interestingly, does the genome somehow bring about a learning procedure whereby, during development, the system "learns" what values of set points and modes/degrees of response work best, using some internal scoring system? • How exactly are the set points, etc., adjusted to meet the challenges of different circumstances ("allostasis" states--see Schulkin 2003, "Rethinking Homeostasis")?
Taxonomy Built from the Fundamental Ways that Chemicals Act to Perturb Biological Systems • For preclinical stages or subclinical levels of effect, is the basic action reversible or irreversible, given a subsequent period of no exposure? • Reversible (enzyme inhibition; receptor activation or inactivation)--traditional acute or chronic toxicity, analyzed in the classical toxicology/homeostatic-system-overwhelming framework (individual thresholds for response) • Irreversible (DNA mutation; destruction of alveolar septa; destruction of most neurons)
Subcategories for Nontraditional (Based on Irreversible Changes) Modes of Action • How many irreversible steps are needed to produce clinically recognizable disease? • (few--up to a dozen or so) molecular biological diseases--mutations, cancer via mutagenic mechanisms • (many--generally thousands or more) chronic cumulative diseases (emphysema and other chronic lung diseases caused by cumulative loss of lung structures or scarring; Parkinson's and other chronic neurological diseases produced by cumulative losses of neurons)
Special Features of Chronic Cumulative Disease Processes • Clinical consequences depend on the number of irreversible steps that have occurred in different people (often little detectable change until a large number of steps have occurred). • Effects occur as shifts in population distributions of function. • Thresholds for the causation of individual damage steps must be low enough that the disease progresses with time in the absence of exposures that cause acute symptoms. • Different kinds of biomarkers needed for: • Accumulated amount of damage/dysfunction (e.g., FEV1) • Today's addition to the cumulative store of damage (most powerful for epidemiology based on associations with short-term measurements of exposure), e.g., excretion of breakdown products of lung structural proteins, or blood or urine levels of tissue-specific proteins usually found only inside specific cell types, such as heart-specific creatine kinase for measurement of heart cell loss due to infarctions
Toward Risk Assessment Models for Chronic Cumulative Pathological Processes • Describe the fundamental mechanism(s) that causes the accumulation of the individual damage events (especially the quantitative significance of various contributory factors). Key aid--biomarkers of the daily progress of damage (e.g. key enzyme released from a dying neuron of the specific type involved in the disease) • Quantitatively elucidate the ways in which specific environmental agents enhance the production of or prevent the repair of individual damage events • Describe the relationships between the numbers, types, and physical distribution of individual damage events and the loss of biological function or clinical illness. Key aid--biomarkers for the accumulation of past damage, e.g. FEV1.
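A minimal stochastic sketch of this kind of model: irreversible damage events accumulate over a lifetime from a background rate plus an exposure-dependent increment, and clinical disease is scored when cumulative damage exceeds an individual's (lognormally distributed) functional reserve. Every parameter below is an illustrative assumption, not a fitted value for any real disease.

```python
# Minimal sketch of a chronic cumulative damage model: irreversible damage
# events accumulate over a lifetime from a background rate plus an
# exposure-dependent increment; clinical disease is scored when cumulative
# damage exceeds an individual's functional reserve.  All parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def lifetime_disease_fraction(exposure, n_people=50_000, years=80,
                              background_rate=100.0, potency=10.0,
                              median_reserve=12_000.0, reserve_gsd=1.3):
    """Fraction of a simulated population whose cumulative damage events
    exceed their (lognormally distributed) functional reserve by old age."""
    yearly_rate = background_rate + potency * exposure       # damage events / year
    damage = rng.poisson(yearly_rate * years, size=n_people)
    reserve = rng.lognormal(np.log(median_reserve), np.log(reserve_gsd), n_people)
    return np.mean(damage > reserve)

for dose in (0.0, 1.0, 5.0, 20.0):
    print(dose, round(lifetime_disease_fraction(dose), 3))
```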
Motivation to Move On • Younger generation of analysts will ultimately not tolerate older procedures that fail to provide a coherent way to use distributional information that is clearly relevant to the factual and policy issues. • Younger generation of analysts will have greater mathematical and computational facility, particularly as biology becomes quantitative “systems biology” with quantitative feedback modeling--increasingly an applied engineering-like discipline. • Legal processes will ultimately demand use of the “best science” as this becomes recognized in the technical community. • Newer information/communication tools will foster increasing habits and demands for democratic accountability; experts worldwide will increasingly be required to expose the bases of their policy recommendations—leaving less room for behind-the-scenes exercise of “old boy” safety factor judgments.
Contributions of High-Throughput Testing in Different Decision Contexts • Preliminary evaluation of large numbers of new, and old but untested, environmental agents--good promise to be helpful. • Evaluation of contaminated sites (e.g., Superfund)--support for decision-making based on complex mixtures is highly dubious. • Assistance to epidemiological research to locate contributors to human disease (e.g., asthma)--also doubtful. • Assessment of relative risks of different industrial processes--fanciful, because of the need to guess the relative weights to be assigned to numerous dissimilar findings on short-term tests without established quantitative connections to adverse health effects. • High-profile choices of degrees of control/exposure reduction to be mandated for specific agents ("slow throughput" decision-making)--likely to muddy the waters by raising difficult mode-of-action questions that can only be resolved by expensive and slow in vivo testing (e.g., using "knockout" mice). This is in fact the most likely near-term contribution.