Decision-Theoretic Views on Switching Between Superiority and Non-Inferiority Testing. Peter Westfall Director, Center for Advanced Analytics and Business Intelligence Texas Tech University
Background • MCP 2002 Conference in Bethesda, MD, August 2002; J. Biopharm. Stat. special issue, to appear 2003 • Articles: • Ng, T.-H., “Issues of simultaneous tests for non-inferiority and superiority” • Comment by G. Pennello • Comment by W. Maurer • Rejoinder by T.-H. Ng
Ng’s Arguments • No problem with control of Type I errors when switching from N.I. to Sup. tests • However, it seems “sloppy”: • loss of power in replication when there are two options • it will allow “too many” drugs to be called “superior” that are not really superior
Westfall interjects for the next few slides • Why does switching allow control of Type I errors? Three views: • Closed Testing • Partitioning Principle • Confidence Intervals
Closed Testing Method(s) • Form the closure of the family by including all intersection hypotheses. • Test every member of the closed family by a (suitable) α-level test. (Here, α refers to the comparison-wise error rate.) • A hypothesis can be rejected provided that • its corresponding test is significant at level α, and • every other hypothesis in the family that implies it is rejected by its α-level test. • Note: Closed testing is more powerful than (e.g.) Bonferroni.
Control of FWE with Closed Tests Suppose H0j1, ..., H0jm are all true (unknown to you which ones). You can reject one or more of these only when you reject the intersection H0j1 ∩ ... ∩ H0jm. Thus, P(reject at least one of H0j1, ..., H0jm | H0j1, ..., H0jm all true) ≤ P(reject H0j1 ∩ ... ∩ H0jm | H0j1, ..., H0jm all true) = α.
Closed Testing – Multiple Endpoints H0: δ1=δ2=δ3=δ4=0 H0: δ1=δ2=δ3=0 H0: δ1=δ2=δ4=0 H0: δ1=δ3=δ4=0 H0: δ2=δ3=δ4=0 H0: δ1=δ2=0 H0: δ1=δ3=0 H0: δ1=δ4=0 H0: δ2=δ3=0 H0: δ2=δ4=0 H0: δ3=δ4=0 H0: δ1=0, p = 0.0121 H0: δ2=0, p = 0.0142 H0: δ3=0, p = 0.1986 H0: δ4=0, p = 0.0191, where δj = mean difference (treatment - control) for endpoint j.
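As a sketch (not part of the original slides), the closure above can be run mechanically: with Bonferroni tests for the intersection hypotheses, endpoint j is rejected only if every subset containing j satisfies min p ≤ α/|subset|. This closure reduces to Holm's step-down procedure. Using the four endpoint p-values from this slide:

```python
from itertools import combinations

def closed_test(pvals, alpha=0.05):
    """Closed testing with Bonferroni intersection tests: the
    intersection hypothesis over a subset S is rejected when
    min(p_j for j in S) <= alpha / |S|.  Endpoint j is rejected
    only if every intersection containing j is rejected."""
    m = len(pvals)
    rejected = []
    for j in range(m):
        ok = all(
            min(pvals[k] for k in S) <= alpha / len(S)
            for r in range(1, m + 1)
            for S in combinations(range(m), r)
            if j in S
        )
        if ok:
            rejected.append(j)
    return rejected

# Endpoint p-values from the slide (0-indexed): endpoints 1, 2, 4 are rejected
print(closed_test([0.0121, 0.0142, 0.1986, 0.0191]))  # -> [0, 1, 3]
```

Endpoint 3 (p = 0.1986) survives because its own test already fails at level 0.05; the other three are rejected despite the multiplicity, illustrating the power gain over plain Bonferroni.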
Closed Testing – Superiority and Non-Inferiority H0: δ ≤ -δ0 (Null: Inf.; Alt: Non-Inf.) H0: δ ≤ 0 (Null: not Sup.; Alt: Sup.) Intersection of the two nulls: H0: δ ≤ -δ0. Note: the intersection of the non-inferiority null and the superiority null equals the non-inferiority null.
Why there is no penalty from the closed testing standpoint • Reject H0: δ ≤ -δ0 only if its own α-level test rejects (it is itself the intersection null, so there is no additional penalty). • Reject H0: δ ≤ 0 only if its own α-level test rejects and the intersection H0: δ ≤ -δ0 is also rejected; the latter is automatic, since the non-inferiority test rejects whenever the superiority test does (no additional penalty). So both can be tested at 0.05; the sequence is irrelevant.
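A minimal sketch of the switching rule, assuming a standardized statistic Z with unit standard error and the margin δ0 expressed on the z scale (the function name and defaults are illustrative, not from the slides):

```python
from statistics import NormalDist

def switch_test(z, delta0_z=3.24, alpha=0.025):
    """Test both H0: delta <= -delta0 (non-inferiority alternative)
    and H0: delta <= 0 (superiority alternative) at the same
    one-sided alpha; z is the standardized estimate of delta and
    delta0_z is the non-inferiority margin on the z scale."""
    crit = NormalDist().inv_cdf(1 - alpha)   # 1.96 for alpha = 0.025
    non_inferior = z > crit - delta0_z       # same z test, shifted by the margin
    superior = z > crit                      # superiority implies non-inferiority
    return non_inferior, superior

print(switch_test(2.5))   # (True, True): superior, hence also non-inferior
print(switch_test(1.0))   # (True, False): non-inferior only
print(switch_test(-2.0))  # (False, False)
```

Note that `superior` can never be True while `non_inferior` is False, which is exactly why the closed-testing condition imposes no extra penalty.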
Why there is no need for multiplicity adjustment: The Partitioning View • Partitioning principle: • Partition the parameter space into disjoint subsets of interest. • Test each subset using an α-level test. • Since the parameter may lie in only one subset, no multiplicity adjustment is needed. • Benefits: • Can (rarely) be more powerful than closure • Confidence set equivalence (invert the tests)
Partitioning Null Sets • H01: δ ≤ -δ0 • H02: -δ0 < δ ≤ 0 You may test both without multiplicity adjustment, since only one can be true. The LFC (least favorable configuration) for H01 is δ = -δ0; the LFC for H02 is δ = 0. Exactly equivalent to closed testing.
Confidence Interval Viewpoint • Construct a 1-α lower confidence bound on δ; call it δL. • If δL > 0, conclude superiority. If δL > -δ0, conclude non-inferiority. The testing and interval approaches are essentially equivalent, with possible minor differences where tests and intervals do not coincide (e.g., binomial tests).
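The equivalence can be sketched in a few lines, again assuming a normal estimate with known standard error on the z scale: one lower bound δL yields both conclusions at once (names and defaults are illustrative):

```python
from statistics import NormalDist

def ci_decisions(z, delta0=3.24, alpha=0.025, se=1.0):
    """One-sided 1 - alpha lower confidence bound for delta
    (normal estimate, known SE); both conclusions are read off
    the single bound delta_L."""
    d_lower = z * se - NormalDist().inv_cdf(1 - alpha) * se
    return {"superior": d_lower > 0, "non_inferior": d_lower > -delta0}

print(ci_decisions(2.5))  # delta_L = 0.54: both conclusions hold
print(ci_decisions(1.0))  # delta_L = -0.96: non-inferior only
```

For any z these decisions coincide with the two separate z tests of the previous slides, which is the equivalence the slide asserts.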
Back to Ng: Ng’s Loss Function Approach • Ng does not disagree with the Type I error control; however, he is concerned from a decision-theoretic standpoint. • So he compares the “loss” when allowing testing of: • only one, pre-defined hypothesis • both hypotheses
Ng’s Example • Situation 1: The company tests only one hypothesis, based on its preliminary assessment. • Situation 2: The company tests both hypotheses, regardless of the preliminary assessment.
Further Development of Ng • Of the “next 2000” products: • 1000 are truly as efficacious as the A.C. (active control) • 1000 are truly superior to the A.C. • Suppose further that the company either • makes perfect preliminary assessments, or • makes correct assessments 80% of the time.
No Classification; Both Tests Performed Ng’s concern: “too many” Type I errors.
Westfall’s generalization of Ng • Three-decision problem: • Superiority • Non-Inferiority • NS (“Inferiority”) • Usual “test both” strategy: • Claim Sup if 1.96 < Z • Claim NonInf if 1.96 - δ0 < Z < 1.96 • Claim NS if Z < 1.96 - δ0
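The “test both” strategy above is equivalent to a three-way classifier on Z; a sketch (function name illustrative, cutoffs from the slide):

```python
def three_decision(z, delta0=3.24, crit=1.96):
    """The usual 'test both' strategy as a three-way classification
    of the standardized statistic z, with margin delta0 on the z scale."""
    if z > crit:                 # 1.96 < Z
        return "Sup"
    if z > crit - delta0:        # 1.96 - delta0 < Z < 1.96
        return "NonInf"
    return "NS"                  # Z < 1.96 - delta0

print([three_decision(z) for z in (-2.0, 0.5, 2.5)])  # ['NS', 'NonInf', 'Sup']
```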
Further Development • Assume δ0 = 3.24 (⇒ 90% power to detect non-inferiority at δ = 0). • True states of nature: • Inferiority: δ < -3.24 • Non-Inf: -3.24 < δ < 0 • Sup: 0 < δ
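A quick check of the 90% power claim, assuming the one-sided 1.96 critical value from the previous slide and δ measured on the z scale: the non-inferiority test rejects when Z > 1.96 - δ0, so its power at δ = 0 is Φ(δ0 - 1.96).

```python
from statistics import NormalDist

# Power of the non-inferiority test at delta = 0:
# P(Z > 1.96 - delta0 | delta = 0) = Phi(delta0 - 1.96)
delta0 = 3.24
power = NormalDist().cdf(delta0 - 1.96)
print(f"{power:.3f}")  # 0.900
```

So δ0 = 3.24 is simply 1.96 + 1.28, the sum of the critical value and the 90th-percentile z.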
Loss Function (loss matrix: rows = true state of nature, columns = claim; entries not shown)
Westfall’s Extension • Compare: • Ng’s recommendation to “preclassify” drugs as Non-Inf or Sup, and • the “test both” recommendation. • Use % increase over the minimum loss as the criterion. • The comparison will depend on the prior and the loss!
Probability of Selecting the NonInf Test Probit function; the anchors are P(NonInf | δ=0) = ps and P(NonInf | δ=3.24) = 1 - ps. Ng suggests ps = 0.80.
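The two anchors pin down the probit curve Φ(a + bδ): Φ(a) = ps and Φ(a + 3.24b) = 1 - ps. Solving them (a sketch, standard library only) recovers the coefficients used in the baseline model:

```python
from statistics import NormalDist

# Solve Phi(a) = ps and Phi(a + b*delta0) = 1 - ps for (a, b)
ps, delta0 = 0.80, 3.24
a = NormalDist().inv_cdf(ps)
b = (NormalDist().inv_cdf(1 - ps) - a) / delta0
print(f"a = {a:.2f}, b = {b:.2f}")  # a = 0.84, b = -0.52
```

This matches the Φ(0.84 - 0.52δ) selection curve of the baseline model.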
Summary of Priors and Losses • δ ~ π·I(δ=0) + (1-π)·N(δ; μδ, σ²) (3 parms) • P(Select NonInf | δ) = Φ(a + bδ), where a, b are determined by ps (1 parm; only for Ng) • Loss matrix (5 parms) • Total: 3 or 4 prior parameters and 5 loss parameters. Not too bad!
Baseline Model • δ ~ (0.2)·I(δ=0) + (0.8)·N(δ; 1, 4²) • P(Select NonInf | δ) = Φ(0.84 - 0.52δ) (ps = 0.8) • Loss matrix: an attempt to quantify loss to the patient population (rows = true state of nature, columns = claim; entries not shown)
Consequence of Baseline Model • Optimal decisions (standard decision theory; see, e.g., Berger’s book): • Classify to NS when z < -1.47 • Classify to NonInf when -1.47 < z < 2.20 • Classify to Sup when 2.20 < z • Ordinary rule: cutpoints are -1.28 and 1.96.
Loss Matrix – Select and test only the NonInf hypothesis (rows = true state of nature, columns = outcome; entries not shown)
Loss Matrix – Select and test only the Sup hypothesis (rows = true state of nature, columns = outcome; entries not shown)
Changing the Loss Function Multiply the lower-left entry of the loss matrix by c, c > 0 (rows = true state of nature, columns = claim; entries not shown)
Conclusions • The simultaneous testing procedure is generally more efficient (less loss) than Ng’s method, except: • when Type II errors are not costly, or • when a large % of products are equivalent. • A sidelight: the optimal rule itself is worth considering: • its thresholds for Non-Inf are more liberal, which allows a more stringent definition of the non-inferiority margin.