Statistical Tests of Returns to Scale Using DEA
Rajiv D. Banker, Hsihui Chang, and Shih-Chi Chang
Introduction
• Simar and Wilson (2002) conduct simulations to evaluate non-parametric tests of returns to scale in DEA:
  • Bootstrap-based tests
  • Binomial tests
  • DEA-based tests
• They claim that when the production technology exhibits CRS:
  • their own binomial tests perform quite poorly in almost every instance
  • DEA-based tests perform poorly
  • Bootstrap-based tests consistently perform best compared to the other tests
Objectives of Our Study
• Use experimentally designed simulations to evaluate the relative performance of Bootstrap-based and DEA-based test statistics for returns to scale, based on the occurrence of both Type I and Type II errors
• A Type I error occurs when one rejects the null hypothesis when it is true; a Type II error occurs when one fails to reject the null hypothesis when the alternative hypothesis is true
Preview of Simulation Results
• DEA-based tests perform much better than Bootstrap-based tests in terms of the occurrence of Type I errors and are comparable for Type II errors
• Simulations reveal that Simar and Wilson (2002) (1) misreport the performance of DEA-based tests and (2) inflate the performance of their Bootstrap-based tests
• The performance of Bootstrap-based tests is very sensitive to the different decision rules Simar and Wilson employ to evaluate the null hypothesis, contrary to their claim that the rules are equivalent
• DEA-based tests have the advantage of using only a small fraction of the CPU time required by Bootstrap-based tests
Additive Test Statistics
Simar and Wilson (2002) use this statistic for comparison.
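The formula on this slide did not survive conversion. As a hedged reconstruction following Banker (1996), with θ̂ᶜⱼ and θ̂ᵛⱼ denoting the output-oriented CCR and BCC inefficiency scores (both ≥ 1), the additive statistics are typically written as

$$
T^{\mathrm{add}}_{\exp} \;=\; \frac{\sum_{j=1}^{N}\bigl(\hat\theta^{C}_{j}-1\bigr)}{\sum_{j=1}^{N}\bigl(\hat\theta^{V}_{j}-1\bigr)} \;\sim\; F_{2N,\,2N},
\qquad
T^{\mathrm{add}}_{\mathrm{hn}} \;=\; \frac{\sum_{j=1}^{N}\bigl(\hat\theta^{C}_{j}-1\bigr)^{2}}{\sum_{j=1}^{N}\bigl(\hat\theta^{V}_{j}-1\bigr)^{2}} \;\sim\; F_{N,\,N},
$$

under the null of CRS, for exponentially and half-normally distributed inefficiency respectively; the exact forms used in the paper may differ.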
Multiplicative Test Statistics
This statistic is suggested when the efficiency is multiplicative, as in y = θ·f(x).
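Again the slide's formula is missing. A plausible reconstruction (my assumption, based on Banker (1996)) replaces the additive terms with logarithms of the inefficiency scores:

$$
T^{\mathrm{mult}}_{\exp} \;=\; \frac{\sum_{j=1}^{N}\ln\hat\theta^{C}_{j}}{\sum_{j=1}^{N}\ln\hat\theta^{V}_{j}} \;\sim\; F_{2N,\,2N},
\qquad
T^{\mathrm{mult}}_{\mathrm{hn}} \;=\; \frac{\sum_{j=1}^{N}\bigl(\ln\hat\theta^{C}_{j}\bigr)^{2}}{\sum_{j=1}^{N}\bigl(\ln\hat\theta^{V}_{j}\bigr)^{2}} \;\sim\; F_{N,\,N}.
$$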
Bootstrap-based Test Statistics
Observe that these statistics are very similar to the multiplicative DEA-based test statistics, except that their critical values are obtained by bootstrapping.
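The slide's formulas are missing here as well. The statistics that Simar and Wilson (2002) bootstrap are ratios of CRS to VRS distance-function estimates, commonly written (my reconstruction) as

$$
T_{1} \;=\; \frac{\sum_{j=1}^{N}\hat D^{\mathrm{CRS}}_{j}}{\sum_{j=1}^{N}\hat D^{\mathrm{VRS}}_{j}},
\qquad
T_{2} \;=\; \frac{1}{N}\sum_{j=1}^{N}\frac{\hat D^{\mathrm{CRS}}_{j}}{\hat D^{\mathrm{VRS}}_{j}},
$$

with critical values obtained from the bootstrap rather than from an assumed sampling distribution.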
Experimental Design
• Design elements:
  • production technology
  • sample size
  • range of input values
  • efficiency distribution
• The composed production function mixes a shifted Cobb-Douglas production function that exhibits variable returns to scale with a linear production function that characterizes constant returns to scale
Production Technology
The parameter of the composed production function takes the values 0, 0.25, 0.50, 0.75, and 1.
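The production-function formula was also lost in conversion. One plausible parameterisation (an assumption, not necessarily the paper's exact specification) uses the values above as a mixing weight δ between the two technologies named on the previous slide:

$$
g(x_{j}) \;=\; \delta\,\beta x_{j} \;+\; (1-\delta)\,\gamma\,(x_{j}-x_{0})^{\alpha},
\qquad \delta \in \{0,\,0.25,\,0.50,\,0.75,\,1\},
$$

where βx is the linear (CRS) part and γ(x − x₀)^α, with 0 < α < 1 and shift x₀ > 0, is the shifted Cobb-Douglas (VRS) part; under this form δ = 1 yields a pure CRS technology.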
Sample Size
n = 40 and n = 60

Input Range
Three different ranges for input values, generated uniformly over the intervals:
• [5, 15] (both increasing and decreasing RTS)
• [5, 10] (only increasing RTS)
• [10, 15] (only decreasing RTS)
Efficiency Distribution
v is generated for each observation j ∈ {1, …, N} from a standard normal distribution N(0, 1).
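The slide does not show how vⱼ enters the data-generating process. One common construction (an assumption here, not taken from the paper) turns its absolute value into a half-normal inefficiency term applied to the frontier output:

$$
\theta_{j} = 1 + |v_{j}|, \qquad v_{j}\sim N(0,1), \qquad y_{j} = g(x_{j})/\theta_{j},
$$

so that θⱼ ≥ 1 is the true output-oriented inefficiency that the CCR and BCC estimators attempt to recover.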
Simulated Observations
We use the Frontier Efficiency Analysis with R (FEAR) package (Wilson, 2008) to compute output-oriented DEA inefficiency scores for both the CCR and BCC models.
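The scores themselves are computed with FEAR in R; purely as an illustration of what that computation involves, the following Python sketch (assuming numpy and scipy are available; the function name dea_output_score is hypothetical, not part of FEAR) solves the output-oriented CCR and BCC linear programs for a single unit:

```python
import numpy as np
from scipy.optimize import linprog


def dea_output_score(X, Y, o, vrs=False):
    """Output-oriented DEA inefficiency score (phi >= 1) for unit o.

    X: (n_units, n_inputs) array of inputs; Y: (n_units, n_outputs) array of
    outputs. vrs=False solves the CCR (CRS) model; vrs=True solves the BCC
    (VRS) model by adding the convexity constraint on the lambdas.
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [phi, lambda_1, ..., lambda_n]; maximise phi.
    c = np.zeros(n + 1)
    c[0] = -1.0                                  # linprog minimises, so use -phi
    A_ub, b_ub = [], []
    for i in range(m):                           # sum_j lambda_j * x_ji <= x_oi
        A_ub.append(np.concatenate(([0.0], X[:, i])))
        b_ub.append(X[o, i])
    for r in range(s):                           # phi*y_or - sum_j lambda_j*y_jr <= 0
        A_ub.append(np.concatenate(([Y[o, r]], -Y[:, r])))
        b_ub.append(0.0)
    A_eq = b_eq = None
    if vrs:                                      # convexity constraint for BCC
        A_eq = [np.concatenate(([0.0], np.ones(n)))]
        b_eq = [1.0]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0]                              # CCR score if vrs=False, else BCC
```

Looping over o = 0, …, n − 1 with vrs=False and vrs=True yields the CCR and BCC inefficiency vectors used by all of the test statistics above.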
Bootstrap-based Test Procedures
(1) Generate a random sample of size N from the DEA inefficiency index estimates
(2) Compute adjusted output values
(3) Re-estimate the CCR and BCC models using the adjusted outputs and the original inputs to obtain the bootstrap DEA inefficiency estimates
(4) Compute the test statistics T1 and T2
Bootstrap-based Test Procedures (continued)
(5) Repeat steps (1)-(4) B = 2,000 times to provide a set of estimates Tib, i = 1, 2 and b = 1, 2, …, B
(6) Construct the empirical distributions of the bootstrap estimates Tib, i = 1, 2
(7) Use the empirical distributions of Tib to compute the bias of the bootstrap estimates and construct the bias-corrected distributions of Tib
(8) Select the nominal size (i.e., α = 0.01, 0.02, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3)
Steps (1)-(5) are sketched in code below.
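A compact Python sketch of the loop in steps (1)-(5), reusing the hypothetical dea_output_score helper from the earlier sketch. Simar and Wilson (2002) actually use a smoothed bootstrap; the plain resampling and the output-adjustment formula below are simplifications of mine, not the paper's exact procedure.

```python
import numpy as np


def bootstrap_rts_test(X, Y, B=2000, seed=None):
    """Naive sketch of the bootstrap loop in steps (1)-(5).

    Farrell scores (>= 1) from dea_output_score are inverted into
    Shephard-type distances (<= 1), so the statistics tend to 1 under CRS
    and fall below 1 otherwise.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    d_crs = 1.0 / np.array([dea_output_score(X, Y, o, vrs=False) for o in range(n)])
    d_vrs = 1.0 / np.array([dea_output_score(X, Y, o, vrs=True) for o in range(n)])
    T1_obs = d_crs.sum() / d_vrs.sum()           # ratio-of-sums statistic
    T2_obs = np.mean(d_crs / d_vrs)              # mean-of-ratios statistic
    T1_boot, T2_boot = np.empty(B), np.empty(B)
    for b in range(B):
        # (1) resample the distance estimates obtained under the CRS null
        d_star = rng.choice(d_crs, size=n, replace=True)
        # (2) adjust outputs so the pseudo-sample is consistent with the null
        #     (this particular adjustment formula is an assumption)
        Y_star = Y * (d_star / d_crs)[:, None]
        # (3) re-estimate the CCR and BCC models on the pseudo-sample
        d_crs_b = 1.0 / np.array([dea_output_score(X, Y_star, o, vrs=False) for o in range(n)])
        d_vrs_b = 1.0 / np.array([dea_output_score(X, Y_star, o, vrs=True) for o in range(n)])
        # (4) recompute the two test statistics
        T1_boot[b] = d_crs_b.sum() / d_vrs_b.sum()
        T2_boot[b] = np.mean(d_crs_b / d_vrs_b)
    # (5) the B repetitions give the bootstrap distributions of T1 and T2
    return T1_obs, T2_obs, T1_boot, T2_boot
```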
The (5.6) Test Procedure
• Use the bootstrap estimates Tib and the original estimates Ti to determine the critical value Cα for each selected nominal size, following the procedures outlined in Eq. (5.1)-(5.6) of Simar and Wilson (2002, pp. 121-122)
• Use the decision rule in Eq. (5.6) to evaluate the null hypothesis of constant returns to scale, i.e., reject H0 if the observed Ti is less than (1 - Cα)
The (5.11) Test Procedure
• Estimate the probability value p that a bootstrapped Tib is less than the observed Ti, based on the procedures outlined in Eq. (5.7)-(5.11) of Simar and Wilson (2002, p. 122)
• Compare this probability value with the selected nominal size to evaluate the null hypothesis of constant returns to scale, i.e., reject H0 if p < α
Both decision rules are sketched in code below.
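Both decision rules can be summarised in a few lines. This sketch simplifies the bias correction of Eq. (5.1)-(5.6), so the quantile-based critical value c_alpha is an assumption of mine rather than the paper's exact construction; the (5.11)-style p-value rule follows the slide directly.

```python
import numpy as np


def sw_decision_rules(T_obs, T_boot, alpha=0.05):
    """Sketch of the two decision rules; returns (reject_56, reject_511)."""
    # (5.6)-style rule: under H0 the statistic tends to 1, and the law of
    # (T_obs - 1) is approximated by the bootstrap law of (T_boot - T_obs);
    # reject when the observed statistic falls below 1 - c_alpha.
    c_alpha = -np.quantile(T_boot - T_obs, alpha)   # assumed critical value
    reject_56 = T_obs < 1.0 - c_alpha
    # (5.11)-style rule: bootstrap p-value = share of bootstrap statistics
    # below the observed value; reject when it is smaller than the size.
    p_value = np.mean(T_boot < T_obs)
    reject_511 = p_value < alpha
    return reject_56, reject_511
```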
Number of Trials and Bootstraps
• 1,000 experiments (trials), each with 2,000 bootstraps, for the Bootstrap-based test procedures
CPU Time
• For every 1,000 experiments, each Bootstrap-based test procedure using the FEAR program took on average 8,048 seconds of CPU time
• For every 1,000 experiments, each DEA-based test procedure used on average only 3 seconds of CPU time
• CPU time is for a Lenovo desktop PC equipped with an Intel Core E8400 CPU @ 3.00 GHz and 2.00 GB of RAM
Simulation Results
• The performance of DEA-based statistics is comparable to that of the Bootstrap-based statistics in terms of the occurrence of Type II errors
• DEA-based statistics outperform Bootstrap-based statistics in terms of the occurrence of Type I errors when the null is true
• The performance of Bootstrap-based statistics is much worse when decision rule (5.6) is employed to evaluate the null hypothesis than when (5.11) is used
Conclusion
• The performance of the DEA-based statistics proposed by Banker (1996) is comparable to that of the Bootstrap-based statistics suggested by Simar and Wilson (2002) for the occurrence of Type II errors and is, in fact, superior for Type I errors
• DEA-based statistics have the advantage of using much less CPU time than the Bootstrap-based test procedures
Implications
• There is no need to use the Bootstrap-based test procedures, since they yield results comparable to the DEA-based procedures
• Future research should focus on direct DEA-based statistical tests