Analysis of RT distributions with R. Emil Ratko-Dehnert, WS 2010/2011, Session 09 – 18.01.2011
Last time... • Recap of contents so far (Chapters 1 + 2) • Hierarchical Inference (Townsend's system) • Functional forms of RVs • Density function (TAFKA "distribution") • Cumulative distribution function • Quantiles • Kolmogorov-Smirnov test
II RT distributions in the field
Why analyze distributions? • The normality assumption is almost always violated • Experimental manipulations might affect only parts of the RT distribution • RT distributions can be used to constrain models, e.g. of visual search (model fitting and testing)
RT distributions • Typically unimodal and positively skewed • Can be characterized by, e.g., the following distributions: Ex-Wald, Ex-Gauss, Gamma, Weibull
Ex-Gauss Ex-Gauss distribution • Introduced by Burbeck and Luce (1982) • Is the convolution of a normal and an exponential distribution • Density: f(x | μ, σ, τ) = (1/τ) · exp( σ²/(2τ²) − (x − μ)/τ ) · Φ( (x − μ)/σ − σ/τ ), where Φ is the CDF of N(0,1)
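A minimal R sketch of this density (the function name dexgauss and the parameter values below are my own choices, not from the slides; packages such as gamlss.dist also ship an implementation):

dexgauss <- function(x, mu, sigma, tau) {
  # ex-Gaussian density: convolution of N(mu, sigma^2) with Exp(mean = tau)
  (1 / tau) * exp((mu - x) / tau + sigma^2 / (2 * tau^2)) *
    pnorm((x - mu) / sigma - sigma / tau)
}
# quick visual check against a simulated ex-Gaussian sample
rts <- rnorm(1e4, mean = 400, sd = 40) + rexp(1e4, rate = 1 / 150)
hist(rts, breaks = 60, freq = FALSE, main = "Ex-Gauss sketch")
curve(dexgauss(x, mu = 400, sigma = 40, tau = 150), add = TRUE, lwd = 2)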
Ex-Gauss Convolution • The convolution of two functions is a third function that can be viewed as a blended, modified version of the originals • It is the integral of the product of the two functions after one is reversed and shifted: (f ∗ g)(t) = ∫ f(s) · g(t − s) ds
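As an illustration, the convolution of a normal and an exponential density can be computed numerically in R (the grid, step size and parameter values below are illustrative assumptions):

# numerical convolution on a 1-ms grid
tt <- seq(0, 1500, by = 1)
f  <- dnorm(tt, mean = 400, sd = 40)      # normal component
g  <- dexp(tt, rate = 1 / 150)            # exponential component
conv <- convolve(f, rev(g), type = "open")[seq_along(tt)] * 1   # * step size
plot(tt, conv, type = "l", xlab = "RT", ylab = "density")
# compare with a simulated convolution (sum of the two random variables)
sims <- rnorm(1e4, 400, 40) + rexp(1e4, rate = 1 / 150)
lines(density(sims), lty = 2)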
Ex-Gauss Why popular? • Components of the Ex-Gauss might correspond to different mental processes • Exponential → decision; Gaussian → residual perceptual and response-generating processes • It is known to fit RT distributions very well (particularly for hard search tasks) • One can look at parameter dynamics and draw inferences about trade-offs
Ex-Gauss Further reading • Overview: • Schwarz (2001) • Van Zandt (2002) • Palmer, et al. (2009) • Others: • McGill (1963) • Hohle (1965) • Ratcliff (1978, 1979) • Burbeck, Luce (1982) • Hockley (1984) • Luce (1986) • Spieler, et al. (1996) • McElree & Carrasco (1999) • Spieler, et al. (2000) • Wagenmakers, Brown (2007)
Ex-Wald Ex-Wald distribution • Is the convolution of an exponential and a Wald distribution • Represents decision and response components as a diffusion process (Schwarz, 2001)
Ex-Wald Ex-Wald density • f(t) = γ · exp( −γt + a(μ − k)/σ² ) · F(t | k, a, σ), where k = √(μ² − 2γσ²) (requires μ² > 2γσ²) • F(· | k, a, σ) is the CDF of a Wald distribution with drift rate k, response criterion a and diffusion coefficient σ; γ is the rate of the exponential component
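A sketch of this density in R, using the parameterisation written above (the function names pwald/dexwald and the parameter values are mine, and the code assumes μ² > 2γσ²; for simulation, the Wald component could also be drawn with e.g. statmod::rinvgauss):

pwald <- function(t, mu, a, sigma = 1) {
  # CDF of a Wald / inverse-Gaussian first-passage time:
  # drift rate mu, response criterion a, diffusion coefficient sigma
  pnorm((mu * t - a) / (sigma * sqrt(t))) +
    exp(2 * a * mu / sigma^2) * pnorm(-(mu * t + a) / (sigma * sqrt(t)))
}
dexwald <- function(t, mu, a, gamma, sigma = 1) {
  # Ex-Wald density as on this slide; only valid when mu^2 > 2 * gamma * sigma^2
  k <- sqrt(mu^2 - 2 * gamma * sigma^2)
  gamma * exp(-gamma * t + a * (mu - k) / sigma^2) * pwald(t, k, a, sigma)
}
# illustrative parameter values (not from the slides)
curve(dexwald(x, mu = 0.5, a = 100, gamma = 0.01), from = 1, to = 1000,
      xlab = "RT (ms)", ylab = "density")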
Ex-Wald Diffusion Process • [Figure: diffusion process in information space. Evidence accumulates over time from starting point z with mean drift ν (drift rate ~ N(ν, η)); reaching boundary A triggers "Respond A", boundary B triggers "Respond B"; the distance between the boundaries is the boundary separation.]
Ex-Wald Qualitative Behaviour • [Figure: decision times for a lax criterion (A1) and a strict criterion (A2), shown for a larger and a smaller drift rate.]
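To make the diffusion picture concrete, a toy discretised random walk to a single boundary can be simulated in R (all values and the step size are illustrative assumptions, not taken from the slides):

simulate_decision_time <- function(nu = 0.5, eta = 0.1, a = 50,
                                   sigma = 1, dt = 1) {
  drift <- rnorm(1, mean = nu, sd = eta)   # trial-to-trial drift ~ N(nu, eta)
  evidence <- 0
  t <- 0
  while (evidence < a) {                   # accumulate until criterion a is reached
    evidence <- evidence + drift * dt + sigma * sqrt(dt) * rnorm(1)
    t <- t + dt
  }
  t
}
decision_times <- replicate(1000, simulate_decision_time())
hist(decision_times, breaks = 50, main = "Simulated decision times")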
Ex-Wald Why popular? • Parameters can be interpreted psychologically • Very successful in modelling RTs for a number of cognitive and perceptual tasks • Neurally plausible: neuronal firing behaves like a diffusion process, as observed in single-cell recordings
Ex-Wald Further reading • Theoretical Papers: • Schwarz (2001, 2002) • Ratcliff (1978) • Heathcote (2004) • Palmer, et al. (2005) • Wolfe, et al. (2009) • Cognitive+perceptual tasks: • Palmer, Huk & Shadlen (2005) • Visual Search: • Reeves, Santhi & Decaro (2005) • Palmer, et al. (2009)
Gamma Gamma distribution • Arises as the sum of a series of exponentially distributed processes • α = average scale (duration) of the component processes • β = reflects the approximate number of processes
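This additive interpretation can be checked in R by summing exponential stages and comparing with the corresponding Gamma density (the parameter values below are illustrative):

alpha <- 100   # average scale of the component processes (ms)
beta  <- 3     # number of exponential stages
stage_sums <- replicate(1e4, sum(rexp(beta, rate = 1 / alpha)))
hist(stage_sums, breaks = 60, freq = FALSE, main = "Sum of 3 exponential stages")
curve(dgamma(x, shape = beta, scale = alpha), add = TRUE, lwd = 2)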
Gamma Why popular? • In fact, not too popular (publication-wise) • It gives very decent fits when one assumes a model that treats an RT as the sum of three exponentially distributed processes (initial feed-forward processing → search → response selection)
Gamma Further reading • Dolan, van der Maas & Molenaar (2002): A framework for ML estimation of parameters of (mixtures of) common reaction time distributions given optional truncation or censoring. Behavior Research Methods, Instruments, & Computers, 34(3), 304-323
Weibull Weibull Distribution • For a series of races (bounded by 0 and ∞), the Weibull distribution gives an asymptotic description of their minima • Johnson's (1994) version has 3 parameters: α, γ, ξ • For γ = 1 it reduces to an exponential distribution; for γ ≈ 3.6 it approximates a normal distribution • Hence, for RT data, γ must lie somewhere in between
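A sketch of the three-parameter (shifted) Weibull in R; the mapping α = scale, γ = shape, ξ = shift is my reading of the slide and should be treated as an assumption:

dweibull3 <- function(x, alpha, gamma, xi) {
  # shift base R's two-parameter Weibull by xi
  dweibull(x - xi, shape = gamma, scale = alpha)
}
# gamma = 1 gives an exponential-like shape, gamma ~ 3.6 a near-normal shape
curve(dweibull3(x, alpha = 300, gamma = 1,   xi = 200), from = 200, to = 1500,
      xlab = "RT (ms)", ylab = "density")
curve(dweibull3(x, alpha = 300, gamma = 3.6, xi = 200), add = TRUE, lty = 2)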
Weibull Why popular? • Has been used in a variety of cognitive tasks • Excels in those that can be modelled as a race among competing units (e.g. memory-search RTs) • Has decent functional fits
Weibull Further reading • Logan (1992) • Johnson, et al. (1994) • Dolan, et al. (2002) • Chechile (2003) • Rouder, Lu, Speckman, Sun & Jiang (2005) • Cousineau, Goodman & Shiffrin (2002) • Palmer, Horowitz, Torralba & Wolfe (2009)
Comparing functional fits • The null hypothesis is a fit of the data with a normal distribution (the standard assumption for mean/variance analyses) • All proposed distributions beat the Gaussian, but not equally well: 1) Ex-Gauss, 2) Ex-Wald, 3) Gamma, 4) Weibull • The first three also show similar parameter trends • For further reading, see the simulation study by Palmer, Horowitz, Torralba & Wolfe (2009)
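As a sketch of such a comparison, one can fit several candidate distributions by maximum likelihood and compare them by AIC. MASS::fitdistr is used here for convenience; the simulated data and the restriction to Normal/Gamma/Weibull are my assumptions (the Ex-Gauss and Ex-Wald would need custom likelihoods, e.g. via optim()):

library(MASS)
rts <- rnorm(500, 400, 40) + rexp(500, rate = 1 / 150)   # fake positively skewed RTs
fits <- list(
  normal  = fitdistr(rts, "normal"),
  gamma   = fitdistr(rts, "gamma"),
  weibull = fitdistr(rts, "weibull")
)
# AIC = 2k - 2*logLik; smaller is better
sapply(fits, function(f) 2 * length(f$estimate) - 2 * f$loglik)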
Basic idea • In statistics, bootstrapping is a method to assign measures of accuracy to sample estimates • This is done by resampling with replacement from the original dataset • From these resamples one can estimate properties of an estimator (such as its variance) • It assumes IID data
Ex: Bootstrapping the sample mean • Original data: X = x1, x2, x3, ..., x10 • Sample mean: x̄ = 1/10 · (x1 + x2 + x3 + ... + x10) • Resample the data with replacement to obtain a bootstrap sample, e.g. X1* = x2, x5, x10, x10, x2, x8, x3, x10, x6, x7, with bootstrap mean μ1* • Repeat this 100 times to get μ1*, ..., μ100* • Now one has an empirical bootstrap distribution of the mean • From this one can derive e.g. a bootstrap CI for μ
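The same recipe in R (the variable names and the simulated data are mine):

x <- rnorm(10, mean = 500, sd = 80)                        # "original data"
B <- 100                                                   # number of resamples, as on the slide
boot_means <- replicate(B, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))                      # percentile bootstrap CI for the mean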
Pro bootstrapping • It is ... • simple and easy to implement • straightforward to derive SEs and CIs for complex estimators of distribution parameters (percentile points, odds ratios, correlation coefficients) • an appropriate way to control and check the stability of the results
Contra bootstrapping • It is only asymptotically consistent, and only under some conditions, so it does not provide general finite-sample guarantees • It has a tendency to be overly optimistic (it under-estimates the real error) • Application is not always possible because of the IID restriction
Situations to use bootstrapping • When the theoretical distribution of a statistic is complicated or unknown • When the sample size is insufficient for straightforward statistical inference • When power calculations have to be performed and only a small pilot sample is available • How many bootstrap samples should be computed? As many as your hardware allows for...
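For larger analyses the boot package automates this and also provides several CI types; the statistic definition and the number of resamples below are illustrative:

library(boot)
x <- rnorm(30, mean = 500, sd = 80)
b <- boot(data = x, statistic = function(d, i) mean(d[i]), R = 10000)
boot.ci(b, type = "perc")   # percentile bootstrap CI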
Creating own functions
# "inputs": arg1, arg2, arg3
new.fun <- function(arg1, arg2, arg3) {
  # algorithm of the function
  x <- exp(arg1)
  y <- sin(arg2)
  z <- mean(c(arg2, arg3))   # note: mean(arg2, arg3) would treat arg3 as the trim argument
  result <- x + y + z
  # "output": the last evaluated expression is returned
  result
}
# usage of new.fun
A <- new.fun(12, 0.4, -4)