New Version of rtdists on CRAN (v. 0.4-9): Accumulator Models for Response Time Distributions
I have just submitted a new version of rtdists to CRAN (v. 0.4-9). As I haven't mentioned rtdists on here yet, let me simply copy its description as a short introduction; a longer introduction follows below:
Provides response time distributions (density/PDF, distribution function/CDF, quantile function, and random generation): (a) Ratcliff diffusion model based on C code by Andreas and Jochen Voss and (b) linear ballistic accumulator (LBA) with different distributions underlying the drift rate.
Cognitive models of response time distributions are (usually) bivariate distributions that simultaneously account for choices and the corresponding response latencies. Arguably the most prominent of these models are the Ratcliff diffusion model and the linear ballistic accumulator (LBA). The main assumption of both is an internal evidence accumulation process: as soon as the accumulated evidence reaches a specific threshold, the corresponding response is invariably given. To predict errors, the evidence accumulation process in each model can reach the wrong threshold (because it is noisy or because of variability in its direction). The central parameters of both models are the quality of the evidence accumulation process (the drift rate) and the position of the threshold. The latter can be voluntarily set by the decision maker, for example to trade off speed and accuracy. Additionally, the models can account for an initial bias towards one response (via the position of the start point) and for non-decision processes. To account for differences between the two response time distributions beyond their differential weighting (e.g., fast or slow errors), the models allow trial-by-trial variability of most parameters.
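To make the evidence accumulation idea concrete, here is a minimal illustrative sketch (it does not use rtdists, and all parameter values are arbitrary) that simulates a single noisy accumulator drifting between two thresholds with a simple Euler scheme, roughly in the spirit of the diffusion model:

set.seed(1)
# toy random-walk illustration of diffusion-style evidence accumulation:
# a single accumulator starts at z and drifts with rate v plus Gaussian
# noise until it crosses the lower (0) or upper (a) threshold
simulate_walk <- function(v = 1, a = 2, z = 1, s = 1, dt = 0.001) {
  x <- z  # start point (z = a/2 means no initial bias)
  t <- 0
  while (x > 0 && x < a) {
    x <- x + v * dt + s * sqrt(dt) * rnorm(1)  # drift + noise
    t <- t + dt
  }
  c(rt = t, response = if (x >= a) 2 else 1)  # 2 = upper, 1 = lower
}
t(replicate(5, simulate_walk()))  # five simulated (rt, response) pairs

With a positive drift rate the upper threshold is reached on most trials, but the noise occasionally drives the process to the lower threshold, producing an error response together with its response time.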
The new version of rtdists provides a completely new interface for the LBA and a considerably overhauled interface for the diffusion model. In addition, the package now provides quantile functions for both models. In line with the general R conventions for distribution functions, the density functions start with d (dLBA & ddiffusion), the distribution functions with p (pLBA & pdiffusion), the quantile functions with q (qLBA & qdiffusion), and the random generation functions with r (rLBA & rdiffusion). All main functions are now fully vectorized across all parameters and also across response (i.e., boundary or accumulator).
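As a quick illustration of this naming scheme for the LBA (all parameter values below are arbitrary and chosen purely for demonstration):

# arbitrary illustrative parameters: start point range A, threshold b,
# non-decision time t0, and drift rate means/SDs for the two accumulators
A <- 0.5; b <- 1; t0 <- 0.2
mv <- c(1.2, -1); sv <- c(1, 0.5)

dLBA(c(0.6, 0.8), response = c(1, 2), A = A, b = b, t0 = t0,
     mean_v = mv, sd_v = sv)  # densities, vectorized across rt and response
pLBA(c(0.6, 0.8), response = c(1, 2), A = A, b = b, t0 = t0,
     mean_v = mv, sd_v = sv)  # (defective) cumulative probabilities
qLBA(0.25, response = 1, A = A, b = b, t0 = t0,
     mean_v = mv, sd_v = sv)  # RT at which the defective CDF reaches .25
rLBA(3, A = A, b = b, t0 = t0, mean_v = mv, sd_v = sv)  # 3 random trials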
As an example, I will show how to estimate both models for the data of a single individual using trial-wise maximum likelihood estimation (in contrast to the often-used binned chi-square estimation). We will be using one (somewhat randomly picked) participant from the data set that comes as an example with rtdists, speed_acc. Thanks to EJ Wagenmakers for providing these data and allowing them to be published on CRAN. We first prepare the data and plot the response time distributions.
require(rtdists)
require(lattice) # for plotting
lattice.options(default.theme = standard.theme(color = FALSE))
lattice.options(default.args = list(as.table = TRUE))

# Exp. 1; Wagenmakers, Ratcliff, Gomez, & McKoon (2008, JML)
data(speed_acc)
# remove excluded trials:
speed_acc <- droplevels(speed_acc[!speed_acc$censor, ])
# create numeric response variable where 1 is an error and 2 a correct response:
speed_acc$corr <- with(speed_acc, as.numeric(stim_cat == response)) + 1
# select data from participant 11, accuracy condition, non-word trials only
p11 <- speed_acc[speed_acc$id == 11 & speed_acc$condition == "accuracy" &
                   speed_acc$stim_cat == "nonword", ]

prop.table(table(p11$corr))
#          1          2 
# 0.04166667 0.95833333 

densityplot(~rt, p11, group = corr, auto.key = TRUE, plot.points = FALSE,
            weights = rep(1/nrow(p11), nrow(p11)), ylab = "Density")
The plot obviously does not show the true density of both response time distributions (which can also be inferred from the warning messages produced by the call to densityplot), but rather the defective densities, for which only the sum of both integrals is one. It shows that there are indeed many more correct responses (around 96% of the data) and that the error RTs have quite a long tail.
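As an aside, this defective nature is easy to demonstrate directly: evaluated at infinity, the two response-wise CDFs sum to one. A small sketch, again with arbitrary parameter values:

# the two defective LBA CDFs evaluated at Inf give the response
# probabilities, which sum to one (arbitrary illustrative parameters)
p1 <- pLBA(Inf, response = 1, A = 0.5, b = 1, t0 = 0.2,
           mean_v = c(1.2, -1), sd_v = c(1, 0.5))
p2 <- pLBA(Inf, response = 2, A = 0.5, b = 1, t0 = 0.2,
           mean_v = c(1.2, -1), sd_v = c(1, 0.5))
c(response_1 = p1, response_2 = p2, total = p1 + p2)  # total should be 1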
To estimate the LBA for these data we simply need a wrapper function to which we can pass the RTs and responses and which returns the summed log-likelihood of all data points (actually its negative, because most optimizers minimize by default). This function and the data then only need to be passed to our optimizer of choice (I like nlminb). To make the model identifiable we fix the SD of the drift rate for error RTs to 1 (other choices would be possible). The model converges at a maximum summed log-likelihood of 211.42 with parameters that look reasonable (note that the boundary b is parametrized as A + b). One might wonder about the negative mean drift rate for error RTs, but the default for the LBA is a normal distribution truncated at zero, so even though the mean is negative, only positive drift rates are produced (a negative drift rate would never reach the threshold and would thus produce an undefined RT).
ll_lba <- function(pars, rt, response) {
  d <- dLBA(rt = rt, response = response,
            A = pars["A"], b = pars["A"] + pars["b"], t0 = pars["t0"],
            mean_v = pars[c("v1", "v2")], sd_v = c(1, pars["sv"]),
            silent = TRUE)
  if (any(d == 0)) return(1e6)
  else return(-sum(log(d)))
}

start <- c(runif(3, 0.5, 3), runif(2, 0, 0.2), runif(1))
names(start) <- c("A", "v1", "v2", "b", "t0", "sv")
p11_norm <- nlminb(start, ll_lba, lower = c(0, -Inf, 0, 0, 0, 0),
                   rt = p11$rt, response = p11$corr)
p11_norm
# $par
#          A         v1         v2          b         t0         sv 
#  0.1182951 -2.7409929  1.0449789  0.4513499  0.1243456  0.2609930 
# 
# $objective
# [1] -211.4202
# 
# $convergence
# [1] 0
# 
# $iterations
# [1] 57
# 
# $evaluations
# function gradient 
#       76      395 
# 
# $message
# [1] "relative convergence (4)"
We also might want to fit the diffusion model to these data. For this we need a similar wrapper function. However, as the diffusion model density can fail for certain parameter combinations, the safest approach is to wrap the ddiffusion call in tryCatch. Note that the diffusion model is already identified, as the diffusion constant is set to 1 internally. Obtaining this fit can take longer than for the LBA and might need a few tries with different random starting values to reach the maximum summed log-likelihood of 207.55. This lower log-likelihood indicates that the diffusion model provides a somewhat worse account of this data set, but the parameters look reasonable.
ll_diffusion <- function(pars, rt, boundary) {
  densities <- tryCatch(
    ddiffusion(rt, boundary = boundary,
               a = pars[1], v = pars[2], t0 = pars[3],
               z = 0.5, sz = pars[4], st0 = pars[5], sv = pars[6]),
    error = function(e) 0)
  if (any(densities == 0)) return(1e6)
  return(-sum(log(densities)))
}

start <- c(runif(2, 0.5, 3), 0.1, runif(3, 0, 0.5))
names(start) <- c("a", "v", "t0", "sz", "st0", "sv")
p11_fit <- nlminb(start, ll_diffusion, lower = 0,
                  rt = p11$rt, boundary = p11$corr)
p11_fit
# $par
#         a         v        t0        sz       st0        sv 
# 1.3206011 3.2727201 0.3385602 0.3499652 0.2017950 1.0551704 
# 
# $objective
# [1] -207.5487
# 
# $convergence
# [1] 0
# 
# $iterations
# [1] 31
# 
# $evaluations
# function gradient 
#       50      214 
# 
# $message
# [1] "relative convergence (4)"
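Since both wrappers return the negative summed log-likelihood and both models have six free parameters here, a rough numerical comparison can be computed directly from the two nlminb objects. This is only a sketch; the models are not nested, so AIC serves merely as a heuristic:

# log-likelihood = -objective (the wrappers return the negative LL);
# both models estimate k = 6 free parameters
ll_lba_fit  <- -p11_norm$objective  # 211.42
ll_diff_fit <- -p11_fit$objective   # 207.55
k <- 6
c(AIC_lba       = 2 * k - 2 * ll_lba_fit,   # -410.84
  AIC_diffusion = 2 * k - 2 * ll_diff_fit)  # -403.10

The lower AIC for the LBA agrees with the direct comparison of the log-likelihoods.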
Finally, we might be interested in assessing the fit of the models graphically, in addition to simply comparing their log-likelihoods. Specifically, we will produce a version of a quantile probability plot in which we plot, for the .1, .3, .5, .7, and .9 quantiles, the RTs against the corresponding cumulative probabilities, and compare the model predictions with the values from the data. For this we need both the CDFs and the quantile functions. The cumulative probabilities are simply the quantiles weighted by the probability of each response; for example, the cumulative probability at the .1 quantile of the error RTs is .1 times the overall error rate (which is .04166667). Therefore, the first step in obtaining the model predictions is to obtain the predicted error rate by evaluating the CDF at infinity (or a sufficiently high value). We then use this predicted rate to get the quantiles on the defective scale for each response, which are in turn passed to the quantile functions to obtain the corresponding predicted RTs. Finally, we plot predictions and observed data separately for both models.
# quantiles:
q <- c(0.1, 0.3, 0.5, 0.7, 0.9)

## observed data:
(p11_q_c <- quantile(p11[p11$corr == 2, "rt"], probs = q))
#    10%    30%    50%    70%    90% 
# 0.4900 0.5557 0.6060 0.6773 0.8231 
(p11_q_e <- quantile(p11[p11$corr == 1, "rt"], probs = q))
#    10%    30%    50%    70%    90% 
# 0.4908 0.5391 0.5905 0.6413 1.0653 

### LBA:
# predicted proportion correct (the error rate is 1 minus this value)
(pred_prop_correct_lba <- pLBA(Inf, 2, A = p11_norm$par["A"],
                               b = p11_norm$par["A"] + p11_norm$par["b"],
                               t0 = p11_norm$par["t0"],
                               mean_v = c(p11_norm$par["v1"], p11_norm$par["v2"]),
                               sd_v = c(1, p11_norm$par["sv"])))
# [1] 0.9581342

(pred_correct_lba <- qLBA(q * pred_prop_correct_lba, response = 2,
                          A = p11_norm$par["A"],
                          b = p11_norm$par["A"] + p11_norm$par["b"],
                          t0 = p11_norm$par["t0"],
                          mean_v = c(p11_norm$par["v1"], p11_norm$par["v2"]),
                          sd_v = c(1, p11_norm$par["sv"])))
# [1] 0.4871709 0.5510265 0.6081855 0.6809797 0.8301290

(pred_error_lba <- qLBA(q * (1 - pred_prop_correct_lba), response = 1,
                        A = p11_norm$par["A"],
                        b = p11_norm$par["A"] + p11_norm$par["b"],
                        t0 = p11_norm$par["t0"],
                        mean_v = c(p11_norm$par["v1"], p11_norm$par["v2"]),
                        sd_v = c(1, p11_norm$par["sv"])))
# [1] 0.4684367 0.5529569 0.6273732 0.7233959 0.9314825

### diffusion:
# same result as when using Inf, but faster:
(pred_prop_correct_diffusion <- do.call(pdiffusion,
  args = c(rt = 20, as.list(p11_fit$par), boundary = "upper")))
# [1] 0.938958

(pred_correct_diffusion <- qdiffusion(q * pred_prop_correct_diffusion,
  a = p11_fit$par["a"], v = p11_fit$par["v"], t0 = p11_fit$par["t0"],
  sz = p11_fit$par["sz"], st0 = p11_fit$par["st0"], sv = p11_fit$par["sv"],
  boundary = "upper"))
# [1] 0.4963608 0.5737010 0.6361651 0.7148225 0.8817063

(pred_error_diffusion <- qdiffusion(q * (1 - pred_prop_correct_diffusion),
  a = p11_fit$par["a"], v = p11_fit$par["v"], t0 = p11_fit$par["t0"],
  sz = p11_fit$par["sz"], st0 = p11_fit$par["st0"], sv = p11_fit$par["sv"],
  boundary = "lower"))
# [1] 0.4483908 0.5226722 0.5828972 0.6671577 0.8833553

### plot predictions
par(mfrow = c(1, 2), cex = 1.2)
plot(p11_q_c, q * prop.table(table(p11$corr))[2], pch = 2, ylim = c(0, 1),
     xlim = c(0.4, 1.3), ylab = "Cumulative Probability",
     xlab = "Response Time (sec)", main = "LBA")
points(p11_q_e, q * prop.table(table(p11$corr))[1], pch = 2)
lines(pred_correct_lba, q * pred_prop_correct_lba, type = "b")
lines(pred_error_lba, q * (1 - pred_prop_correct_lba), type = "b")
legend("right", legend = c("data", "predictions"),
       pch = c(2, 1), lty = c(0, 1))
plot(p11_q_c, q * prop.table(table(p11$corr))[2], pch = 2, ylim = c(0, 1),
     xlim = c(0.4, 1.3), ylab = "Cumulative Probability",
     xlab = "Response Time (sec)", main = "Diffusion")
points(p11_q_e, q * prop.table(table(p11$corr))[1], pch = 2)
lines(pred_correct_diffusion, q * pred_prop_correct_diffusion, type = "b")
lines(pred_error_diffusion, q * (1 - pred_prop_correct_diffusion), type = "b")
The plots confirm the somewhat better fit of the LBA compared to the diffusion model for this data set: while the LBA provides an essentially perfect fit for the correct RTs, the diffusion model is somewhat off, especially at the higher quantiles. However, both models have similar problems predicting the long tail of the error RTs.
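As a quick numerical complement to the plots, the observed error rate can be compared with each model's implied error rate using the objects created above:

# observed error rate versus the error rates implied by each fit
prop.table(table(p11$corr))[1]   # observed:  ~0.042
1 - pred_prop_correct_lba        # LBA:       ~0.042
1 - pred_prop_correct_diffusion  # diffusion: ~0.061

Here the LBA's implied error rate is very close to the observed one, whereas the diffusion fit implies a somewhat higher error rate.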
Many thanks to my package coauthors, Andrew Heathcote, Scott Brown, and Matthew Gretton, for developing rtdists with me. And also many thanks to Andreas and Jochen Voss for releasing their C code of the diffusion model under the GPL.