Computes penalty based on quadratic form
penalty.Rd
This function computes quadratic penalties of the form $$0.5 \sum_{i} \lambda_i b_i^T S_i b_i,$$ with smoothing parameters \(\lambda_i\), coefficient vectors \(b_i\), and fixed penalty matrices \(S_i\).
It is intended to be used inside the penalised negative log-likelihood function when fitting models with penalised splines or simple random effects via quasi restricted maximum likelihood (qREML) with the qreml function.
For qreml to work, the likelihood function needs to be compatible with the RTMB R package to enable automatic differentiation.
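For a single smooth/random effect, the value follows directly from the quadratic form above. The following minimal sketch (with an arbitrary coefficient vector b, penalty matrix S, and penalty strength lambda chosen purely for illustration) compares the quadratic form written out by hand with a direct call to penalty:

b = c(0.5, -1, 2, 0, 1)             # illustrative coefficient vector
S = diag(5)                         # illustrative fixed penalty matrix
lambda = 2                          # illustrative penalty strength
0.5 * lambda * c(t(b) %*% S %*% b)  # quadratic form computed by hand
penalty(b, S, lambda)               # should give the same value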
Arguments
- re_coef
coefficient vector/matrix or list of coefficient vectors/matrices. Each list entry corresponds to a different smooth/random effect with its own associated penalty matrix in S. When several smooths/random effects of the same kind are present, it is convenient to pass them as a matrix, where each row corresponds to one smooth/random effect. This way all rows can use the same penalty matrix.
- S
fixed penalty matrix or list of penalty matrices matching the structure of re_coef and also the dimensions of the individual smooths/random effects
- lambda
penalty strength parameter vector whose length corresponds to the total number of random effects/spline coefficients in re_coef. E.g. if re_coef contains one vector and one matrix with 4 rows, then lambda needs to be of length 5 (see the sketch after this list).
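A structural sketch of the case just described for lambda (dimensions chosen purely for illustration): one coefficient vector plus a matrix holding four coefficient vectors of the same kind gives five random effects in total, so lambda has length 5.

re_coef = list(rep(0, 6),          # one single smooth/random effect
               matrix(0, 4, 10))   # four smooths/random effects of the same kind
S = list(diag(6), diag(10))        # one penalty matrix per list entry
lambda = rep(1, 5)                 # 1 + 4 = 5 penalty strengths
penalty(re_coef, S, lambda)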
Value
Returns the penalty value and reports to qreml.
Details
Caution: The formatting of re_coef needs to match the structure of the parameter list in your penalised negative log-likelihood function, i.e. you cannot have two random effect vectors with different names (different list elements in the parameter list), combine them into a matrix inside your likelihood function, and pass that matrix to penalty.
If these are separate random effects, each with its own name, they need to be passed as a list to penalty. Moreover, the ordering of re_coef needs to match the character vector random specified in qreml.
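A minimal structural sketch of this caution (the names re1, re2, S1, and S2 are hypothetical and only illustrate the intended layout): two separately named random effects stay separate and are passed as a list, in the same order as the random argument of qreml.

S1 = diag(5); S2 = diag(8)              # illustrative penalty matrices
par = list(beta0 = c(-2, 2),            # some fixed parameters
           re1 = rep(0, 5),             # first random effect, own list element
           re2 = rep(0, 8))             # second random effect, own list element
# inside the penalised likelihood: keep them separate, do not combine them into one matrix
penalty(list(par$re1, par$re2), S = list(S1, S2), lambda = c(1, 1))
# and match this ordering when calling qreml, e.g. random = c("re1", "re2")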
See also
qreml for the qREML algorithm
Examples
# Example with a single random effect
re = rep(0, 5)
S = diag(5)
lambda = 1
penalty(re, S, lambda)
#> [1] 0
# Example with two random effects,
# where one element contains two random effects of similar structure
re = list(matrix(0, 2, 5), rep(0, 4))
S = list(diag(5), diag(4))
lambda = c(1,1,2) # length = total number of random effects
penalty(re, S, lambda)
#> [1] 0
# Full model-fitting example
data = trex[1:1000,] # subset
# initial parameter list
par = list(logmu = log(c(0.3, 1)),      # step mean
           logsigma = log(c(0.2, 0.7)), # step sd
           beta0 = c(-2, 2),            # state process intercept
           betaspline = matrix(rep(0, 18), nrow = 2)) # state process spline coefs
# data object with initial penalty strength lambda
dat = list(step = data$step,    # step length
           tod = data$tod,      # time of day covariate
           N = 2,               # number of states
           lambda = rep(10, 2)) # initial penalty strength
# building model matrices
modmat = make_matrices(~ s(tod, bs = "cp"),
                       data = data.frame(tod = 1:24),
                       knots = list(tod = c(0, 24))) # wrapping points
dat$Z = modmat$Z # spline design matrix
dat$S = modmat$S # penalty matrix
# penalised negative log-likelihood function
pnll = function(par) {
  getAll(par, dat) # makes everything contained available without $
  Gamma = tpm_g(Z, cbind(beta0, betaspline), ad = TRUE) # transition probabilities
  delta = stationary_p(Gamma, t = 1, ad = TRUE) # initial distribution
  mu = exp(logmu) # step mean
  sigma = exp(logsigma) # step sd
  # calculating all state-dependent densities
  allprobs = matrix(1, nrow = length(step), ncol = N)
  ind = which(!is.na(step)) # only for non-NA obs.
  for(j in 1:N) allprobs[ind, j] = dgamma2(step[ind], mu[j], sigma[j])
  -forward_g(delta, Gamma[,,tod], allprobs, ad = TRUE) +
    penalty(betaspline, S, lambda) # this does all the penalization work
}
# model fitting
mod = qreml(pnll, par, dat, random = "betaspline")
#> Creating AD function
#> Initializing with lambda: 10 10
#> outer 1 - lambda: 3.636 2.859
#> outer 2 - lambda: 1.691 1.652
#> outer 3 - lambda: 0.967 1.184
#> outer 4 - lambda: 0.671 0.919
#> outer 5 - lambda: 0.546 0.739
#> outer 6 - lambda: 0.493 0.603
#> outer 7 - lambda: 0.472 0.494
#> outer 8 - lambda: 0.464 0.406
#> outer 9 - lambda: 0.463 0.334
#> outer 10 - lambda: 0.464 0.276
#> outer 11 - lambda: 0.467 0.23
#> outer 12 - lambda: 0.471 0.194
#> outer 13 - lambda: 0.474 0.166
#> outer 14 - lambda: 0.477 0.146
#> outer 15 - lambda: 0.48 0.131
#> outer 16 - lambda: 0.483 0.12
#> outer 17 - lambda: 0.484 0.112
#> outer 18 - lambda: 0.486 0.106
#> outer 19 - lambda: 0.487 0.102
#> outer 20 - lambda: 0.488 0.099
#> outer 21 - lambda: 0.489 0.097
#> outer 22 - lambda: 0.489 0.095
#> outer 23 - lambda: 0.49 0.094
#> outer 24 - lambda: 0.49 0.093
#> outer 25 - lambda: 0.49 0.093
#> outer 26 - lambda: 0.49 0.092
#> outer 27 - lambda: 0.49 0.092
#> outer 28 - lambda: 0.49 0.092
#> outer 29 - lambda: 0.49 0.092
#> Converged
#> Final model fit with lambda: 0.49 0.092