Commit efa3fed1 authored by Dirk Eddelbuettel

Import Debian changes 1.8-11-1

mgcv (1.8-11-1) unstable; urgency=low

  * New upstream release
parents 73ecf595 3c859201
** denotes quite substantial/important changes
*** denotes really big changes
Currently deprecated and liable to be removed:
- bam(...,sparse=TRUE) [1.8-5]
- negbin() with search for `theta' - use 'nb' instead [1.8-0]
- single penalty tensor product smooths.
- p.type!=0 in summary.gam.
1.8-11
* bam(...,discrete=TRUE) can now handle matrix arguments to smooths (and hence
linear functional terms); see the brief usage sketch after this list.
* bam(...,discrete=TRUE) bug fix in fixed sp handling.
* bam(...,discrete = TRUE) db.drho reparameterization fix, fixing nonsensical
edf2. Also bam edf2 limited to maximum of edf1.
* smoothCon rescaling of S changed to use efficient matrix norm in place of
relatively slow computation involving model matrix crossproduct.
* bam aic corrected for AR model if present.
* Added select=TRUE argument to 'bam'.
* Several discrete prediction fixes including improved thread safety.
* bam/gam: the 'gcv.ubre' field is now named according to "method".
* gam.side modified so that if a smooth has 'side.constrain==FALSE' it is
neither constrained, nor used in the computation of constraints for other
terms (the latter part being new). Very limited impact!
* No longer checks if SUPPORT_OPENMP defined in Rconfig.h, but only if _OPENMP
defined. No change in actual behaviour.
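A brief, hedged usage sketch of the matrix-argument item above (editorial illustration only, not part of the package sources; the data, term choices and noise level are made up). A smooth with matrix arguments and a matrix 'by' weight is a linear functional term, and with 1.8-11 it can be fitted under the discretised algorithm:
library(mgcv)
set.seed(1)
n <- 2000; m <- 20
X <- matrix(runif(n * m), n, m)           # n x m matrix of evaluation points
L <- matrix(1 / m, n, m)                  # weights: term is sum_j L[i,j] f(X[i,j])
y <- rowSums(L * sin(2 * pi * X)) + rnorm(n) * 0.1
dat <- list(y = y, X = X, L = L)
b <- bam(y ~ s(X, by = L), data = dat, discrete = TRUE)  # new in 1.8-11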
1.8-10
** 'multinom' family implemented for multinomial logistic regression.
......
Package: mgcv
Version: 1.8-10
Version: 1.8-11
Author: Simon Wood <simon.wood@r-project.org>
Maintainer: Simon Wood <simon.wood@r-project.org>
Title: Mixed GAM Computation Vehicle with GCV/AIC/REML Smoothness
......@@ -16,6 +16,6 @@ LazyLoad: yes
ByteCompile: yes
License: GPL (>= 2)
NeedsCompilation: yes
Packaged: 2015-12-12 07:33:02 UTC; sw283
Packaged: 2016-01-22 10:21:27 UTC; sw283
Repository: CRAN
Date/Publication: 2015-12-12 18:49:20
Date/Publication: 2016-01-24 00:07:54
89ff6c925835eb3c183eedb57555f388 *ChangeLog
3385fa53a0e69f0f4011ee75e4ce20d7 *DESCRIPTION
42c7248ef130aa0acc82c3fd89dc5d46 *ChangeLog
38d226d8a82c189fc41ea959d7550a17 *DESCRIPTION
eb723b61539feef013de476e68b5c50a *GPL-2
3e32a2a94cab5c89a107d3a0f48b02dd *NAMESPACE
431e65b975d2c7387c2a21e0468da503 *R/bam.r
3f79e768d946cfe7b3cdcf520e692d36 *R/bam.r
52ea7081e235df890d3e51198bcc3092 *R/coxph.r
81dedc3a4d8529ae61099024a96fdb93 *R/efam.r
5a542999fd494e3872c877c6e1dc2706 *R/fast-REML.r
e977376bf0123ebd107132bbd185f612 *R/fast-REML.r
57f8b12b97304227870a4e43a33b5d75 *R/gam.fit3.r
472d6e64f3f403aad4f6550f921a4bba *R/gam.fit4.r
e63276b78a4ff2566af2e651bfc6aa9c *R/gam.sim.r
1b620b840ca67c9759639bd6e53824e3 *R/gam.sim.r
5aa7c3e6ce301ae6fbd678a9b008e7c5 *R/gamlss.r
ff163969e9ad7a38857907bf38a39ec0 *R/gamm.r
3b0d5cac4a59ef1a8cb325b249699476 *R/jagam.r
5c89c29a77a936e270108f7dd54e5770 *R/mgcv.r
cd89c8e25acf1c3e382e24898cf5269e *R/misc.r
8e5975e278d2405747e21a67742bf436 *R/mgcv.r
c69a68d6d3d987a61979dd73c03efc3d *R/misc.r
66c24aed2f8fc9f6bce321794f8aff87 *R/mvam.r
76192b5b36d8828af6e50013823732b4 *R/plots.r
9c88e467009ca9932160efd8f43a1a87 *R/smooth.r
10bc146058121c709fa7dc69419e0a65 *R/smooth.r
1dde3ff2f6c3227d1a4765e41d0cf92b *R/soap.r
76cc875719bf0ef9eab45ea5bfeccda6 *R/sparse.r
e468195a83fab90da8e760c2c3884bd3 *data/columb.polys.rda
......@@ -39,8 +39,9 @@ d0fa291cbbcef359c61e426f6ba38fbb *man/Predict.matrix.soap.film.Rd
f12468625253dd3603907de233762fd6 *man/Rrank.Rd
f50d43782718aa1ea66e0077952afc93 *man/Tweedie.Rd
80f8763baa4987579e2aa56073a9e94e *man/anova.gam.Rd
6364208a8235b47435484e1e4ef45ac0 *man/bam.Rd
6870afb13301f2ea16c57783f557fd2b *man/bam.Rd
3b8ce471159dd843bf26ab7eb17a9f33 *man/bam.update.Rd
f4112e262b8280c024c80ff8fa02735f *man/bug.reports.mgcv.Rd
a2beb811b1093c5e82ef32d7de1f7d32 *man/cSplineDes.Rd
a72647229fd92763636941e61597995d *man/choose.k.Rd
c03748964ef606621418e428ae49b103 *man/columb.Rd
......@@ -78,7 +79,7 @@ f30b9dc971521416a167a6b13302f06b *man/gaulss.Rd
2f222eeeb3d7bc42f93869bf8c2af58a *man/influence.gam.Rd
39b9de9dbac7d9dc5c849e1a37def675 *man/initial.sp.Rd
2a37ae59a9f9f5a0a58af45947eca524 *man/interpret.gam.Rd
4a1db38cb03fda0331cd4087ea820573 *man/jagam.Rd
ddb9a2b533b62a35cdd4338e6adb48f9 *man/jagam.Rd
07d2c259b9edf164f42935170b4fccd0 *man/ldTweedie.Rd
58e73ac26b93dc9d28bb27c8699e12cf *man/linear.functional.terms.Rd
93035193b0faa32700e1421ce8c1e9f6 *man/logLik.gam.Rd
......@@ -87,7 +88,7 @@ b1c95a20afd6eb0825c00b46b8c3cbfa *man/ls.size.Rd
5169af4be5fccf9fa79b6de08e9ea035 *man/magic.post.proc.Rd
e5cb91b2fd8646476b8f1114228a33cf *man/mgcv-FAQ.Rd
ba14a20f6fa77f066bac7cdfe88b8fff *man/mgcv-package.Rd
f58c9c3919f6971e7f9f7b985a696a08 *man/mgcv-parallel.Rd
6db1ae82808e56bd44c04066e2ec09aa *man/mgcv-parallel.Rd
00ccf213c31910cd14f1df65a300eb33 *man/model.matrix.gam.Rd
2f2fdc722c5e9e58664da9378451cd4a *man/mono.con.Rd
d33914a328f645af13f5a42914ca0f35 *man/mroot.Rd
......@@ -161,16 +162,16 @@ dc1ef92ff4454734c3a24876e299b760 *po/ko.po
dfd4eec9edc7d1ab6354d47b6e2bd42f *po/pl.po
03972284b3400cf82cacd5d2dc4b8cb3 *src/Makevars
b0459e16b04844271cf5e6b53aca0e47 *src/coxph.c
3175c54f752b686e77c1d0d9db3da2f2 *src/discrete.c
91c7e18bb76056ed3d89541aae8ff561 *src/discrete.c
a6b9681fae3eeddce24b3151a9442f2a *src/gdi.c
2436f9b328e80370ce2203dbf1dd813c *src/general.h
af03cf853914d93a15e2aee7ecd70d6e *src/init.c
33d2b72915de9474ae236c0f3de7ca1f *src/init.c
7c1553b521f89ab43477245c44b9d006 *src/magic.c
6f19798779d82ff9f06aad323c5fd758 *src/mat.c
1c84e40793603bd3b0e3eeb6161505ca *src/mat.c
25921d60399f6aad99a4cd761fd72628 *src/matrix.c
6b781cbd5b9cfee68ad30bb7ce31ef3a *src/matrix.h
cba80885f5885bd48227696a239c53bb *src/mgcv.c
200aa22ca9bc0140e245c299e197054a *src/mgcv.h
a7225f1fae5be15131c23fa20b0730ce *src/mgcv.h
b679e9063bc032f364c3ec24d57ddb08 *src/misc.c
465b8790ca2dfb6e8c5635cacabf5460 *src/mvn.c
8f480dc455f9ff011c3e8f059efec2c5 *src/qp.c
......
This diff is collapsed.
......@@ -651,7 +651,9 @@ Sl.iftChol <- function(Sl,XX,R,d,beta,piv) {
sum(db[,j]*Skb[[k]]))
}
}
list(bSb=sum(beta*Sb),bSb1=bSb1,bSb2=bSb2,d1b=db,rss1=rss1,rss2=rss2)
list(bSb=sum(beta*Sb),bSb1=bSb1,bSb2=bSb2,
d1b=db ## BUG - this needs transforming as coef - here, or where used
,rss1=rss1,rss2=rss2)
} ## end Sl.iftChol
......@@ -662,7 +664,7 @@ Sl.fitChol <- function(Sl,XX,f,rho,yy=0,L=NULL,rho0=0,log.phi=0,phi.fixed=TRUE,n
## yy = y'Wy in order to get derivatives w.r.t. phi.
rho <- if (is.null(L)) rho + rho0 else L%*%rho + rho0
if (length(rho)<length(rho0)) rho <- rho0 ## ncol(L)==0 or length(rho)==0
## get log|S|_+ without stability transform...
fixed <- rep(FALSE,length(rho))
ldS <- ldetS(Sl,rho,fixed,np=ncol(XX),root=FALSE,repara=FALSE,nt=nt)
......@@ -708,7 +710,8 @@ Sl.fitChol <- function(Sl,XX,f,rho,yy=0,L=NULL,rho0=0,log.phi=0,phi.fixed=TRUE,n
rss.bSb <- yy - sum(beta*f) ## use identity ||y-Xb||^2 + b'Sb = y'y - b'X'y (b is minimizer)
reml1[n+1] <- (-rss.bSb/phi + nobs - Mp)/2
d <- c(-(dift$rss1[!fixed] + dift$bSb1[!fixed]),rss.bSb)/(2*phi)
reml2 <- rbind(cbind(reml2,d[1:n]),d)
reml2 <- rbind(cbind(reml2,d[1:n]),d)
if (!is.null(L)) L <- rbind(cbind(L,rep(0,nrow(L))),c(rep(0,ncol(L)),1))
}
if (!is.null(L)) {
......@@ -718,22 +721,23 @@ Sl.fitChol <- function(Sl,XX,f,rho,yy=0,L=NULL,rho0=0,log.phi=0,phi.fixed=TRUE,n
uconv.ind <- (abs(reml1) > tol)|(abs(diag(reml2))>tol)
hess <- reml2
grad <- reml1
if (sum(uconv.ind)!=ncol(reml2)) {
reml1 <- reml1[uconv.ind]
reml2 <- reml2[uconv.ind,uconv.ind]
}
er <- eigen(reml2,symmetric=TRUE)
er$values <- abs(er$values)
me <- max(er$values)*.Machine$double.eps^.5
er$values[er$values<me] <- me
step <- rep(0,length(uconv.ind))
step[uconv.ind] <- -er$vectors%*%((t(er$vectors)%*%reml1)/er$values)
if (length(grad)>0) {
if (sum(uconv.ind)!=ncol(reml2)) {
reml1 <- reml1[uconv.ind]
reml2 <- reml2[uconv.ind,uconv.ind]
}
## limit the step length...
ms <- max(abs(step))
if (ms>4) step <- 4*step/ms
er <- eigen(reml2,symmetric=TRUE)
er$values <- abs(er$values)
me <- max(er$values)*.Machine$double.eps^.5
er$values[er$values<me] <- me
step <- rep(0,length(uconv.ind))
step[uconv.ind] <- -er$vectors%*%((t(er$vectors)%*%reml1)/er$values)
## limit the step length...
ms <- max(abs(step))
if (ms>4) step <- 4*step/ms
} else step <- 0
## return the coefficient estimate, the reml grad and the Newton step...
list(beta=beta,grad=grad,step=step,db=dift$d1b,PP=PP,R=R,piv=piv,rank=r,
hess=hess,ldetS=ldS$ldetS,ldetXXS=ldetXXS)
......@@ -1045,7 +1049,6 @@ Sl.postproc <- function(Sl,fit,undrop,X0,cov=FALSE,scale = -1,L,nt=nt) {
}
Vp <- PP * scale ## cov matrix
## sp uncertainty correction...
## BUG: possibility of L ignored here.
if (!is.null(L)) d1b <- d1b%*%L
M <- ncol(d1b)
ev <- eigen(fit$outer.info$hess,symmetric=TRUE)
......
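The Newton update in 'Sl.fitChol' above guards against an indefinite Hessian by eigen-decomposing it, replacing the eigenvalues by their absolute values (floored at a small multiple of the largest), and then capping the step length. A minimal standalone sketch of that idea, with made-up argument names (illustration only, not package code):
newton.step <- function(grad, hess, max.len = 4) {
  er <- eigen(hess, symmetric = TRUE)
  er$values <- abs(er$values)                    # force positive curvature
  me <- max(er$values) * .Machine$double.eps^.5  # floor near-zero eigenvalues
  er$values[er$values < me] <- me
  step <- -er$vectors %*% ((t(er$vectors) %*% grad) / er$values)
  ms <- max(abs(step))
  if (ms > max.len) step <- max.len * step / ms  # limit the step length
  drop(step)
}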
......@@ -57,8 +57,7 @@ gamSim <- function(eg=1,n=400,dist="normal",scale=2,verbose=TRUE) {
y <- f*x1 + e
return(data.frame(y=y,x1=x1,x2=x2,f=f))
} else if (eg==4) { ## factor `by' variable
if (verbose) cat("Factor `by' variable example\n")
n <- 400
if (verbose) cat("Factor `by' variable example\n")
x0 <- runif(n, 0, 1)
x1 <- runif(n, 0, 1)
x2 <- runif(n, 0, 1)
......@@ -67,7 +66,7 @@ gamSim <- function(eg=1,n=400,dist="normal",scale=2,verbose=TRUE) {
f3 <- 0.2 * x2^11 * (10 * (1 - x2))^6 +
10 * (10 * x2)^3 * (1 - x2)^10
e <- rnorm(n, 0, scale)
fac<-as.factor(c(rep(1,100),rep(2,100),rep(3,200)))
fac<-as.factor(sample(1:3,n,replace=TRUE))
fac.1<-as.numeric(fac==1);fac.2<-as.numeric(fac==2);
fac.3<-as.numeric(fac==3)
y<-f1*fac.1+f2*fac.2+f3*fac.3+ e
......
......@@ -437,7 +437,7 @@ gam.side <- function(sm,Xp,tol=.Machine$double.eps^.5,with.pen=FALSE)
sm.dim <- sm.id
for (d in 1:maxDim) {
for (i in 1:m) {
if (sm[[i]]$dim==d) for (j in 1:d) { ## work through terms
if (sm[[i]]$dim==d&&sm[[i]]$side.constrain) for (j in 1:d) { ## work through terms
term<-sm[[i]]$vn[j]
a <- sm.id[[term]]
la <- length(a)+1
......@@ -448,7 +448,7 @@ gam.side <- function(sm,Xp,tol=.Machine$double.eps^.5,with.pen=FALSE)
}
## so now each unique variable name has an associated array of
## the smooths of which it is an argument, arranged in ascending
## order of dimension.
## order of dimension. Smooths for which side.constrain==FALSE are excluded.
if (maxDim==1) warning("model has repeated 1-d smooths of same variable.")
## Now set things up to enable term specific model matrices to be
......@@ -2066,6 +2066,7 @@ gam <- function(formula,family=gaussian(),data=list(),weights=NULL,subset=NULL,n
object$call <- G$cl # needed for update() to work
class(object) <- c("gam","glm","lm")
if (is.null(object$deviance)) object$deviance <- sum(residuals(object,"deviance")^2)
names(object$gcv.ubre) <- method
environment(object$formula) <- environment(object$pred.formula) <-
environment(object$terms) <- environment(object$pterms) <- .GlobalEnv
if (!is.null(object$model)) environment(attr(object$model,"terms")) <- .GlobalEnv
......@@ -4157,8 +4158,9 @@ initial.spg <- function(x,y,weights,family,S,off,L=NULL,lsp0=NULL,type=1,
} else mustart <- mukeep
if (inherits(family,"extended.family")) {
theta <- family$getTheta()
w <- .5 * drop(family$Dd(y,mustart,theta,weights)$EDmu2*family$mu.eta(family$linkfun(mustart))^2)
} else w <- drop(weights*family$mu.eta(family$linkfun(mustart))^2/family$variance(mustart))
## use 'as.numeric' - 'drop' can leave result as 1D array...
w <- .5 * as.numeric(family$Dd(y,mustart,theta,weights)$EDmu2*family$mu.eta(family$linkfun(mustart))^2)
} else w <- as.numeric(weights*family$mu.eta(family$linkfun(mustart))^2/family$variance(mustart))
w <- sqrt(w)
if (type==1) { ## what PI would have used
lambda <- initial.sp(w*x,S,off)
......
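The 'as.numeric' change in 'initial.spg' above matters because 'drop' only removes array dimensions of extent one, so a 1-D array of length greater than one keeps its 'dim' attribute. A small illustration (editorial, not package code):
x <- array(1:3, dim = 3)   # a 1-D array, as some elementwise matrix ops can return
is.array(drop(x))          # TRUE: drop() leaves the length-3 dimension in place
is.array(as.numeric(x))    # FALSE: as.numeric() strips it to a plain vector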
......@@ -50,14 +50,14 @@ mvn.ll <- function(y,X,beta,dbeta=NULL) {
}
}
list(l=oo$ll,lb=oo$lb,lbb=matrix(oo$lbb,nb,nb),dH=dH)
}
} ## mvn.ll
## discretized covariate routines...
XWXd <- function(X,w,k,ts,dt,v,qc,nthreads=1,drop=NULL,ar.stop=-1,ar.row=-1,ar.w=-1) {
XWXd <- function(X,w,k,ks,ts,dt,v,qc,nthreads=1,drop=NULL,ar.stop=-1,ar.row=-1,ar.w=-1) {
## Form X'WX given weights in w and X in compressed form in list X.
## each element of X is a (marginal) model submatrix. Full version
## is given by X[[i]][k[,i],]. list X relates to length(ds) separate
## is given by X[[i]][k[,i],]. list X relates to length(ts) separate
## terms. ith term starts at matrix ts[i] and has dt[i] marginal matrices.
## Terms with several marginals are tensor products and may have
## constraints (if qc[i]>1), stored as a householder vector in v[[i]].
......@@ -68,45 +68,49 @@ XWXd <- function(X,w,k,ts,dt,v,qc,nthreads=1,drop=NULL,ar.stop=-1,ar.row=-1,ar.w
n <- length(w);pt <- 0;
for (i in 1:nt) pt <- pt + prod(p[ts[i]:(ts[i]+dt[i]-1)]) - as.numeric(qc[i]>0)
oo <- .C(C_XWXd,XWX =as.double(rep(0,pt^2)),X= as.double(unlist(X)),w=as.double(w),
k=as.integer(k-1),m=as.integer(m),p=as.integer(p), n=as.integer(n),
k=as.integer(k-1),ks=as.integer(ks-1),m=as.integer(m),p=as.integer(p), n=as.integer(n),
ns=as.integer(nx), ts=as.integer(ts-1), as.integer(dt), nt=as.integer(nt),
v = as.double(unlist(v)),qc=as.integer(qc),nthreads=as.integer(nthreads),
ar.stop=as.integer(ar.stop-1),ar.row=as.integer(ar.row-1),ar.weights=as.double(ar.w))
if (is.null(drop)) matrix(oo$XWX,pt,pt) else matrix(oo$XWX,pt,pt)[-drop,-drop]
} ## XWXd
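## Editorial illustration of the compressed storage assumed by these routines
## (made-up data, not package code): each X[[i]] holds only the unique covariate
## rows of a marginal model matrix, and index column k[,i] expands it back to n
## rows, so X'WX, X'Wy and Xb can be accumulated without forming the full matrix.
Xu <- matrix(c(1, 2, 3, 4), 2, 2)   # 2 unique rows for one marginal
ki <- c(1, 2, 2, 1, 1)              # row index for each of n = 5 observations
Xfull <- Xu[ki, ]                   # full 5 x 2 marginal model matrix
w <- runif(5)
XtWX <- t(Xfull) %*% (w * Xfull)    # what XWXd computes without forming Xfull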
XWyd <- function(X,w,y,k,ts,dt,v,qc,drop=NULL,ar.stop=-1,ar.row=-1,ar.w=-1) {
XWyd <- function(X,w,y,k,ks,ts,dt,v,qc,drop=NULL,ar.stop=-1,ar.row=-1,ar.w=-1) {
## X'Wy...
m <- unlist(lapply(X,nrow));p <- unlist(lapply(X,ncol))
nx <- length(X);nt <- length(ts)
n <- length(w);pt <- 0;
for (i in 1:nt) pt <- pt + prod(p[ts[i]:(ts[i]+dt[i]-1)]) - as.numeric(qc[i]>0)
oo <- .C(C_XWyd,XWy=rep(0,pt),y=as.double(y),X=as.double(unlist(X)),w=as.double(w),k=as.integer(k-1),
ks=as.integer(ks-1),
m=as.integer(m),p=as.integer(p),n=as.integer(n), nx=as.integer(nx), ts=as.integer(ts-1),
dt=as.integer(dt),nt=as.integer(nt),v=as.double(unlist(v)),qc=as.integer(qc),
ar.stop=as.integer(ar.stop-1),ar.row=as.integer(ar.row-1),ar.weights=as.double(ar.w))
if (is.null(drop)) oo$XWy else oo$XWy[-drop]
} ## XWyd
Xbd <- function(X,beta,k,ts,dt,v,qc,drop=NULL) {
Xbd <- function(X,beta,k,ks,ts,dt,v,qc,drop=NULL) {
## note that drop may contain the index of columns of X to drop before multiplying by beta.
## equivalently we can insert zero elements into beta in the appropriate places.
n <- if (is.matrix(k)) nrow(k) else length(k)
m <- unlist(lapply(X,nrow));p <- unlist(lapply(X,ncol))
nx <- length(X);nt <- length(ts)
n <- if (is.matrix(k)) nrow(k) else length(k) ## number of data
m <- unlist(lapply(X,nrow)) ## number of rows in each discrete model matrix
p <- unlist(lapply(X,ncol)) ## number of cols in each discrete model matrix
nx <- length(X) ## number of model matrices
nt <- length(ts) ## number of terms
if (!is.null(drop)) {
b <- if (is.matrix(beta)) matrix(0,nrow(beta)+length(drop),ncol(beta)) else rep(0,length(beta)+length(drop))
if (is.matrix(beta)) b[-drop,] <- beta else b[-drop] <- beta
beta <- b
}
bc <- if (is.matrix(beta)) ncol(beta) else 1
oo <- .C(C_Xbd,f=as.double(rep(0,n*bc)),beta=as.double(beta),X=as.double(unlist(X)),k=as.integer(k-1),
bc <- if (is.matrix(beta)) ncol(beta) else 1 ## number of columns in beta
oo <- .C(C_Xbd,f=as.double(rep(0,n*bc)),beta=as.double(beta),X=as.double(unlist(X)),k=as.integer(k-1),
ks = as.integer(ks-1),
m=as.integer(m),p=as.integer(p), n=as.integer(n), nx=as.integer(nx), ts=as.integer(ts-1),
as.integer(dt), as.integer(nt),as.double(unlist(v)),as.integer(qc),as.integer(bc))
if (is.matrix(beta)) matrix(oo$f,n,bc) else oo$f
} ## Xbd
diagXVXd <- function(X,V,k,ts,dt,v,qc,drop=NULL,n.threads=1) {
diagXVXd <- function(X,V,k,ks,ts,dt,v,qc,drop=NULL,n.threads=1) {
## discrete computation of diag(XVX')
n <- if (is.matrix(k)) nrow(k) else length(k)
m <- unlist(lapply(X,nrow));p <- unlist(lapply(X,ncol))
......@@ -118,6 +122,7 @@ diagXVXd <- function(X,V,k,ts,dt,v,qc,drop=NULL,n.threads=1) {
V <- V0;rm(V0)
} else pv <- ncol(V)
oo <- .C(C_diagXVXt,diag=as.double(rep(0,n)),V=as.double(V),X=as.double(unlist(X)),k=as.integer(k-1),
ks=as.integer(ks-1),
m=as.integer(m),p=as.integer(p), n=as.integer(n), nx=as.integer(nx), ts=as.integer(ts-1),
as.integer(dt), as.integer(nt),as.double(unlist(v)),as.integer(qc),as.integer(pv),as.integer(n.threads))
oo$diag
......
......@@ -1677,6 +1677,7 @@ smooth.construct.fs.smooth.spec <- function(object,data,knots) {
k <- 1
oterm <- object$term
## and strip it from the terms...
for (i in 1:object$dim) if (object$term[i]!=fterm) {
object$term[k] <- object$term[i]
k <- k + 1
......@@ -1747,25 +1748,31 @@ smooth.construct.fs.smooth.spec <- function(object,data,knots) {
object$rank <- c(object$rank*nf,rep(nf,null.d))
}
object$side.constrain <- FALSE ## don't apply side constraints - these are really random effects
object$null.space.dim <- 0
object$C <- matrix(0,0,ncol(object$X)) # null constraint matrix
object$plot.me <- TRUE
class(object) <- if ("tensor.smooth.spec"%in%spec.class) c("fs.interaction","tensor.smooth") else
"fs.interaction"
if ("tensor.smooth.spec"%in%spec.class) {
## give object margins like a tensor product smooth...
## need just enough for fitting and discrete prediction to work
object$margin <- list()
if (object$dim>1) stop("fs smooth not suitable for discretisation with more than one metric predictor")
form1 <- as.formula(paste("~",object$fterm,"-1"))
fac -> data[[fterm]]
object$margin[[1]] <- list(X=model.matrix(form1,data),term=object$fterm)
object$margin[[2]] <- list(X=rp$X,term=object$base$term)
object$margin[[1]] <- list(X=model.matrix(form1,data),term=object$fterm,form=form1,by="NA")
class(object$margin[[1]]) <- "random.effect"
object$margin[[2]] <- object
object$margin[[2]]$X <- rp$X
object$margin[[2]]$margin.only <- TRUE
## list(X=rp$X,term=object$base$term,base=object$base,margin.only=TRUE,P=object$P,by="NA")
## class(object$margin[[2]]) <- "fs.interaction"
## note --- no re-ordering at present - inefficient as factor should really
## be last, but that means complete re-working of penalty structure.
} ## finished tensor like setup
object$side.constrain <- FALSE ## don't apply side constraints - these are really random effects
object$null.space.dim <- 0
object$C <- matrix(0,0,ncol(object$X)) # null constraint matrix
object$plot.me <- TRUE
#object$base.bs <- class(object) ## base smoother class
class(object) <- if ("tensor.smooth.spec"%in%spec.class) c("fs.interaction","tensor.smooth") else
"fs.interaction"
object
} ## end of smooth.construct.fs.smooth.spec
......@@ -1783,6 +1790,7 @@ Predict.matrix.fs.interaction <- function(object,data)
object$bs.dim <- object$base$bs.dim
object$term <- object$base$term
Xb <- Predict.matrix(object,data)%*%object$P
if (!is.null(object$margin.only)) return(Xb)
X <- matrix(0,nrow(Xb),0)
for (i in 1:length(object$flev)) {
X <- cbind(X,Xb * as.numeric(fac==object$flev[i]))
......@@ -2022,7 +2030,8 @@ smooth.construct.re.smooth.spec <- function(object,data,knots)
maxd <- maxi <- 0
for (i in 1:object$dim) {
form1 <- as.formula(paste("~",object$term[i],"-1"))
object$margin[[i]] <- list(X=model.matrix(form1,data),term=object$term[i])
object$margin[[i]] <- list(X=model.matrix(form1,data),term=object$term[i],form=form1,by="NA")
class(object$margin[[i]]) <- "random.effect"
d <- ncol(object$margin[[i]]$X)
if (d>maxd) {maxi <- i;maxd <- d}
}
......@@ -2052,7 +2061,7 @@ smooth.construct.re.smooth.spec <- function(object,data,knots)
object$side.constrain <- FALSE ## don't apply side constraints
object$plot.me <- TRUE ## "re" terms can be plotted by plot.gam
object$te.ok <- if (inherits(object,"tensor.smooth.spec")) 0 else 2 ## these terms are suitable as te marginals, but
## can not be plotted
## can not be plotted
object$random <- TRUE ## treat as a random effect for p-value comp.
......@@ -3176,9 +3185,9 @@ smoothCon <- function(object,data,knots=NULL,absorb.cons=FALSE,scale.penalty=TRU
sm$S.scale <- rep(1,length(sm$S))
if (scale.penalty && length(sm$S)>0 && is.null(sm$no.rescale)) # then the penalty coefficient matrix is rescaled
{ maXX <- mean(abs(t(sm$X)%*%sm$X)) # `size' of X'X
{ maXX <- norm(sm$X,type="I")^2 ##mean(abs(t(sm$X)%*%sm$X)) # `size' of X'X
for (i in 1:length(sm$S)) {
maS <- mean(abs(sm$S[[i]])) / maXX
maS <- norm(sm$S[[i]])/maXX ## mean(abs(sm$S[[i]])) / maXX
sm$S[[i]] <- sm$S[[i]] / maS
sm$S.scale[i] <- maS ## multiply S[[i]] by this to get original S[[i]]
}
......
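The 'smoothCon' change above (also noted in the 1.8-11 ChangeLog) replaces the crossproduct-based 'size' of X'X, which forms X'X explicitly at O(np^2) cost, with a cheap O(np) matrix-norm surrogate; only a rough scale factor is needed for rescaling S, so the two measures need not agree numerically. A rough illustration with simulated data (assumed sizes, editorial only):
X <- matrix(rnorm(5000 * 20), 5000, 20)
mean(abs(crossprod(X)))   # previous measure of the 'size' of X'X (forms X'X)
norm(X, type = "I")^2     # new surrogate: squared maximum absolute row sum of X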
mgcv (1.8-11-1) unstable; urgency=low
* New upstream release
-- Dirk Eddelbuettel <edd@debian.org> Wed, 27 Jan 2016 15:51:46 -0600
mgcv (1.8-10-1) unstable; urgency=low
* New upstream release
......
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
......@@ -19,10 +19,10 @@ covariate values and C code level parallelization (controlled by the \code{nthre
\usage{
bam(formula,family=gaussian(),data=list(),weights=NULL,subset=NULL,
na.action=na.omit, offset=NULL,method="fREML",control=list(),
scale=0,gamma=1,knots=NULL,sp=NULL,min.sp=NULL,paraPen=NULL,
chunk.size=10000,rho=0,AR.start=NULL,discrete=FALSE,sparse=FALSE,
cluster=NULL,nthreads=NA,gc.level=1,use.chol=FALSE,samfrac=1,
drop.unused.levels=TRUE,G=NULL,fit=TRUE,...)
select=FALSE,scale=0,gamma=1,knots=NULL,sp=NULL,min.sp=NULL,
paraPen=NULL,chunk.size=10000,rho=0,AR.start=NULL,discrete=FALSE,
sparse=FALSE,cluster=NULL,nthreads=NA,gc.level=1,use.chol=FALSE,
samfrac=1,drop.unused.levels=TRUE,G=NULL,fit=TRUE,...)
}
%- maybe also `usage' for other objects documented here.
......@@ -74,6 +74,10 @@ of the scale. \code{"ML"} and \code{"P-ML"} are similar, but using maximum likel
\item{control}{A list of fit control parameters to replace defaults returned by
\code{\link{gam.control}}. Any control parameters not supplied stay at their default values.}
\item{select}{Should selection penalties be added to the smooth effects, so that they can in principle be
penalized out of the model? Has the side effect that smooths no longer have a fixed effect component (improper prior
from a Bayesian perspective), allowing REML comparison of models with the same fixed effect structure.
}
\item{scale}{ If this is positive then it is taken as the known scale parameter. Negative signals that the
scale parameter is unknown. 0 signals that the scale parameter is 1 for Poisson and binomial and unknown otherwise.
......
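A hedged sketch of the new 'select' argument documented above (simulated data via gamSim; term choices are illustrative only): each smooth receives an extra shrinkage penalty, so terms with no real effect can be penalized towards zero while the fixed effect structure stays comparable across models under REML.
library(mgcv)
set.seed(2)
dat <- gamSim(1, n = 2000, dist = "normal", scale = 2)  # x3 has no effect on y
b <- bam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat, select = TRUE)
summary(b)  # s(x3) should be shrunk towards zero effective degrees of freedom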
\name{bug.reports.mgcv}
\alias{bug.reports.mgcv}
%- Also NEED an `\alias' for EACH other topic documented here.
\title{Reporting mgcv bugs.}
\description{\code{mgcv} works largely because many people have reported bugs over the years. If you find something that looks like a bug, please report it, so that the package can be improved. \code{mgcv} does not have a large development budget, so it is a big help if bug reports follow these guidelines.
The ideal report consists of an email to \email{simon.wood@r-project.org} with a subject line including \code{mgcv} somewhere, containing
\enumerate{
\item The results of running \code{\link{sessionInfo}} in the R session where the problem occurs. This provides platform details, R and package version numbers, etc.
\item A brief description of the problem.
\item Short, cut-and-paste-able code that produces the problem, including the code for loading/generating the data (using standard R functions like \code{load}, \code{read.table} etc).
\item Any required data files. If you send real data it will only be used for the purposes of de-bugging.
}
Of course if you have dug deeper and have an idea of what is causing the problem, that is also helpful to know, as is any suggested code fix. (Don't send a fixed package .tar.gz file, however - I can't use this).
}
\author{ Simon N. Wood \email{simon.wood@r-project.org}}
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
......@@ -119,7 +119,7 @@ Wood, S.N. (in press) Just Another Gibbs Additive Modeller: Interfacing JAGS and
Marra, G. and S.N. Wood (2011) Practical variable selection for generalized additive models.
Computational Statistics & Data Analysis 55(7): 2372-2387
Here is a key early reference to smoothing using BUGS (although the approach and smooths used are different to jagam)
Here is a key early reference to smoothing using BUGS (although the approach and smooths used are a bit different to jagam)
Crainiceanu, C. M. D Ruppert, & M.P. Wand (2005) Bayesian Analysis for Penalized Spline Regression Using WinBUGS Journal of Statistical Software 14.
}
......@@ -133,7 +133,7 @@ Gibbs sampling is a very slow inferential method for standard GAMs. It is only
Check that the parameters of the priors on the parameters are fit for your purpose.
}
\seealso{\code{\link{gam}, \link{gamm}}
\seealso{\code{\link{gam}}, \code{\link{gamm}}, \code{\link{bam}}
}
\examples{
......
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
......@@ -3,11 +3,12 @@
\title{Parallel computation in mgcv.
}
\description{
\code{mgcv} can make some use of multiple cores or a cluster. Function \code{\link{bam}} uses the
facilities provided in the \link[parallel]{parallel} package for this purpose. See examples below. Note that most multi-core machines are memory bandwidth limited, so parallel speed up tends to be rather variable.
\code{\link{bam}} can also use an alternative openMP based parallelization approach alongside discretisation of covariates to
achieve substantial speed ups. This is selected using the \code{discrete=TRUE} option to \code{bam}, with the number of threads controlled via the \code{nthreads} argument. See example below.
\code{mgcv} can make some use of multiple cores or a cluster.
\code{\link{bam}} can use an openMP based parallelization approach alongside discretisation of covariates to achieve substantial speed ups. This is selected using the \code{discrete=TRUE} option to \code{bam}, with the number of threads controlled via the \code{nthreads} argument. This is the approach that scales best. See example below.
Alternatively, function \code{\link{bam}} can use the facilities provided in the \link[parallel]{parallel} package. See examples below. Note that most multi-core machines are memory bandwidth limited, so parallel speed up tends to be rather variable.
Function \code{\link{gam}} can use parallel threads on a (shared memory) multi-core
machine via \code{openMP} (where this is supported). To do this, set the desired number of threads by setting \code{nthreads} to the number of cores to use, in the \code{control} argument of \code{\link{gam}}. Note that, for the most part, only the dominant \eqn{O(np^2)}{O(np^2)} steps are parallelized (n is number of data, p number of parameters). For additive Gaussian models estimated by GCV, the speed up can be disappointing as these employ an \eqn{O(p^3)}{O(p^3)} SVD step that can also have substantial cost in practice.
......
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
File mode changed from 100644 to 100755
This diff is collapsed.
......@@ -20,10 +20,10 @@ R_CallMethodDef CallMethods[] = {
};
R_CMethodDef CEntries[] = {
{"diagXVXt", (DL_FUNC) &diagXVXt,15},
{"XWXd", (DL_FUNC) &XWXd,17},
{"XWyd", (DL_FUNC) &XWyd,17},
{"Xbd", (DL_FUNC) &Xbd,14},
{"diagXVXt", (DL_FUNC) &diagXVXt,16},
{"XWXd", (DL_FUNC) &XWXd,18},
{"XWyd", (DL_FUNC) &XWyd,18},
{"Xbd", (DL_FUNC) &Xbd,15},
{"vcorr", (DL_FUNC) &vcorr, 5},
{"dchol", (DL_FUNC) &dchol, 4},
{"mgcv_omp", (DL_FUNC) &mgcv_omp, 1},
......
......@@ -290,7 +290,7 @@ int mgcv_bchol(double *A,int *piv,int *n,int *nt,int *nb) {
+ve semi definite matrix and piv is pivot sequence.
*/
int i,j,k,l,q,r=-1,*pk,*pq,jb,n1,m,N,*a,b;
double tol=0.0,*dots,*pd,*p1,*Aj,*Aj1,*Ajn,xmax,x,*Aq,*Ajj,*Aend;
double tol=0.0,*dots,*pd,*p1,*Aj,*Aq0,*Aj0,*Aj1,*Ajn,*Ail,xmax,x,*Aq,*Ajj,*Aend;
dots = (double *)CALLOC((size_t) *n,sizeof(double));
for (pk = piv,i=0;i < *n;pk++,i++) *pk = i; /* initialize pivot record */
jb = *nb; /* block size, allowing final to be smaller */
......@@ -341,8 +341,10 @@ int mgcv_bchol(double *A,int *piv,int *n,int *nt,int *nb) {
Aq = Aj + k; /* Lucas (2004) has '1' in place of 'k' */
Aj += j;
Aj1 = Ajn + k; /* Lucas (2004) has '1' in place of 'k' */
for (;Aj<Aend;Aj += *n,Aq += *n)
for (pd = Aj1,p1=Aq;pd < Ajj;pd++,p1++) *Aj -= *pd * *p1;
for (;Aj<Aend;Aj += *n,Aq += *n) {
for (pd = Aj1,p1=Aq;pd < Ajj;pd++,p1++) *Aj -= *pd * *p1;
//*Aj -= x;
}
}
if (j < *n) {
Aj = Ajj; x = *Aj;Aj += *n;
......@@ -364,19 +366,29 @@ int mgcv_bchol(double *A,int *piv,int *n,int *nt,int *nb) {
if (a[i]<=a[i-1]) a[i] = a[i-1]+1;
}
#ifdef SUPPORT_OPENMP
#pragma omp parallel private(b,i,l,Aj,Aend,Aq,Aj1) num_threads(m)
#pragma omp parallel private(b,i,l,Aj,Aend,Aq,Aj1,Ail,Aj0,Aq0) num_threads(m)
#endif
{ /* start parallel section */
#ifdef SUPPORT_OPENMP
#pragma omp for
#endif
for (b=0;b<m;b++)
for (i=a[b];i<a[b+1];i++) for (l=i;l<*n;l++) {
Aj = A + i * *n;Aend = Aj + j;Aj1 = Aj + l;
Aj+=k;Aq = A + l * *n + k; /* Lucas (2004) has '1' in place of 'k' */
for (;Aj < Aend;Aj++,Aq++) *Aj1 -= *Aq * *Aj;
A[i + *n * l ] = *Aj1;
}
for (i=a[b];i<a[b+1];i++) {
Aj0 = A + i * *n;
Aq0 = Aj0 + k;
Aend = Aj0 + j;
Ail = Aj1 = Aj0 + i;
Aj0 += k;
for (l=i;l<*n;l++,Aq0 += *n,Aj1++,Ail += *n) {
// Aj = A + i * *n;Aend = Aj + j;Aj1 = Aj + l;
//Aj+=k;Aq = A + l * *n + k; /* Lucas (2004) has '1' in place of 'k' */
Aj = Aj0;
Aq = Aq0;
for (;Aj < Aend;Aj++,Aq++) *Aj1 -= *Aq * *Aj;
// A[i + *n * l ] = *Aj1;
*Ail = *Aj1;
}
}
} /* end parallel section */
} /* if (k + jb < *n) */
} /* k loop */
......
......@@ -6,7 +6,9 @@
documentation in `Writing R extensions', but is apparently
intentional. However, most compilers with openMP support supply
a pre-defined compiler macro _OPENMP. So... */
#if (!defined SUPPORT_OPENMP && defined _OPENMP)
/* #if (!defined SUPPORT_OPENMP && defined _OPENMP)
...update: Rconfig.h:SUPPORT_OPENMP deprecated from R 2.3.2 */
#if defined _OPENMP
#define SUPPORT_OPENMP 1
#endif
/* ... note also that there is no actual *need* to protect #pragmas with
......@@ -78,15 +80,15 @@ void mvn_ll(double *y,double *X,double *XX,double *beta,int *n,int *lpi,