Commit 043b3207 authored by Dirk Eddelbuettel

Import Upstream version 1.5-6

parent 6b79c8d5
Package: mgcv
Version: 1.5-5
Version: 1.5-6
Author: Simon Wood <simon.wood@r-project.org>
Maintainer: Simon Wood <simon.wood@r-project.org>
Title: GAMs with GCV/AIC/REML smoothness estimation and GAMMs by PQL
......@@ -12,6 +12,6 @@ Imports: graphics, stats, nlme
Suggests: nlme (>= 3.1-64), splines
LazyLoad: yes
License: GPL (>= 2)
Packaged: 2009-05-13 15:40:09 UTC; simon
Packaged: 2009-09-11 09:31:19 UTC; simon
Repository: CRAN
Date/Publication: 2009-05-15 11:26:39
Date/Publication: 2009-09-12 12:18:28
** denotes quite substantial/important changes
*** denotes really big changes
ISSUES
------
1.5-6
* "ts" and "cs" modified so that zero eigen values of penalty
matrix are reset to 10% of smallest strictly positive eigen
value, rather than 1%. This seems to lead to more reliable
performance.
* `bfgs' simplified and improved so that it now checks that the Wolfe
conditions are met at each step (sketched below, after the 1.5-6 entries).
It no longer uses any Newton steps, so if it is used with
gam.control(outerPIsteps=0) then smoothing parameter optimization uses
first derivatives only.
* `outerPIsteps' now defaults to zero in `gam.control'.
* New routine `initial.spg' sets the jth initial sp to equalize the
Frobenius norm of S_j and that of the columns of sqrt(W)X which it
penalizes, where W are the initial Fisher weights (sketched below, after
the 1.5-6 entries). This removes the need for a performance iteration
step to get starting values (so outerPIsteps=0 in gam.control can now
bypass PI completely).
* fscale set from get.null.coef (facilitates cleaner initialization).
* large data set rare event logistic regression example added to
?gam.
* For p-value calculation for smooths, summary.gam subsamples rows of
the model matrix if it has more than 3000 rows. This speeds things
up for large datasets.
* minor bug fix in `gamm' so that intercept gets correct name, if
it's the only non-smooth fixed effect.
* .pot files updated, de translation added.
* `in.out' was not working from 1.5 --- fixed.
* logLik.gam now increases the parameter count for Tweedie by one, to
account for scale estimation.
* There was a bug in the calculation of the Bayesian covariance matrix when
the scale parameter was known: an estimated scale parameter was always used
instead. This makes no statistically meaningful difference for a model that
fits properly, of course.
* Some junk removed from gam object.
* summary.gam pseudoinversion made slightly more efficient.
* adaptive smooth constructor is a bit more careful about the ranks
of the penalties.
* 2d adaptive smoother bug fix --- part of the penalty was missing owing to
an error affecting a complete line of code.
* `smoothCon' and `PredictMat' modified so that sparse smooths can
optionally have sparse centering constraints applied.
* `gamm' fix: prediction and visualization from `x$gam' where x is a
fitted `gamm' object should not require the random effects to be
provided. Now it doesn't.
* minor bug fix: a model with no penalties except a fixed one would fail
with an index error.
* `te' terms are now only subject to centering constraints if all
marginals would usually have a centering constraint.
* `te' no longer resets multi-dimensional marginals to "tp", unless
they have been set to "cr", "cs", "ps" or "cp". This allows tensor
products with user supplied smooths.
* Example of obtaining derivatives of a smooth (with CIs) added to
`predict.gam' help file.
* `newdata.guaranteed' argument to predict.gam didn't work --- fixed.
* Some error message translation files need updating.
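The sketches below illustrate three of the 1.5-6 items above; they are hedged
reconstructions of the ideas described, not mgcv's internal code. First, the
"ts"/"cs" change: zero eigenvalues of a penalty matrix are replaced by 10% of
the smallest strictly positive eigenvalue (the tolerance used to decide what
counts as zero is an assumption here).

## Sketch only: reset (near-)zero eigenvalues of a symmetric penalty matrix S
## to 10% of its smallest strictly positive eigenvalue. `tol' is an assumed
## threshold, not the value mgcv uses.
reset.penalty.eig <- function(S, tol = .Machine$double.eps^0.7) {
  es  <- eigen(S, symmetric = TRUE)
  ev  <- es$values
  pos <- ev > tol * max(abs(ev))       ## strictly positive eigenvalues
  ev[!pos] <- 0.1 * min(ev[pos])       ## reset the rest to 10% of the smallest
  es$vectors %*% (ev * t(es$vectors))  ## rebuild the modified penalty matrix
}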
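Second, the Wolfe conditions checked by the revised `bfgs': the function below
is a free-standing reminder of what is tested at each step, using the
conventional textbook constants c1 and c2 (not necessarily those used by mgcv).

## Illustrative check of the weak Wolfe conditions for a step `alpha' along a
## search direction `p', given an objective `f' and its gradient function `g'.
wolfe.ok <- function(f, g, x, p, alpha, c1 = 1e-4, c2 = 0.9) {
  f0 <- f(x); slope0 <- sum(g(x) * p)  ## value and directional derivative at x
  sufficient.decrease <- f(x + alpha * p) <= f0 + c1 * alpha * slope0
  curvature <- sum(g(x + alpha * p) * p) >= c2 * slope0
  sufficient.decrease && curvature
}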
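Third, one plausible reading of the `initial.spg' rule: choose the jth starting
smoothing parameter so that sp[j]*S_j has roughly the same Frobenius norm as
the columns of sqrt(W)X that it penalizes. The argument names and column
bookkeeping below are illustrative assumptions.

## Hedged sketch of the initial.spg idea (not the package's internal routine).
## X: model matrix; w: initial Fisher weights; S: list of penalty matrices;
## first.col[j]: index of the first column of X penalized by S[[j]].
initial.sp.sketch <- function(X, w, S, first.col) {
  frob <- function(A) sqrt(sum(A^2))                  ## Frobenius norm
  sp <- numeric(length(S))
  for (j in seq_along(S)) {
    cols <- first.col[j] - 1 + seq_len(ncol(S[[j]]))  ## penalized columns
    sp[j] <- frob(sqrt(w) * X[, cols, drop = FALSE]) / frob(S[[j]])
  }
  sp
}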
1.5-5
......
......@@ -141,7 +141,7 @@ would be required to fit is returned. See argument \code{G}.}
\item{G}{Usually \code{NULL}, but may contain the object returned by a previous call to \code{gam} with
\code{fit=FALSE}, in which case all other arguments are ignored except for
\code{gamma}, \code{in.out}, \code{control}, \code{method} and \code{fit}.}
\code{gamma}, \code{in.out}, \code{scale}, \code{control}, \code{method}, \code{optimizer} and \code{fit}.}
\item{in.out}{optional list for initializing outer iteration. If supplied
then this must contain two elements: \code{sp} should be an array of
......@@ -199,7 +199,7 @@ the scale parameter and
\eqn{DoF}{DoF} the effective degrees of freedom of the model. Notice that UBRE is effectively
just AIC rescaled, but is only used when \eqn{s}{s} is known.
Alternatives are GACV, or a Laplace approximation to REML, there
Alternatives are GACV, or a Laplace approximation to REML. There
is some evidence that the latter may actually be the most effective choice.
Smoothing parameters are chosen to
......@@ -263,7 +263,7 @@ generalized additive models. J. Amer. Statist. Ass. 99:673-686. [Default
method for additive case (but no longer for generalized)]
Wood, S.N. (2008) Fast stable direct fitting and smoothness selection for generalized
additive models. J.R.Statist.Soc.B 70(3):495-518. [Generalized additive model case]
additive models. J.R.Statist.Soc.B 70(3):495-518. [Generalized additive model methods]
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114
......@@ -383,6 +383,7 @@ print(b0);print(bp)
## now a GAM with 3df regression spline term & 2 penalized terms
b0<-gam(y~s(x0,k=4,fx=TRUE,bs="tp")+s(x1,k=12)+s(x2,k=15),data=dat)
plot(b0,pages=1)
......@@ -391,16 +392,27 @@ b1<-gam(y~s(x0,x1)+s(x2)+s(x3),data=dat)
par(mfrow=c(2,2))
plot(b1)
par(mfrow=c(1,1))
## now simulate poisson data...
dat <- gamSim(1,n=400,dist="poisson",scale=.25)
dat <- gamSim(1,n=4000,dist="poisson",scale=.1)
b2<-gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=poisson,data=dat)
## use "cr" basis to save time, with 4000 data...
b2<-gam(y~s(x0,bs="cr")+s(x1,bs="cr")+s(x2,bs="cr")+
s(x3,bs="cr"),family=poisson,data=dat,method="REML")
plot(b2,pages=1)
## repeat fit using performance iteration
## drop x3, but initialize sp's from previous fit, to
## save more time...
b2a<-gam(y~s(x0,bs="cr")+s(x1,bs="cr")+s(x2,bs="cr"),
family=poisson,data=dat,method="REML",
in.out=list(sp=b2$sp[1:3],scale=1))
par(mfrow=c(2,2))
plot(b2a)
par(mfrow=c(1,1))
## similar example using performance iteration
dat <- gamSim(1,n=400,dist="poisson",scale=.25)
b3<-gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=poisson,
data=dat,optimizer="perf")
......@@ -412,15 +424,15 @@ b4<-gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=poisson,
data=dat,method="GACV.Cp",scale=-1)
plot(b4,pages=1)
## repeat using REML as in Wood 2008...
## repeat using REML as in Wood 2009...
b5<-gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=poisson,
data=dat,method="REML")
plot(b5,pages=1)
## a binary example (see later for large dataset version)...
## a binary example
dat <- gamSim(1,n=400,dist="binary",scale=.33)
lr.fit <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=binomial,data=dat)
......@@ -455,7 +467,10 @@ vis.gam(b4) ## persp fit
detach(eg)
par(op)
## very large dataset example with user defined knots
##################################################
## largish dataset example with user defined knots
##################################################
par(mfrow=c(1,1))
eg <- gamSim(2,n=10000,scale=.5)
attach(eg)
......@@ -474,6 +489,41 @@ vis.gam(b6,color="heat")
b7 <- gam(y~s(x,z,k=50,xt=list(max.knots=1000,seed=2)),data=data)
vis.gam(b7)
detach(eg)
################################################################
## Approximate large dataset logistic regression for rare events
## based on subsampling the zeroes, and adding an offset to
## approximately allow for this.
## Doing the same thing, but upweighting the sampled zeroes
## leads to problems with smoothness selection, and CIs.
################################################################
n <- 100000 ## simulate n data
dat <- gamSim(1,n=n,dist="binary",scale=.33)
p <- binomial()$linkinv(dat$f-6) ## make 1's rare
dat$y <- rbinom(p,1,p) ## re-simulate rare response
## Now sample all the 1's but only proportion S of the 0's
S <- 0.02 ## sampling fraction of zeroes
dat <- dat[dat$y==1 | runif(n) < S,] ## sampling
## Create offset based on total sampling fraction
dat$s <- rep(log(nrow(dat)/n),nrow(dat))
lr.fit <- gam(y~s(x0,bs="cr")+s(x1,bs="cr")+s(x2,bs="cr")+s(x3,bs="cr")+
offset(s),family=binomial,data=dat,method="REML")
## plot model components with truth overlaid in red
op <- par(mfrow=c(2,2))
fn <- c("f0","f1","f2","f3");xn <- c("x0","x1","x2","x3")
for (k in 1:4) {
plot(lr.fit,select=k,scale=0)
ff <- dat[[fn[k]]];xx <- dat[[xn[k]]]
ind <- sort.int(xx,index.return=TRUE)$ix
lines(xx[ind],(ff-mean(ff))[ind]*.33,col=2)
}
par(op)
rm(dat)
}
\keyword{models} \keyword{smooth} \keyword{regression}%-- one or more ..
......
......@@ -14,7 +14,7 @@ gam.control(irls.reg=0.0,epsilon = 1e-06, maxit = 100,
mgcv.tol=1e-7,mgcv.half=15, trace = FALSE,
rank.tol=.Machine$double.eps^0.5,
nlm=list(),optim=list(),newton=list(),
outerPIsteps=1,idLinksBases=TRUE,scalePenalty=TRUE,
outerPIsteps=0,idLinksBases=TRUE,scalePenalty=TRUE,
keepData=FALSE)
}
\arguments{
......@@ -57,9 +57,7 @@ is used for outer estimation of smoothing parameters (not default). See details.
used for outer estimation of log smoothing parameters. See details.}
\item{outerPIsteps}{The number of performance iteration steps used to
initialize outer iteration. Less than 1 means
that only one performance iteration step is taken to get the function scale,
but the corresponding smoothing parameter estimates are discarded. }
initialize outer iteration.}
\item{idLinksBases}{If smooth terms have their smoothing parameters linked via
the \code{id} mechanism (see \code{\link{s}}), should they also have the same
......@@ -93,7 +91,7 @@ If outer iteration using \code{\link{nlm}} is used for fitting, then the control
the number of significant digits in the GCV/UBRE score - by default this is
worked out from \code{epsilon}; (ii) \code{gradtol} is the tolerance used to
judge convergence of the gradient of the GCV/UBRE score to zero - by default
set to \code{100*epsilon}; (iii) \code{stepmax} is the maximum allowable log
set to \code{10*epsilon}; (iii) \code{stepmax} is the maximum allowable log
smoothing parameter step - defaults to 2; (iv) \code{steptol} is the minimum
allowable step length - defaults to 1e-4; (v) \code{iterlim} is the maximum
number of optimization steps allowed - defaults to 200; (vi)
......
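As an illustration of supplying these \code{nlm} settings (a sketch only: the
control values are arbitrary, and \code{dat} is assumed to contain \code{y},
\code{x0} and \code{x1}, e.g. from \code{gamSim}):

## sketch: pass nlm control settings to gam via gam.control
ctrl <- gam.control(nlm = list(gradtol = 1e-5, iterlim = 100))
b <- gam(y ~ s(x0) + s(x1), data = dat,
         optimizer = c("outer", "nlm"), control = ctrl)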
......@@ -28,7 +28,8 @@ cases, not at the MLE.}
\item{call}{the matched call (allows \code{update} to be used with \code{gam} objects, for example). }
\item{cmX}{column means of the model matrix --- useful for componentwise CI calculation.}
\item{cmX}{column means of the model matrix (with elements corresponding to smooths set to zero)
--- useful for componentwise CI calculation.}
\item{coefficients}{the coefficients of the fitted model. Parametric
coefficients are first, followed by coefficients for each
......
......@@ -6,7 +6,10 @@
data, by a call to \code{lme} in the normal errors identity link case, or by
a call to \code{gammPQL} (a modification of \code{glmmPQL} from the \code{MASS} library) otherwise.
In the latter case estimates are only approximately MLEs. The routine is typically
slower than \code{gam}, and not quite as numerically robust.
slower than \code{gam}, and not quite as numerically robust.
To use \code{lme4} in place of \code{nlme} as the underlying fitting engine,
see \code{gamm4} from package \code{gamm4}.
Smooths are specified as in a call to \code{\link{gam}} as part of the fixed
effects model formula, but the wiggly components of the smooth are treated as
......@@ -127,7 +130,7 @@ information on this option. Failing that, you can try increasing the
\code{niterEM} option in \code{control}: this will perturb the starting values
used in fitting, but usually to values with lower likelihood! Note that this
version of \code{gamm} works best with R 2.2.0 or above and \code{nlme} 3.1-62 or above,
since these use an improved optimizer.
since these use an improved optimizer.
}
......@@ -200,7 +203,8 @@ Models must contain at least one random effect: either a smooth with non-zero
smoothing parameter, or a random effect specified in argument \code{random}.
\code{gamm} is not as numerically stable as \code{gam}: an \code{lme} call
will occasionally fail. See details section for suggestions.
will occasionally fail. See details section for suggestions, or try the
`gamm4' package.
\code{gamm} is usually much slower than \code{gam}, and on some platforms you may need to
increase the memory available to R in order to use it with large data sets
......@@ -218,14 +222,14 @@ them above and below by an effective infinity and effective zero. See
\code{\link{notExp2}} for details of how to change this.
Linked smoothing parameters and adaptive smoothing are not supported.
}
\seealso{\code{\link{magic}} for an alternative for correlated data,
\code{\link{te}}, \code{\link{s}},
\code{\link{predict.gam}},
\code{\link{plot.gam}}, \code{\link{summary.gam}}, \code{\link{negbin}},
\code{\link{vis.gam}},\code{\link{pdTens}}
\code{\link{vis.gam}},\code{\link{pdTens}}, \code{gamm4} (
\url{http://cran.r-project.org/package=gamm4})
}
\examples{
......@@ -252,6 +256,8 @@ b2<-gamm(y~s(x0)+s(x1)+s(x2)+s(x3),family=poisson,
plot(b2$gam,pages=1)
fac <- dat$fac
rm(dat)
vis.gam(b2$gam)
## now an example with autocorrelated errors....
n <- 400;sig <- 2
......@@ -306,7 +312,7 @@ b <- gamm(y~s(x0,bs="cr")+s(x1,bs="cr")+s(x2,bs="cr")+
s(x3,bs="cr"),data=dat,random=list(fa=~1,fb=~1),
correlation=corAR1())
plot(b$gam,pages=1)
vis.gam(b$gam)
## and a "spatial" example...
library(nlme);set.seed(1);n <- 200
dat <- gamSim(2,n=n,scale=0) ## standard example
......
......@@ -16,8 +16,8 @@ value decomposition approach.
} %- end description
\usage{
magic(y,X,sp,S,off,L=NULL,lsp0=NULL,rank=NULL,H=NULL,C=NULL,w=NULL,gamma=1,
scale=1,gcv=TRUE,ridge.parameter=NULL,
magic(y,X,sp,S,off,L=NULL,lsp0=NULL,rank=NULL,H=NULL,C=NULL,
w=NULL,gamma=1,scale=1,gcv=TRUE,ridge.parameter=NULL,
control=list(maxit=50,tol=1e-6,step.half=25,
rank.tol=.Machine$double.eps^0.5),extra.rss=0,n.score=length(y))
}
......
......@@ -9,7 +9,7 @@ by standard errors, based on the posterior distribution of the model
coefficients. The routine can optionally return the matrix by which the model
coefficients must be pre-multiplied in order to yield the values of the linear predictor at
the supplied covariate values: this is useful for obtaining credible regions
for quantities derived from the model, and for lookup table prediction outside
for quantities derived from the model (e.g. derivatives of smooths), and for lookup table prediction outside
\code{R} (see example code below).}
\usage{
......@@ -136,12 +136,16 @@ b<-gam(y~s(x0)+s(I(x1^2))+s(x2)+offset(x3),data=dat)
newd <- data.frame(x0=(0:30)/30,x1=(0:30)/30,x2=(0:30)/30,x3=(0:30)/30)
pred <- predict.gam(b,newd)
#############################################
## difference between "terms" and "iterms"
#############################################
nd2 <- data.frame(x0=c(.25,.5),x1=c(.25,.5),x2=c(.25,.5),x3=c(.25,.5))
predict(b,nd2,type="terms",se=TRUE)
predict(b,nd2,type="iterms",se=TRUE)
#########################################################
## now get variance of sum of predictions using lpmatrix
#########################################################
Xp <- predict(b,newd,type="lpmatrix")
......@@ -151,8 +155,11 @@ a <- rep(1,31)
Xs <- t(a) \%*\% Xp ## Xs \%*\% coef(b) gives sum of predictions
var.sum <- Xs \%*\% b$Vp \%*\% t(Xs)
#############################################################
## Now get the variance of non-linear function of predictions
## by simulation from posterior distribution of the params
#############################################################
library(MASS)
br<-mvrnorm(1000,coef(b),b$Vp) ## 1000 replicate param. vectors
......@@ -167,6 +174,7 @@ mean(res);var(res)
res <- colSums(log(abs(Xp \%*\% t(br))))
##################################################################
## The following shows how to use use an "lpmatrix" as a lookup
## table for approximate prediction. The idea is to create
## approximate prediction matrix rows by appropriate linear
......@@ -177,6 +185,7 @@ res <- colSums(log(abs(Xp \%*\% t(br))))
## gam *outside* R: all that is needed is the coefficient vector
## and the prediction matrix. Use larger `Xp'/ smaller `dx' and/or
## higher order interpolation for higher accuracy.
###################################################################
xn <- c(.341,.122,.476,.981) ## want prediction at these values
x0 <- 1 ## intercept column
......@@ -195,6 +204,40 @@ se <- sqrt(x0\%*\%b$Vp\%*\%t(x0));se ## get standard error
predict(b,newdata=data.frame(x0=xn[1],x1=xn[2],
x2=xn[3],x3=xn[4]),se=TRUE)
####################################################################
## Differentiating the smooths in a model (with CIs for derivatives)
####################################################################
## simulate data and fit model...
dat <- gamSim(1,n=300,scale=sig)
b<-gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat)
plot(b,pages=1)
## now evaluate derivatives of smooths with associated standard
## errors, by finite differencing...
x.mesh <- seq(0,1,length=200) ## where to evaluate derivatives
newd <- data.frame(x0 = x.mesh,x1 = x.mesh, x2=x.mesh,x3=x.mesh)
X0 <- predict(b,newd,type="lpmatrix")
eps <- 1e-7 ## finite difference interval
x.mesh <- x.mesh + eps ## shift the evaluation mesh
newd <- data.frame(x0 = x.mesh,x1 = x.mesh, x2=x.mesh,x3=x.mesh)
X1 <- predict(b,newd,type="lpmatrix")
Xp <- (X1-X0)/eps ## maps coefficients to (fd approx.) derivatives
colnames(Xp) ## can check which cols relate to which smooth
par(mfrow=c(2,2))
for (i in 1:4) { ## plot derivatives and corresponding CIs
Xi <- Xp*0
Xi[,(i-1)*9+1:9+1] <- Xp[,(i-1)*9+1:9+1] ## Xi\%*\%coef(b) = smooth deriv i
df <- Xi\%*\%coef(b) ## ith smooth derivative
df.sd <- rowSums(Xi\%*\%b$Vp*Xi)^.5 ## cheap diag(Xi\%*\%b$Vp\%*\%t(Xi))^.5
plot(x.mesh,df,type="l",ylim=range(c(df+2*df.sd,df-2*df.sd)))
lines(x.mesh,df+2*df.sd,lty=2);lines(x.mesh,df-2*df.sd,lty=2)
}
}
\keyword{models} \keyword{smooth} \keyword{regression}%-- one or more ..
......
......@@ -16,7 +16,7 @@ although this behaviour can be over-ridden.
\usage{
smoothCon(object,data,knots,absorb.cons=FALSE,
scale.penalty=TRUE,n=nrow(data),dataX=NULL,
null.space.penalty=FALSE)
null.space.penalty=FALSE,sparse.cons=0)
PredictMat(object,data,n=nrow(data))
}
%- maybe also `usage' for other objects documented here.
......@@ -39,14 +39,21 @@ should be constructed with another set of data provided in \code{dataX} --- \cod
be the same for both. Facilitates smooth id's.}
\item{null.space.penalty}{Should an extra penalty be added to the smooth which will penalize the
components of the smooth in the penalty null space: provides a way of penalizing terms out of the model altogether.}
\item{sparse.cons}{If \code{0} then default sum to zero constraints are used. If \code{1} then one
coefficient is set to zero as the constraint for sparse smooths. If \code{2} then sparse coefficient
sum to zero constraints are used for sparse smooths. None of these options has an effect if the smooth
supplies its own constraint.}
}
\value{ From \code{smoothCon} a list of \code{smooth} objects returned by the
appropriate \code{\link{smooth.construct}} method function. If constraints are
to be absorbed then the objects will have attributes \code{"qrc"} and
\code{"nCons"}, the qr decomposition of the constraint matrix (returned by
\code{\link{qr}}) and the number of constraints, respectively: these are used in
the re-parameterization.
\code{"nCons"}. \code{"nCons"} is the number of constraints. \code{"qrc"} is
usually the qr decomposition of the constraint matrix (returned by
\code{\link{qr}}), but if it is a single positive integer it is the index of the
coefficient to set to zero, and if it is a negative number then this indicates that
the parameters are to sum to zero.
For \code{PredictMat} a matrix which will map the parameters associated with
the smooth to the vector of values of the smooth evaluated at the covariate
......@@ -70,7 +77,8 @@ method handles \code{by} variables internally then the returned matrix should ha
Default centering constraints, that terms should sum to zero over the covariates, are produced unless
the smooth constructor includes a matrix \code{C} of constraints. To have no constraints (in which case
you had better have a full rank penalty!) the matrix \code{C} should have no rows.
you had better have a full rank penalty!) the matrix \code{C} should have no rows. There is an option to
use centering constraints that generate no, or only limited, infill if the smoother has a sparse model matrix.
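A minimal sketch of requesting sparse constraints when calling \code{smoothCon}
directly (the data frame and basis below are arbitrary illustrative choices,
not taken from the package examples):

## sketch: request sparse sum-to-zero constraints via sparse.cons
library(mgcv)
dat <- data.frame(x = runif(200))
sm <- smoothCon(s(x, bs = "cr", k = 10), data = dat, knots = NULL,
                absorb.cons = TRUE, sparse.cons = 2)[[1]]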
\code{smoothCon} returns a list of smooths because factor \code{by} variables result in multiple copies
of a smooth, each multiplied by the dummy variable associated with one factor level. \code{smoothCon} modifies
......
......@@ -39,7 +39,8 @@ basis code is given then this is used for all bases.}
\item{d}{array of marginal basis dimensions. For example if you want a smooth for 3 covariates
made up of a tensor product of a 2 dimensional t.p.r.s. basis and a 1-dimensional basis, then
set \code{d=c(2,1)}.}
set \code{d=c(2,1)}. Incompatibilities between built-in basis types and dimension will be
resolved by resetting the basis type.
\item{by}{a numeric or factor variable of the same dimension as each covariate.
In the numeric vector case the elements multiply the smooth evaluated at the corresponding
......@@ -110,6 +111,10 @@ variable. This parameterization can reduce numerical stability when used
with marginal smooths other than \code{"cc"}, \code{"cr"} and \code{"cs"}: if
this causes problems, set \code{np=FALSE}.
Note that tensor product smooths should not be centred (have identifiability constraints imposed)
if any marginals would not need centering. The constructor for tensor product smooths
ensures that this happens.
The function does not evaluate the variable arguments.
}
......@@ -158,7 +163,8 @@ generalized additive mixed models. Biometrics 62(4):1025-1036
}
\seealso{ \code{\link{s}},\code{\link{gam}},\code{\link{gamm}}}
\seealso{ \code{\link{s}},\code{\link{gam}},\code{\link{gamm}},
\code{\link{smooth.construct.tensor.smooth.spec}}}
\examples{
......
# Translation of mgcv.pot to German
# Copyright (C) 2005 The R Foundation
# This file is distributed under the same license as the mgcv package.
# Chris Leick <c.leick@vollbio.de>, 2009.
#
msgid ""
msgstr ""
"Project-Id-Version: R 2.9.2 / mgcv 1.5-5\n"
"Report-Msgid-Bugs-To: bugs@R-project.org\n"
"POT-Creation-Date: 2005-12-09 07:31+0000\n"
"PO-Revision-Date: 2009-09-03 15:25+0200\n"
"Last-Translator: Chris Leick <c.leick@vollbio.de>\n"
"Language-Team: German <debian-l10n-german@lists.debian.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
#: gcv.c:290
#, c-format
msgid ""
"Overall smoothing parameter estimate on upper boundary.\n"
"Boundary GCV score change: %g. Largest change: %g"
msgstr ""
"Gesamte Glättungsparameterschätzung an oberem Rand.\n"
"Rand GCV-Punktzahländerung: %g. Größte Änderung: %g"
#: gcv.c:875
msgid "resetting -ve inf"
msgstr "zurücksetzen -ve inf"
#: gcv.c:877
msgid "resetting +ve inf"
msgstr "zurücksetzen +ve inf"
#: gcv.c:1014
msgid ""
"Multiple GCV didn't improve autoinitialized relative smoothing parameters"
msgstr ""
"Mehrere GCV verbesserten nicht selbstinitialisierte relative "
"Glättungsparameter"
#: magic.c:magic
msgid "magic requires smoothing parameter starting values if L supplied"
msgstr "magic benötigt Glättungsparameter-Startwerte, wenn L angegeben"
#: magic.c:809
msgid "magic, the gcv/ubre optimizer, failed to converge after 400 iterations."
msgstr ""
"magic, der gcv/ubre-Optimierer, konvergierte nach 400 Iterationen noch nicht."
#: matrix.c:85
msgid "Failed to initialize memory for matrix."
msgstr "Initialisieren von Speicher für Matrix fehlgeschlagen."
#: matrix.c:147 matrix.c:210
msgid "An out of bound write to matrix has occurred!"
msgstr "Ein Schreiben außerhalb der Matrixgrenze ist aufgetreten!"
#: matrix.c:153
msgid "INTEGRITY PROBLEM in the extant matrix list."
msgstr "INTEGRITÄTSPROBLEM in der bestehenden Matrix-Liste."
#: matrix.c:186
msgid "You are trying to check matrix integrity without defining RANGECHECK."
msgstr ""
"Sie versuchen die Integrität der Matrix zu prüfen ohne RANGECHECK zu "
"definieren."
#: matrix.c:255
#, c-format
msgid ""
"\n"
"%s not found, nothing read ! "
msgstr ""
"\n"
"%s nicht gefunden, nichts gelesen! "
#: matrix.c:325
msgid "Target matrix too small in mcopy"
msgstr "Zielmatrix zu klein in mcopy"
#: matrix.c:345 matrix.c:353 matrix.c:366 matrix.c:374
msgid "Incompatible matrices in matmult."
msgstr "Inkompatible Matrizen in matmult."
#: matrix.c:480
msgid "Attempt to invert() non-square matrix"
msgstr "Versuch des Aufrufs von invert() für nicht-quadratische Matrix"
#: matrix.c:502
msgid "Singular Matrix passed to invert()"
msgstr "Singuläre Matrix an invert() übergeben"
#: matrix.c:655
msgid "Not a +ve def. matrix in choleski()."
msgstr "Keine +ve def.-Matrix in choleski()."
#: matrix.c:873
msgid "Error in Covariance(a,b) - a,b not same length."
msgstr "Fehler in Covariance(a,b) - a,b haben nicht die gleiche Länge"
#: matrix.c:1812
msgid "svd() not converged"
msgstr "svd() nicht konvergiert"
#: matrix.c:1968
#, c-format
msgid "%s not found by routine gettextmatrix().\n"
msgstr "%s wurde nicht von der Routine gettextmatrix() gefunden.\n"
#: matrix.c:2190
#, c-format
msgid "svdroot matrix not +ve semi def. %g"
msgstr "svdroot-Matrix nicht +ve def. %g"
#: matrix.c:2414
msgid "Sort failed"
msgstr "Sortieren fehlgeschlagen"
#: matrix.c:2542
msgid "eigen_tri() failed to converge"
msgstr "konvertieren von eigen_tri() fehlgeschlagen"
#: matrix.c:2698
#, c-format
msgid "eigenvv_tri() Eigen vector %d of %d failure. Error = %g > %g"
msgstr "eigenvv_tri() Eigen-Vektor %d von %d fehlgeschlagen. Fehler = %g > %g"
#: matrix.c:2832
msgid "Lanczos failed"
msgstr "Lanczos fehlgeschlagen"
#: mgcv.c:868
msgid ""
"Numerical difficulties obtaining tr(A) - apparently resolved. Apply some "
"caution to results."
msgstr ""
"Numerische Schwierigkeiten beim bestimmen von tr(A) - anscheinend gelöst. "
"Seien Sie bei den Ergebnissen vorsichtig."
#: mgcv.c:872
msgid "tr(A) utter garbage and situation un-resolvable."
msgstr "tr(A) völliger Müll und Situation nicht korrigierbar."
#: mgcv.c:873
msgid ""
"Numerical difficulties calculating tr(A). Not completely resolved. Use "
"results with care!"
msgstr ""
"Numerische Schwierigkeiten tr(A) zu berechnen. Nicht komplett gelöst. "
"Benutzen Sie die Ergebnisse mit Vorsicht."
#: mgcv.c:958
msgid "Termwise estimate degrees of freedom are unreliable"
msgstr "Termweises Schätzen der Freiheitsgrade ist unzuverlässig"
#: qp.c:59
msgid "ERROR in addconQT."
msgstr "FEHLER in addconQT."
#: qp.c:465
msgid "QPCLS - Rank deficiency in model"
msgstr "QPCLS - Rang-Defizit im Modell"
#: tprs.c:45
msgid "You must have 2m>d for a thin plate spline."
msgstr "Es muss 2m>d für eine dünnwandige Spline gelten"
#: tprs.c:99
msgid "You must have 2m > d"
msgstr "Sie müssen 2m > d haben"
#: tprs.c:357 tprs.c:367
msgid ""
"A term has fewer unique covariate combinations than specified maximum "
"degrees of freedom"
msgstr ""
"Ein Term hat weniger einheitliche Kombinationen von Kovarianten als maximal "
"angegebene Freiheitsgrade"
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR The R Foundation
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
......@@ -7,8 +7,8 @@
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: bugs@R-project.org\n"
"POT-Creation-Date: 2005-12-09 07:31+0000\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2009-09-06 21:46+0100\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
......@@ -36,11 +36,11 @@ msgid ""
"Multiple GCV didn't improve autoinitialized relative smoothing parameters"
msgstr ""
#:magic.c:magic
#: magic.c:507
msgid "magic requires smoothing parameter starting values if L supplied"
msgstr ""
#: magic.c:809
#: magic.c:622
msgid "magic, the gcv/ubre optimizer, failed to converge after 400 iterations."
msgstr ""
......@@ -64,7 +64,7 @@ msgstr ""
#, c-format
msgid ""
"\n"
"%s not found, nothing read ! "
"%s not found, nothing read!"
msgstr ""
#: matrix.c:325
......@@ -100,25 +100,25 @@ msgstr ""
msgid "%s not found by routine gettextmatrix().\n"
msgstr ""
#: matrix.c:2190
#: matrix.c:2192
#, c-format
msgid "svdroot matrix not +ve semi def. %g"
msgstr ""
#: matrix.c:2414
#: matrix.c:2416
msgid "Sort failed"
msgstr ""
#: matrix.c:2542
#: matrix.c:2544
msgid "eigen_tri() failed to converge"
msgstr ""
#: matrix.c:2698
#: matrix.c:2700
#, c-format
msgid "eigenvv_tri() Eigen vector %d of %d failure. Error = %g > %g"
msgstr ""
#: matrix.c:2832
#: matrix.c:2834
msgid "Lanczos failed"
msgstr ""
......@@ -158,9 +158,8 @@ msgstr ""
msgid "You must have 2m > d"
msgstr ""
#: tprs.c:357 tprs.c:367
#: tprs.c:357 tprs.c:365
msgid ""
"A term has fewer unique covariate combinations than specified maximum "
"degrees of freedom"
msgstr ""