Predict Directive
Underlying Principles
Our approach to prediction is a generalisation of that of Lane and Nelder (1982),
who consider fixed effects models. They form fitted values for all
combinations of the explanatory variables in the model, then take
marginal means across the explanatory variables not relevant to the
current prediction. Our case is more general in that we also consider
associated factors, and options for the random effects that appear in
our (mixed) models. A full description can be found in
Gilmour et al. (2004) and Welham et al. (2004).
Terms in the model may be fitted as fixed or random, and are formed
from explanatory variables which are either factors or covariates.
For this exposition, we define a fixed factor as an explanatory
variable which is a factor and appears in the model in terms that are
fitted as fixed (it may also appear in random terms), and a random factor
as an explanatory variable which is a factor and appears in the model
only in terms that are fitted as random effects. Covariates generally
appear in fixed terms but may appear in random terms as well (random
regression); in special cases they may appear only in random terms.
Random factor terms may contribute to predictions in several
ways. They may be evaluated at values specified by the user,
they may be averaged over, or they may be ignored
(omitting all model terms that involve the factor from the prediction).
Averaging over the set of random
effects gives a prediction specific to the random effects observed. We
call this a `conditional' prediction. Omitting the term from
the model produces a prediction at the population average (zero), that
is, substituting the assumed population mean for an unknown random
effect. We call this a `marginal' prediction. Note that in any
prediction, some terms may be evaluated at conditional values and others
at marginal values, depending on the aim of the prediction.
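A minimal numerical sketch of this distinction (plain Python with invented
effect estimates, not ASReml syntax or output) might look as follows,
for a model with a fixed Variety factor and a random Block factor:

```python
import numpy as np

# Hypothetical estimates, invented for illustration only.
mu = 10.0                                     # overall mean
variety = np.array([0.0, 1.5, -0.8])          # fixed Variety effects
block_blup = np.array([0.4, -0.2, 0.1, -0.3]) # predicted (BLUP) Block effects

# Conditional prediction: keep the random Block term and average over the
# observed block effects, giving predictions specific to these blocks.
conditional = mu + variety + block_blup.mean()

# Marginal prediction: omit the Block term, i.e. substitute the assumed
# population mean (zero) for the unknown random effect.
marginal = mu + variety

print(conditional, marginal)
```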
For fixed factors there is no pre-defined population average, so there
is no natural interpretation for a prediction derived by omitting a
fixed term from the fitted values. Therefore any prediction will either be
for specific levels of the fixed factor, or will average (in some way) over
the levels of the fixed factor. Consequently, the prediction involves all
fixed model terms.
Covariates must be evaluated at specified values. If interest lies in
the relationship of the response variable to the covariate, predict at
a suitable grid of covariate values to reveal the relationship.
Otherwise, predict at an average or typical value of the covariate.
Omission of a
covariate from the predictive model is equivalent to predicting at a
zero covariate value, which is often not appropriate (unless the covariate is centred).
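The following sketch (plain Python with invented intercept and slope
estimates, not ASReml syntax or output) illustrates the choices for a
single covariate:

```python
import numpy as np

# Invented fixed-effect estimates: intercept b0 and covariate slope b1.
b0, b1 = 2.0, 0.35

# To show the relationship with the covariate, predict over a grid of values.
x_grid = np.linspace(0.0, 20.0, 5)
pred_grid = b0 + b1 * x_grid

# Otherwise predict at a single typical value, e.g. the covariate mean.
x_mean = 12.4
pred_at_mean = b0 + b1 * x_mean

# Omitting the covariate is the same as predicting at x = 0, which is
# rarely meaningful unless the covariate has been centred.
pred_at_zero = b0 + b1 * 0.0
print(pred_grid, pred_at_mean, pred_at_zero)
```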
Before considering some examples in detail, it is useful to consider
the conceptual steps involved in the prediction process. Given the
explanatory variables used to define the linear (mixed) model, the four main
steps are
a: Choose the explanatory variable(s) and their respective
value(s)/level(s) for which predictions are required; the variables
involved will be referred to as the classify set and together
define the multiway table to be predicted. Include only one from any set
of associated factors in the classify set.
b: Note which of the remaining variables will be averaged over
(the averaging set) and which will be ignored (the ignored set).
The averaging set will include all remaining variables involved in the
fixed model but not in the classify set. Ignored variables may be
explicitly added to the averaging set. The combination of the classify
set with these averaging variables defines a multiway hyper-table.
All associated factors appear in this hyper-table regardless
of whether they are fitted as fixed or random. Note that
variables evaluated at only one value, for example a covariate at its
mean value, can be formally introduced as part of the classify or averaging set.
c: Determine which terms from the linear mixed model are to be
used in forming predictions for each cell in the multiway hyper-table,
in order to give an appropriate conditional or marginal prediction.
That is, you may choose to ignore some random terms in addition to those
ignored because they involve variables in the ignored set.
d: Choose the weights to be used when averaging cells in the
hyper-table to produce the multiway table to be reported.
Operationally, ASReml does the averaging in the prediction design matrix
rather than actually predicting the cells of the hyper-table
and then averaging them; a conceptual sketch of the equivalent
cell-averaging view is given below.
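As a conceptual illustration only of steps a to d (plain Python with
invented fitted cell values; ASReml itself works through the prediction
design matrix as noted above), the cell-averaging view might look like:

```python
import pandas as pd

# Classify set: Variety; averaging set: Site; a random Block term is
# treated as ignored.  The fitted values of the hyper-table are invented.
hyper = pd.DataFrame({
    "Variety": ["A", "A", "B", "B", "C", "C"],
    "Site":    ["S1", "S2", "S1", "S2", "S1", "S2"],
    "fitted":  [10.2, 11.0, 12.1, 12.6, 9.5, 10.1],
})

# Step d: weights for averaging over the Site dimension (equal here, but
# they could instead be proportional to replication at each site).
site_weights = {"S1": 0.5, "S2": 0.5}
hyper["w"] = hyper["Site"].map(site_weights)

# Collapse the hyper-table onto the classify set by taking the weighted
# average of the fitted cells across the averaging set.
sums = (hyper.assign(wf=hyper["fitted"] * hyper["w"])
             .groupby("Variety")[["wf", "w"]].sum())
predictions = sums["wf"] / sums["w"]
print(predictions)
```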
The main difference in this prediction process compared to that
described by Lane and Nelder (1982) is the choice of whether to
include or exclude model terms when forming predictions. In linear
models, since all terms are fixed, terms not in the classify set must
be in the averaging set, and all terms must contribute to the predictions.