HP RXForecast Users Manual for MPE Systems

Validating a Forecast 

Validating a model means confirming that it reasonably represents the
system it is designed to model.  For a forecasting model, this means
testing whether the forecasts it generates are likely to be accurate
enough to be useful.

You can use any or all of the following tools to validate the
forecasting model.  In many cases, just a few of them are enough to
determine whether a forecast is valid.

   *   Common sense. 

       Common sense is easily the single most important tool for testing
       a forecast's validity.  The experienced system manager understands
       the variables that affect system performance and knows what
       behavior is reasonable.  Applying such reasonability tests helps
       you avoid ludicrous results.

The following empirical (experimental) method allows you to test a
forecasting method by comparing a forecast it produces (for a period of
time in the past) with what really happened.

   *   Validate date. 

       The Validate date option enables you to ignore a set of data
       during the forecast, and then compare the resulting prediction
       with the previously ignored (validate) data.  If the trending
       method is valid, there should be a strong correlation between the
       forecast data and the validate data.

       If the forecast and validate data differ significantly, the
       forecasting method was not appropriate and the resulting forecast
       is invalid.  For instance, in the example in "Examining a CPU
       Utilization Case", a forecast proved invalid; the discrepancy was
       explained by there being too little resource to allow growth.
       (A sketch of this kind of hold-out comparison appears after this
       list.)

       See "Dates Option".

   *   Intervals. 

       Intervals, which give upper and lower bounds on a forecast, can
       help you validate a forecast.  A narrow interval (small spread
       between the upper and lower bounds) tends to indicate a valid or
       credible forecast.  Conversely, a broad interval may indicate a
       less credible forecast.  Intervals--whether prediction or
       confidence--allow for best-case and worst-case analysis.

   *   Statistical measures. 

       From every forecast graph you can generate a textual report (using
       the Stats command) that displays, among other information, the
       following statistical measures:

          *   R-Squared.

          *   T-Statistic.

          *   T-Probability.

          *   Mean Squared Error (MSE) and Standard Error.

       These statistical measures can be used as an indication of the
       validity of the forecast.  For a detailed explanation of the
       statistical measures, see "Statistical Measures" in this chapter.
       (A sketch showing how analogous measures can be computed for a
       simple linear trend appears later in this section.)

   *   Residual analysis. 

       You can do a residual analysis by exporting the forecast graph to
       a Lotus 1-2-3 worksheet.  A named graph called RESIDUALS plots the
       residuals (the differences between the forecast and the actual
       values).  The sketch after this list also computes residuals
       directly from a forecast and its validate data.
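
To make the Validate date and residual-analysis checks concrete, the
following sketch illustrates the same idea outside RXForecast.  It is
only an illustration, not RXForecast code: it assumes weekly CPU
utilization measurements held in a Python array, fits a straight-line
trend to the earlier data with numpy, forecasts the held-out (validate)
period, and then examines the residuals.  The data values, the 12-week
validate window, and the error threshold are all invented for the
example.

    import numpy as np

    # Invented example data: 36 weekly average CPU-utilization samples (%).
    cpu = np.array([32, 33, 35, 34, 36, 38, 37, 39, 41, 40, 42, 44,
                    43, 45, 47, 46, 48, 50, 49, 51, 53, 52, 54, 56,
                    55, 57, 59, 58, 60, 62, 61, 63, 65, 64, 66, 68],
                   dtype=float)
    weeks = np.arange(len(cpu))

    # Hold out the last 12 weeks as validate data, analogous to setting
    # a Validate date.
    n_validate = 12
    train_x, train_y = weeks[:-n_validate], cpu[:-n_validate]
    valid_x, valid_y = weeks[-n_validate:], cpu[-n_validate:]

    # Fit a straight-line trend to the training data only, then forecast
    # the validate period.
    slope, intercept = np.polyfit(train_x, train_y, deg=1)
    forecast = slope * valid_x + intercept

    # Residuals: the difference between the forecast and the actuals.
    residuals = forecast - valid_y
    mean_abs_err = np.mean(np.abs(residuals))

    print("forecast :", np.round(forecast, 1))
    print("actual   :", valid_y)
    print("residuals:", np.round(residuals, 1))

    # A crude reasonability test; the 5-point threshold is arbitrary.
    if mean_abs_err > 5.0:
        print("Forecast and validate data differ significantly.")
    else:
        print("Forecast tracks the validate data reasonably well.")

If the residuals show a systematic pattern, for example all positive and
growing, the straight-line trend is probably not an appropriate model
for the data even if the average error looks small.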

You can use one, several, or all of these validity checks to analyze your
forecast.  Typically, you will use common sense, confidence intervals,
and the Stats report to validate your forecasts.
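
As an informal illustration of what the Stats report's measures mean,
the following sketch computes analogous quantities for a simple linear
trend using numpy and scipy, together with a 90% prediction interval of
the kind described under Intervals above.  It is not RXForecast code;
the disk-space data, the 90% level, and the 12-week forecast horizon are
assumptions made for the example.

    import numpy as np
    from scipy import stats

    # Invented example data: 24 weekly measurements of disk space used (GB).
    y = np.array([110, 112, 115, 114, 118, 121, 120, 123, 126, 125, 129, 131,
                  130, 134, 136, 135, 139, 141, 142, 144, 147, 146, 150, 152],
                 dtype=float)
    x = np.arange(len(y), dtype=float)
    n = len(y)

    # Least-squares linear trend.
    fit = stats.linregress(x, y)
    predicted = fit.intercept + fit.slope * x
    resid = y - predicted

    r_squared = fit.rvalue ** 2           # fraction of variance explained
    t_statistic = fit.slope / fit.stderr  # tests whether the slope is nonzero
    t_probability = fit.pvalue            # two-sided probability for that test
    mse = np.sum(resid ** 2) / (n - 2)    # mean squared error of the fit
    std_error = np.sqrt(mse)              # standard error of the estimate

    print(f"R-Squared     : {r_squared:.3f}")
    print(f"T-Statistic   : {t_statistic:.2f}")
    print(f"T-Probability : {t_probability:.4f}")
    print(f"MSE           : {mse:.2f}    Standard Error: {std_error:.2f}")

    # A 90% prediction interval for one point 12 weeks past the data,
    # giving the upper and lower bounds an interval provides.
    x0 = n + 12
    y0 = fit.intercept + fit.slope * x0
    t_crit = stats.t.ppf(0.95, df=n - 2)
    half_width = t_crit * std_error * np.sqrt(
        1 + 1 / n + (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
    print(f"Week {x0}: forecast {y0:.1f} GB, 90% prediction interval "
          f"{y0 - half_width:.1f} to {y0 + half_width:.1f} GB")

In this made-up data the trend is strong, so R-Squared is close to 1,
T-Probability is very small, and the prediction interval is narrow;
noisier data would weaken the measures and widen the interval.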

If the forecast fails any of the validity checks, it is not valid.  You
can either repeat the forecasting process to try to develop a valid
forecast, or decide that a valid forecast is not feasible.  It is
possible that the computer system has behaved in an unpredictable manner
during the measurement period that you selected.


