Computer simulations are valued in science and engineering because they enable us to gain knowledge about phenomena that would otherwise be difficult to understand. Our dependence on simulations stems primarily from our inability to conduct a sufficient number of experiments in the desired settings or with sufficient detail. However, if one could conduct a large enough number of experiments, it is reasonable to envision that a simulation model could be calibrated to the point that its predictive uncertainty is reduced to uncontrolled, natural variability. We inductively conclude that, as new experimental information is used for calibration, the calibrated parameters should stabilize and, thus, the disagreement between simulations and experiments should be reduced to "true" bias. We propose to use the stabilization of this incremental improvement to assess the predictive maturity of a model. Accordingly, we develop a Prediction Convergence Index (PCI) that approximates the convergence of predictions to their "true" or stabilized values or, conversely, can be used to estimate the number of experimental tests required to reach stabilization of predictions. The application of the PCI is illustrated using a Preston-Tonks-Wallace material model for tantalum and six experimental datasets in the form of stress–strain curves. Once the predictive maturity of a model has been assessed, we argue that it is acceptable to extrapolate its predictions away from the settings or regimes where validation tests have been conducted, as long as the physics involved and modeled by the code remains unchanged. For the given model, the extent to which extrapolation and interpolation are acceptable is investigated. The results agree with our hypothesis and suggest that the proposed approach can prove useful for declaring completion of the calibration phase and for providing insight into the predictive maturity of numerical models.
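The core idea of stabilization under accumulating calibration data can be illustrated with a minimal sketch. This is not the paper's PCI definition; it is a toy Python example under stated assumptions (a single hypothetical parameter, Gaussian measurement noise, calibration by least squares, and a hypothetical tolerance `TOL`) showing how a calibrated value and its incremental change settle as more "experiments" are incorporated.

```python
# Toy illustration (not the paper's PCI): recalibrate a single parameter
# as synthetic experimental datasets accumulate, and watch the increment
# between successive calibrations shrink toward stabilization.
import random

random.seed(0)
TRUE_VALUE = 2.5   # hypothetical "true" parameter value
NOISE = 0.3        # uncontrolled, natural variability

# Synthetic experiments: noisy observations of the true parameter.
experiments = [TRUE_VALUE + random.gauss(0.0, NOISE) for _ in range(200)]

def calibrate(data):
    """Toy calibration: the least-squares estimate here is just the mean."""
    return sum(data) / len(data)

# Recalibrate with growing datasets and track the incremental change.
estimates = [calibrate(experiments[:n]) for n in range(1, len(experiments) + 1)]
increments = [abs(b - a) for a, b in zip(estimates, estimates[1:])]

# A simple (hypothetical) stabilization criterion: the increment
# produced by one more experiment falls below a tolerance.
TOL = 0.01
stable_at = next(n for n, d in enumerate(increments, start=2) if d < TOL)
print(f"calibrated value with all data: {estimates[-1]:.3f}")
print(f"first dataset size with increment below {TOL}: {stable_at}")
```

The residual disagreement that remains after stabilization plays the role of the "true" bias described above: adding further experiments changes the calibrated value by less than the tolerance, so calibration can be declared complete.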