The characterization and quantification of dynamic measurements are ongoing areas of research in the metrological community, as new calibration methods are being developed to address dynamic measurement applications. In the work undertaken to date, it has largely been assumed that nominally linear transducers can be treated as linear, both when deconvolving the input from the response and when performing system identification. To quantify the errors that arise from these assumptions, this article studies the effects of weak nonlinearities in transducers that are assumed to behave linearly under dynamic excitation. Specifically, a set of first-order and second-order systems, which can model many transducers with weak nonlinearities, is used to numerically quantify the systematic errors due to the linear assumptions underlying the deconvolution. The presented results show how different error metrics evolve over a large parameter space of possible transducers. Additionally, the quantification of the errors due to linear assumptions in system identification is demonstrated using a time-series sparse-regression system-identification strategy. It is shown that the errors generated from linear identification of a nonlinear transducer can counteract the systematic errors that arise in linear deconvolution when the linear system identification is performed under similar loading conditions. In general, the methodology and results presented here can be useful for understanding the effect of nonlinearity in the deconvolution of single-degree-of-freedom transient dynamics and, specifically, for specifying error metrics for transducers with known weak nonlinearities.
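The two effects described above can be illustrated with a minimal numerical sketch. All parameter values below are assumptions chosen for illustration, not values from the article: a second-order transducer with a weak Duffing-type cubic stiffness is driven by a Gaussian force pulse, the input is deconvolved with a purely linear model, and then a linear-terms-only least-squares fit (a simple stand-in for a sparse-regression identification restricted to linear candidate functions) is performed on the same response data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical weakly nonlinear second-order transducer (illustrative values):
#   m*x'' + c*x' + k*x + k3*x**3 = u(t)
m, zeta, fn = 1.0, 0.05, 10.0                  # mass, damping ratio, natural freq [Hz]
wn = 2.0 * np.pi * fn
k, c = m * wn**2, 2.0 * zeta * m * wn          # linear stiffness and damping
k3 = 0.1 * k / 0.025**2                        # weak cubic stiffness (assumed)

t = np.linspace(0.0, 1.0, 10001)

def pulse(ti):
    """Gaussian force pulse used as the 'true' dynamic input."""
    return 100.0 * np.exp(-((ti - 0.1) / 0.01) ** 2)

u_true = pulse(t)

def rhs(ti, y):
    x, v = y
    return [v, (pulse(ti) - c * v - k * x - k3 * x**3) / m]

sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)
x, v = sol.y
a = np.gradient(v, t)                          # numerically estimated acceleration

# (1) Deconvolution under the linear assumption: u_est = m*a + c*v + k*x.
#     The neglected k3*x**3 term appears as a systematic error.
u_lin = m * a + c * v + k * x
err_lin = np.linalg.norm(u_lin - u_true) / np.linalg.norm(u_true)

# (2) Linear identification on the same (nonlinear) response data:
#     least-squares fit of u ~ theta_a*a + theta_v*v + theta_x*x.
Theta = np.column_stack([a, v, x])
coef, *_ = np.linalg.lstsq(Theta, u_true, rcond=None)
u_id = Theta @ coef
err_id = np.linalg.norm(u_id - u_true) / np.linalg.norm(u_true)
```

In this sketch `err_lin` is the relative deconvolution error using the true linear coefficients, while `err_id` uses coefficients identified from the nonlinear response under the same loading. Because the fit minimizes the residual over the same linear regressors, `err_id` cannot exceed `err_lin`, which mirrors the counteracting effect described in the abstract; the identified stiffness and damping absorb part of the neglected cubic term.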