In this paper, we study the optimal sampling policy for an energy harvesting sensing system designed to estimate a wide-sense stationary random process from discrete-time samples collected by a sensor. Energy at the sensor is consumed by taking observations and is replenished randomly by harvesting from the ambient environment. Our goal is to identify the sampling policy that minimizes the estimation mean squared error (MSE) under stochastic energy constraints. The problem can be formulated as a stochastic program, which is generally difficult to solve. We identify an asymptotically optimal solution by exploiting the properties of random processes with power-law decaying covariance. Specifically, with the help of a newly derived inverse covariance matrix of the random process, we show that the linear minimum MSE (MMSE) estimator of the random process exhibits a Markovian property: the optimal estimate of any point in a time segment bounded by two consecutive samples can be computed from the two bounding samples alone, ignoring all other samples. This Markovian property enables us to derive a lower bound on the long-term average MSE. Motivated by the structure of this lower bound, we then propose a simple best-effort sampling scheme that respects the stochastic energy constraints. We show that the best-effort scheme is asymptotically optimal in the sense that, for almost every energy harvesting sample path, it achieves the MSE lower bound as time becomes large.
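
The Markovian property of LMMSE estimation can be illustrated numerically in the classic Gauss-Markov setting with an exponentially decaying covariance; this is a hedged sketch, not the paper's power-law case, and the sample instants below are hypothetical. The LMMSE weight vector places (numerically) zero weight on every sample other than the two bounding ones:

```python
import numpy as np

# Illustration of the Markovian LMMSE property in the exponential-covariance
# (Gauss-Markov) case; the paper's contribution extends a property of this
# kind to power-law decaying covariances.
rho = 0.8
samples = np.array([0, 4, 10, 14])  # hypothetical sampling instants
t = 7                               # point to estimate, bounded by samples 4 and 10

# Covariance among the samples, and between t and the samples
K_ss = rho ** np.abs(samples[:, None] - samples[None, :])
k_ts = rho ** np.abs(t - samples)

# LMMSE weights: w = K_ss^{-1} k_ts
w = np.linalg.solve(K_ss, k_ts)
print(w)  # weights on the non-bounding samples (0 and 14) vanish
```

Only the weights on the bounding samples at instants 4 and 10 are nonzero, so the estimate of the point at t = 7 depends on those two samples alone.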