Motivated by big data applications, we consider unconstrained stochastic optimization problems. Stochastic quasi-Newton methods have proven successful in addressing such problems; however, in both convex and non-convex regimes, most existing convergence theory requires the gradient mapping of the objective function to be Lipschitz continuous, a requirement that may fail to hold. To address this gap, we consider problems whose gradients are not necessarily Lipschitz continuous. Employing a local smoothing technique, we develop a smoothing stochastic quasi-Newton (S-SQN) method. Our main contributions are threefold: (i) under suitable assumptions, we show that the sequence generated by the S-SQN scheme converges almost surely to the unique optimal solution of the smoothed problem; (ii) we derive an error bound in terms of the smoothed objective function values; and (iii) to quantify solution quality, we derive a bound relating the iterates generated by the S-SQN method to the optimal solution of the original problem.
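The abstract names the method but not its details. The sketch below is only a minimal illustration of the general idea it describes, combining a randomized (Gaussian) smoothing gradient estimator with a BFGS-style inverse-Hessian update and diminishing step sizes; every function name, parameter value, and safeguard here (`smoothed_grad`, `s_sqn`, the batch size, the reset of `H`) is an assumption for illustration, not the authors' scheme.

```python
import numpy as np

def smoothed_grad(f, x, mu, rng, batch=8):
    """Gaussian-smoothing gradient estimator of f_mu(x) = E_u[f(x + mu*u)].

    This is a standard randomized-smoothing estimator; the paper's own
    local smoothing technique may differ.
    """
    d = x.size
    g = np.zeros(d)
    for _ in range(batch):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / batch

def s_sqn(f, x0, mu=0.1, steps=300, lr=0.5, seed=0):
    """Hypothetical smoothing stochastic quasi-Newton loop (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    H = np.eye(d)                              # inverse-Hessian approximation
    g = smoothed_grad(f, x, mu, rng)
    for k in range(steps):
        x_new = x - lr / np.sqrt(k + 1) * H @ g   # diminishing step size
        g_new = smoothed_grad(f, x_new, mu, rng)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-10:                          # curvature check keeps H positive definite
            rho = 1.0 / sy
            V = np.eye(d) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
            if np.linalg.norm(H) > 1e3:         # crude reset against noise-driven blow-up
                H = np.eye(d)
        x, g = x_new, g_new
    return x

# Toy nonsmooth objective: f(x) = ||x - 1||_1, whose gradient is not Lipschitz
# continuous at the kink, matching the problem class discussed above.
f = lambda x: np.abs(x - 1.0).sum()
x_final = s_sqn(f, np.array([3.0, -2.0]))
```

Run on this toy objective, the loop drives the iterate from the starting point toward the minimizer of the smoothed surrogate; the step-size schedule and the safeguard on `H` are pragmatic choices for the sketch rather than conditions from the paper's analysis.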