We consider a class of stochastic nondifferentiable optimization problems in which the objective is the expectation of a random convex function that is not necessarily differentiable. We propose a local smoothing technique, based on random local perturbations of the objective function, that leads to differentiable approximations of the function. Under the assumption that the local randomness originates from a uniform distribution, we establish a Lipschitzian property for the gradient of the approximation. This facilitates the development of a stochastic approximation framework, which now requires sampling in the product space of the original measure and the artificially introduced distribution. We show that, under suitable assumptions, the resulting stochastic subgradient algorithm with diminishing steplengths, which draws two samples per iteration, converges to an optimal solution of the problem when the subgradients are bounded.
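The two-sample iteration described above can be sketched on a toy instance. The following is a minimal illustration, not the paper's method: all concrete choices (the loss f(x, ω) = |x − ω| with ω uniform on [0, 1], a fixed smoothing radius η, the steplength 1/k) are assumptions made for the example. At each step the algorithm draws one sample ω from the original measure and one local perturbation z from the uniform smoothing distribution, then takes a subgradient step at the perturbed point x + z.

```python
import numpy as np

rng = np.random.default_rng(0)

def subgrad(x, w):
    # A subgradient of f(x, w) = |x - w| with respect to x.
    return np.sign(x - w)

eta = 0.1      # smoothing radius (illustrative choice)
x = 3.0        # initial iterate
K = 200_000    # number of iterations

for k in range(1, K + 1):
    w = rng.uniform(0.0, 1.0)       # sample from the original measure
    z = rng.uniform(-eta, eta)      # sample from the uniform smoothing distribution
    gamma = 1.0 / k                 # diminishing steplength
    # Subgradient step at the randomly perturbed point x + z: an unbiased
    # estimate of the gradient of the smoothed objective at x.
    x -= gamma * subgrad(x + z, w)
```

For this instance the minimizer of E|x − ω| is the median of the uniform distribution, 0.5, and the iterate settles near it; the steplength γ_k = 1/k satisfies the usual stochastic approximation conditions (divergent sum, summable squares).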