We consider a constrained Nash-Cournot oligopoly where the demand function is linear. While cost functions and capacities are public information, firms have only partial information regarding the demand function. Specifically, firms know either the intercept or the slope of the demand function and cannot observe aggregate output. We consider a learning process in which firms update their profit-maximizing quantities and their beliefs regarding the unknown demand function parameters, based on discrepancies between observed and estimated prices. A characterization of the mappings corresponding to the fixed points of the learning process is provided. This result paves the way for developing a Tikhonov regularization scheme that is shown to learn the correct equilibrium, in spite of the multiplicity of equilibria. Despite the absence of monotonicity of the gradient maps, we prove the convergence of constant and diminishing steplength distributed gradient schemes under a suitable condition on the starting points. Notably, precise rate-of-convergence estimates are provided for the constant steplength schemes.
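To fix ideas, the combination of a projected distributed gradient scheme with Tikhonov regularization can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes a fully known linear inverse demand p(Q) = a - bQ (whereas the paper treats firms with partial demand information), and the parameter values a, b, the cost vector c, the capacity vector cap, and the steplength gamma are all hypothetical.

```python
import numpy as np

# Hypothetical instance: 3-firm Cournot game with linear inverse demand
# p(Q) = a - b*Q, linear marginal costs c_i, and capacity limits cap_i.
a, b = 10.0, 1.0                      # demand intercept and slope (assumed known here)
c = np.array([1.0, 2.0, 1.5])         # marginal costs (public information)
cap = np.array([4.0, 4.0, 4.0])       # capacities (public information)
N = len(c)

def regularized_gradient(q, eps):
    """Gradient of firm i's Tikhonov-regularized profit w.r.t. its own q_i:
    profit_i = (a - b*Q)*q_i - c_i*q_i - (eps/2)*q_i**2, with Q = sum(q)."""
    Q = q.sum()
    return a - b * Q - b * q - c - eps * q

q = np.zeros(N)                        # starting point
gamma = 0.05                           # constant steplength
for k in range(20000):
    eps = 1.0 / np.sqrt(1.0 + k)       # diminishing regularization parameter
    # each firm takes a gradient step and projects onto [0, cap_i]
    q = np.clip(q + gamma * regularized_gradient(q, eps), 0.0, cap)

print(q)  # approaches the Cournot equilibrium as eps -> 0
```

For this particular instance the capacity constraints turn out to be inactive, so the iterates approach the classical interior Cournot equilibrium q_i = (a - b*Q - c_i)/b with Q = (N*a - sum(c))/(b*(N+1)); as the regularization parameter eps vanishes, the bias it introduces disappears.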