This paper presents a learning-based (i.e., data-driven) approach to motion planning for robotic systems. It is motivated by controller synthesis problems for safety-critical systems, where an accurate estimate of the uncertainties (e.g., unmodeled dynamics, disturbances) can improve system performance. The state space of the system is abstracted by sampling from the state set as well as the input set of the underlying system. The robust adaptive motion planning problem is modeled as a pursuit-evasion differential game in which a machine-learning algorithm updates statistical estimates of the uncertainties from system observations. The system begins with a conservative estimate of the uncertainty set to ensure safety of the underlying system, and the robustness constraints are relaxed as better estimates of the unmodeled uncertainty become available. The estimates from the machine-learning algorithm are used to refine the controller in an anytime fashion. We show that the value of the game converges to the optimal value under known disturbance, provided the statistical estimates of the uncertainty converge. Using confidence intervals for the unmodeled disturbance computed by the machine-learning estimator, we guarantee safety of the robotic system under the proposed algorithms during the transient learning phase.
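The anytime refinement scheme described above — start from a conservative disturbance bound, then tighten it as observations accumulate while never exceeding the prior bound — can be illustrated with a minimal sketch. This is not the paper's estimator; it assumes a generic Hoeffding-style confidence interval on observed disturbance residuals, and all names (`confidence_bound`, `d_max_prior`, `delta`) are illustrative.

```python
import math

def confidence_bound(samples, d_max_prior, delta=0.05):
    """Upper confidence bound on the disturbance magnitude.

    Starts from the conservative prior bound d_max_prior and tightens it
    as residual observations accumulate. Uses a Hoeffding half-width for
    samples assumed bounded in [0, d_max_prior]; this is an illustrative
    stand-in for the paper's machine-learning estimator.
    """
    n = len(samples)
    if n == 0:
        # No data yet: fall back on the conservative prior, so the
        # planner remains robust to the worst-case disturbance.
        return d_max_prior
    mean = sum(samples) / n
    half_width = d_max_prior * math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    # Clipping at the prior guarantees the bound used by the planner
    # is never less conservative than the initial safe estimate.
    return min(d_max_prior, mean + half_width)

# Anytime use: the planner queries the current bound after each
# observation; safety holds throughout the transient learning phase
# because the bound never exceeds the conservative prior.
prior = 1.0
observed = []
for residual in [0.12, 0.08, 0.15, 0.10, 0.09, 0.11, 0.13, 0.10]:
    observed.append(residual)
    bound = confidence_bound(observed, prior)
```

After enough observations the confidence half-width shrinks, so the bound drops below the prior and the robustness constraint on the controller can be relaxed accordingly.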