The optimal routing of a vacant taxi is formulated as a Markov Decision Process (MDP) problem to account for long-term profit over the full working period. The state is defined by the node at which a vacant taxi is located, and the action is the link to take out of the node. State transition probabilities depend on passenger matching probabilities and passenger destination probabilities. The probability that a vacant taxi is matched with a passenger during the traversal of a link is calculated based on temporal Poisson arrivals of passengers and spatial Poisson distributions of competing vacant taxis. Passenger destination probabilities are calculated directly from the observed fractions of passengers traveling to each destination from a given origin. The MDP problem is solved by value iteration, yielding an optimal routing policy, and computational efficiency is improved by utilizing parallelized matrix operations. The proposed model and an efficient implementation of the value iteration algorithm are tested in a case study with parameters derived from GPS trajectories of over 12,000 taxis in Shanghai, China, for a study period of 5:30-11:30 am on a typical weekday. The optimal routing policy is compared with three heuristics based on simulated trajectories. Results show that the optimal routing policy improves average unit profit by 23.0% and 8.4% over the random walk and local hotspot heuristics, respectively, and improves occupancy rate by 23.8% and 8.3%, respectively. The improvement is larger during higher-demand periods.
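The MDP structure described above (node states, link actions, matching and destination probabilities, value iteration) can be sketched in miniature. This is a toy illustration only: the three-node network, matching probabilities, destination distributions, fares, costs, and discount factor below are invented for demonstration and are not parameters from the study.

```python
# Toy value iteration for vacant-taxi routing on a 3-node network.
# All numbers here are hypothetical, not calibrated from Shanghai data.

# links[i]: nodes reachable by one link from node i (the action set at i).
links = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

# p_match[(i, j)]: probability a vacant taxi is matched with a passenger
# while traversing link i -> j (in the paper, derived from Poisson models).
p_match = {(0, 1): 0.3, (0, 2): 0.1, (1, 0): 0.2,
           (1, 2): 0.4, (2, 0): 0.1, (2, 1): 0.2}

# p_dest[o][d]: probability a passenger picked up near o goes to d
# (in the paper, observed fractions of passenger destinations).
p_dest = {0: {1: 0.6, 2: 0.4}, 1: {0: 0.5, 2: 0.5}, 2: {0: 0.7, 1: 0.3}}

fare = {(o, d): 5.0 for o in links for d in links if o != d}  # trip revenue
cost = 1.0    # travel cost per link traversal
gamma = 0.95  # discount factor standing in for the long-term horizon

def q_value(V, i, j):
    """Expected discounted profit of a vacant taxi taking link i -> j."""
    pm = p_match[(i, j)]
    # Matched: earn the fare to the passenger's destination d, then
    # continue searching from d.
    matched = sum(p * (fare[(j, d)] + gamma * V[d])
                  for d, p in p_dest[j].items())
    # Not matched: arrive vacant at j and keep cruising.
    return -cost + pm * matched + (1 - pm) * gamma * V[j]

# Value iteration: repeatedly apply the Bellman optimality update.
V = {i: 0.0 for i in links}
for _ in range(500):
    V = {i: max(q_value(V, i, j) for j in links[i]) for i in links}

# Optimal routing policy: at each node, take the link with the best Q-value.
policy = {i: max(links[i], key=lambda j: q_value(V, i, j)) for i in links}
```

The paper's implementation instead vectorizes these updates as parallelized matrix operations over the full network; the dictionary loops here are kept only for readability at toy scale.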