PlumX Metrics

The flying sidekick traveling salesman problem with stochastic travel time: A reinforcement learning approach

Transportation Research Part E: Logistics and Transportation Review, ISSN: 1366-5545, Vol: 164, Page: 102816
2022
  • Citations: 47
  • Usage: 0
  • Captures: 57
  • Mentions: 1
  • Social Media: 0

Metrics Details

  • Citations: 47
    • Citation Indexes: 47
  • Captures: 57
  • Mentions: 1
    • News Mentions: 1

Most Recent News

Researchers from University of Tennessee Detail New Studies and Findings in the Area of Mathematics (The Flying Sidekick Traveling Salesman Problem With Stochastic Travel Time: a Reinforcement Learning Approach)

2023 MAY 16 (NewsRx) -- By a News Reporter-Staff News Editor at Math Daily News -- Data detailed on Mathematics have been presented. According to …

Article Description

As a novel urban delivery approach, the coordinated operation of a truck–drone pair has gained increasing popularity: the truck follows a traveling salesman route while the drone launches from the truck to deliver packages to nearby customers. Previous studies have referred to this problem as the flying sidekick traveling salesman problem (FSTSP) and have proposed numerous algorithms to solve it. However, few studies have considered the stochasticity of travel time on the road network, caused mainly by traffic congestion, harsh weather conditions, and similar factors, which heavily impacts the speed of the truck and thus affects the drone's operations and the overall delivery route. In this study, we extend the FSTSP with stochastic travel times and formulate the problem as a Markov decision process (MDP). The model is solved using reinforcement learning (RL) algorithms, including the deep Q-network (DQN) and Advantage Actor-Critic (A2C), to overcome the curse of dimensionality. Using an artificially generated dataset that is widely accepted as a benchmark in the literature, we show that the RL algorithms perform well as approximate optimization algorithms, outperforming a mixed integer programming (MIP) model and a local search heuristic on the original FSTSP without stochastic travel time. On the FSTSP with stochastic travel time, the RL algorithms obtain flexible policies that make dynamic decisions based on the current traffic conditions on the roads, saving up to 28.65% of delivery time compared with the MIP model and a dynamic local search (DLS) algorithm. We also conduct a case study using real-time traffic data collected via the Google Maps API in a mid-sized city in the U.S. Compared with a benchmark computed by the DLS, the deep reinforcement learning (DRL) approach saves 32.68% of total delivery time in the case study, showing great potential for practical adoption.
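To make the MDP formulation in the abstract concrete, here is a minimal Python sketch (illustrative only, not the authors' code) that casts a single-vehicle routing problem with lognormal travel-time noise as an MDP and trains it with tabular Q-learning as a simplified stand-in for the paper's DQN/A2C agents. It omits the drone-coordination side of the FSTSP, and every name, constant, and distribution choice below is an assumption for illustration.

# Toy MDP for routing under stochastic travel time, solved with
# tabular Q-learning (a simplified stand-in for the paper's DQN/A2C).
import numpy as np

rng = np.random.default_rng(0)
N = 6                                            # number of customers (toy size)
coords = rng.uniform(0, 10, size=(N + 1, 2))     # node 0 is the depot
base_time = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

def travel_time(i, j):
    """Stochastic travel time: base distance scaled by lognormal congestion."""
    return base_time[i, j] * rng.lognormal(mean=0.0, sigma=0.3)

def step(node, visited, action):
    """MDP transition: move to `action`, pay the realized travel time."""
    t = travel_time(node, action)
    new_visited = visited | {action}
    done = len(new_visited) == N
    # Reward is negative travel time; add the return leg when the tour ends.
    reward = -t - (travel_time(action, 0) if done else 0.0)
    return action, new_visited, reward, done

Q = {}                                           # Q[(node, frozenset(visited))]
alpha, gamma, eps = 0.1, 1.0, 0.2

def q_values(node, visited):
    return Q.setdefault((node, frozenset(visited)), np.zeros(N + 1))

for episode in range(20000):
    node, visited, done = 0, set(), False
    while not done:
        candidates = [a for a in range(1, N + 1) if a not in visited]
        if rng.random() < eps:                   # epsilon-greedy exploration
            a = int(rng.choice(candidates))
        else:
            qv = q_values(node, visited)
            a = max(candidates, key=lambda c: qv[c])
        nxt, new_visited, r, done = step(node, visited, a)
        target = r
        if not done:                             # bootstrap from the next state
            rest = [c for c in range(1, N + 1) if c not in new_visited]
            target += gamma * max(q_values(nxt, new_visited)[c] for c in rest)
        qv = q_values(node, visited)
        qv[a] += alpha * (target - qv[a])
        node, visited = nxt, new_visited

# Greedy rollout with the learned policy (one stochastic realization).
node, visited, total, done = 0, set(), 0.0, False
while not done:
    cands = [a for a in range(1, N + 1) if a not in visited]
    a = max(cands, key=lambda c: q_values(node, visited)[c])
    node, visited, r, done = step(node, visited, a)
    total -= r
print(f"greedy tour time: {total:.2f}")

The state here (current location plus the set of unserved customers) is exactly what makes the exact problem blow up combinatorially; a DQN replaces the lookup table Q with a neural network over a feature encoding of that state, which is how the paper's approach sidesteps the curse of dimensionality mentioned above.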
