PlumX Metrics

Deep Q Network Method for Dynamic Job Shop Scheduling Problem

Lecture Notes in Networks and Systems, ISSN: 2367-3389, Vol: 771 LNNS, Page: 137-155
2023
  • Citations: 2
  • Usage: 0
  • Captures: 4
  • Mentions: 0
  • Social Media: 0


Conference Paper Description

Rule-based heuristic methods are commonly used for scheduling in production environments, but their effectiveness depends heavily on expert domain knowledge. As a result, decision-making performance cannot be guaranteed, nor can the dynamic scheduling demands of job-shop production environments be met. Dynamic Job Shop Scheduling Problems (DJSSPs) have therefore received increased attention from researchers in recent decades. However, the potential of reinforcement learning (RL) approaches for solving DJSSPs has not yet been fully realized. In this paper, we apply a Deep Reinforcement Learning (DRL) approach to the DJSSP to minimize the makespan. A Deep Q Network (DQN) algorithm is designed with state features, actions, and rewards. Finally, the performance of the proposed solution is compared against other algorithms and benchmark studies on two categories of benchmark instances. The empirical results show that the proposed DRL approach outperforms other DRL methods and dispatching rules (heuristics).
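The abstract does not include implementation details. As a rough illustration of the general idea it describes (states as features of the shop, actions as dispatching rules, rewards tied to the scheduling objective), here is a minimal self-contained sketch. This is not the paper's algorithm: it uses a linear Q-function instead of a deep network, a single-machine toy environment instead of a job shop, and total flow time rather than makespan as the objective (on one machine, makespan is order-invariant). All function names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 3  # state: [queue length, mean processing time, min processing time]

def features(queue):
    """Illustrative state features for the remaining job queue."""
    q = np.asarray(queue, dtype=float)
    return np.array([len(q), q.mean(), q.min()])

def run_episode(jobs, act, learn=None):
    """Process all jobs on one machine. `act(state)` returns an action:
    0 = FIFO (pop the first job), 1 = SPT (pop the shortest job).
    Returns the total flow time (sum of job completion times)."""
    queue, t, flow = list(jobs), 0.0, 0.0
    while queue:
        s = features(queue)
        a = act(s)
        idx = int(np.argmin(queue)) if a == 1 else 0
        t += queue.pop(idx)
        flow += t
        r = -t  # immediate cost: this job's completion time
        s_next = features(queue) if queue else np.zeros(N_FEATURES)
        if learn is not None:
            learn(s, a, r, s_next, done=not queue)
    return flow

# Linear Q-function standing in for the deep network: Q(s, a) = W[a] @ s.
W = np.zeros((2, N_FEATURES))

def td_update(s, a, r, s_next, done, lr=1e-3, gamma=1.0):
    """Semi-gradient TD(0) update, the core of a DQN-style learner
    (without replay buffer or target network, for brevity)."""
    target = r + (0.0 if done else gamma * (W @ s_next).max())
    W[a] += lr * (target - (W @ s)[a]) * s

# Train with epsilon-greedy exploration on random job batches.
EPS = 0.2
def explore(s):
    if rng.random() < EPS:
        return int(rng.integers(2))
    return int(np.argmax(W @ s))

for _ in range(500):
    jobs = rng.integers(1, 10, size=5).tolist()
    run_episode(jobs, explore, learn=td_update)

# Compare fixed dispatching rules on a hand-checked instance.
print(run_episode([3, 1, 2], lambda s: 0))  # FIFO: 3 + 4 + 6 = 13.0
print(run_episode([3, 1, 2], lambda s: 1))  # SPT:  1 + 3 + 6 = 10.0
```

In this toy setting SPT is provably optimal for total flow time, so a useful sanity check on any learned policy is whether its greedy action choices match SPT; the paper's contribution is handling the dynamic job-shop case, where no single rule dominates.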
