Deep Reinforcement Learning for Dynamic Twin Automated Stacking Cranes Scheduling Problem
Electronics (Switzerland), ISSN: 2079-9292, Vol: 12, Issue: 15
2023
- 1 Citation
- 4 Captures
- 2 Mentions
Metric Options: Counts. Selecting the 1-year or 3-year option will change the metrics count to percentiles, illustrating how an article or review compares to other articles or reviews within the selected time period in the same journal. Selecting the 1-year option compares the metrics against other articles/reviews that were also published in the same calendar year. Selecting the 3-year option compares the metrics against other articles/reviews that were also published in the same calendar year plus the two years prior.
Example: if you select the 1-year option for an article published in 2019 and a metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019. If you select the 3-year option for the same article published in 2019 and the metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019, 2018 and 2017.
Citation Benchmarking is provided by Scopus and SciVal and is different from the metrics context provided by PlumX Metrics.
Most Recent Blog
Electronics, Vol. 12, Pages 3288: Deep Reinforcement Learning for Dynamic Twin Automated Stacking Cranes Scheduling Problem. doi: 10.3390/electronics12153288. Authors: Xin Jin Nan Mi Wen
Most Recent News
Studies from Shandong University Yield New Data on Electronics (Deep Reinforcement Learning for Dynamic Twin Automated Stacking Cranes Scheduling Problem)
2023 AUG 10 (NewsRx) -- By a News Reporter-Staff News Editor at Electronics Daily -- Researchers detail new data in electronics. According to news reporting
Article Description
Effective dynamic scheduling of twin Automated Stacking Cranes (ASCs) is essential for improving the efficiency of automated storage yards. While Deep Reinforcement Learning (DRL) has shown promise in a variety of scheduling problems, the dynamic twin ASCs scheduling problem is challenging owing to its unique attributes, including the dynamic arrival of containers, sequence-dependent setup and potential ASC interference. A novel DRL method is proposed in this paper to minimize the ASC run time and traffic congestion in the yard. Considering the information interference from ineligible containers, dynamic masked self-attention (DMA) is designed to capture the location-related relationship between containers. Additionally, we propose local information complementary attention (LICA) to supplement congestion-related information for decision making. The embeddings grasped by the LICA-DMA neural architecture can effectively represent the system state. Extensive experiments show that the agent can learn high-quality scheduling policies. Compared with rule-based heuristics, the learned policies have significantly better performance with reasonable time costs. The policies also exhibit impressive generalization ability in unseen scenarios with various scales or distributions.
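The description above mentions dynamic masked self-attention (DMA), which blocks information flow from ineligible containers (e.g. those that have not yet arrived) when computing attention between containers. The paper's exact DMA/LICA architecture is not given here; the following is a minimal, generic sketch of the masking idea using a single-head scaled dot-product self-attention with identity projections, where `eligible` marks which containers may be attended to (all names and shapes are illustrative assumptions):

```python
import numpy as np

def masked_self_attention(x, eligible):
    """Single-head self-attention over container feature vectors,
    zeroing out attention to ineligible containers.

    x        : (n, d) array of container embeddings
    eligible : (n,) boolean array; False marks an ineligible container
    """
    d = x.shape[1]
    # Scaled dot-product scores; Q, K, V are identity projections in this sketch.
    scores = x @ x.T / np.sqrt(d)      # (n, n)
    scores[:, ~eligible] = -np.inf     # mask: no attention paid to ineligible columns
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x                 # (n, d) attended embeddings

# Toy example: 4 containers, the last one has not arrived yet.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
eligible = np.array([True, True, True, False])
out = masked_self_attention(x, eligible)
```

Because the masked columns receive zero attention weight, the eligible containers' outputs are invariant to the ineligible container's features, which is the property the description attributes to DMA.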
Bibliographic Details