
Integrated reinforcement learning of automated guided vehicles dynamic path planning for smart logistics and operations

Author

Listed:
  • Ho, G.T.S.
  • Tang, Yuk Ming
  • Leung, Eric K.H.
  • Tong, P.H.

Abstract

Automated guided vehicles (AGVs) play a critical role in fostering a smarter logistics and operations environment. Conventional path planning for AGVs supports the loading and unloading of items, but existing approaches rarely consider dynamic integration with smart warehouse and factory systems. Therefore, this study presents a reinforcement learning (RL) approach for real-time AGV path planning within smart warehouses and smart factories. Unlike conventional path planning methods, which struggle to adapt to dynamic operational changes, the proposed algorithm integrates real-time information to enable responsive and flexible routing decisions. The novelty of this study lies in integrating AGV path planning and RL within a dynamic environment, such as a smart warehouse containing various workstations, charging stations, and storage locations. Through various scenarios in smart factory settings, this research demonstrates the algorithm’s effectiveness in handling complex logistics and operations environments. This research advances AGV technology by providing a scalable solution for dynamic path planning, enhancing efficiency in modern industrial systems.
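The paper's own algorithm is not reproduced in this record. As a rough illustration of the tabular Q-learning idea that underlies RL path planners of this kind, the sketch below trains an agent to route around obstacles on a toy warehouse grid. The grid layout, reward values, and all hyperparameters are assumptions chosen for the example, not values from the paper.

```python
import random

def train_q_grid(width=5, height=5, start=(0, 0), goal=(4, 4),
                 obstacles=frozenset({(2, 2), (2, 3)}),
                 episodes=2000, alpha=0.5, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy warehouse grid (illustrative only)."""
    rng = random.Random(seed)
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # dy/dx moves: up, down, right, left
    q = {}  # (state, action_index) -> estimated value

    def step(state, a):
        nx, ny = state[0] + actions[a][0], state[1] + actions[a][1]
        if not (0 <= nx < width and 0 <= ny < height) or (nx, ny) in obstacles:
            return state, -1.0      # bumped a wall or obstacle: stay put, penalty
        if (nx, ny) == goal:
            return (nx, ny), 10.0   # reached the goal
        return (nx, ny), -0.1       # small step cost favours short routes

    for _ in range(episodes):
        s = start
        while s != goal:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q.get((s, i), 0.0))
            s2, r = step(s, a)
            # standard Q-learning update toward the bootstrapped target
            best_next = max(q.get((s2, i), 0.0) for i in range(4))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
    return q, step

def greedy_path(q, step, start=(0, 0), goal=(4, 4), limit=50):
    """Follow the learned greedy policy from start toward the goal."""
    path, s = [start], start
    while s != goal and len(path) < limit:
        a = max(range(4), key=lambda i: q.get((s, i), 0.0))
        s, _ = step(s, a)
        path.append(s)
    return path
```

A dynamic variant, as in the paper, would additionally re-plan when workstations, charging stations, or other AGVs change the state of the grid at run time; the tabular sketch above only covers the static learning loop.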

Suggested Citation

  • Ho, G.T.S. & Tang, Yuk Ming & Leung, Eric K.H. & Tong, P.H., 2025. "Integrated reinforcement learning of automated guided vehicles dynamic path planning for smart logistics and operations," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 196(C).
  • Handle: RePEc:eee:transe:v:196:y:2025:i:c:s1366554525000493
    DOI: 10.1016/j.tre.2025.104008

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S1366554525000493
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.tre.2025.104008?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Basso, Rafael & Kulcsár, Balázs & Sanchez-Diaz, Ivan & Qu, Xiaobo, 2022. "Dynamic stochastic electric vehicle routing with safe reinforcement learning," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 157(C).
    2. Zhen Shi & Keyin Wang & Jianhui Zhang, 2023. "Improved reinforcement learning path planning algorithm integrating prior knowledge," PLOS ONE, Public Library of Science, vol. 18(5), pages 1-11, May.
    3. Hokey Min, 2023. "Smart Warehousing as a Wave of the Future," Logistics, MDPI, vol. 7(2), pages 1-12, May.
    4. Xie, Jiaohong & Yang, Zhenyu & Lai, Xiongfei & Liu, Yang & Yang, Xiao Bo & Teng, Teck-Hou & Tham, Chen-Khong, 2022. "Deep reinforcement learning for dynamic incident-responsive traffic information dissemination," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 166(C).
    5. Jianming Cai & Xiaokang Li & Yue Liang & Shan Ouyang, 2021. "Collaborative Optimization of Storage Location Assignment and Path Planning in Robotic Mobile Fulfillment Systems," Sustainability, MDPI, vol. 13(10), pages 1-26, May.
    6. Chung, Sai-Ho, 2021. "Applications of smart technologies in logistics and transport: A review," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 153(C).
    7. Shen, Lixin & Wang, Yaodong & Liu, Kunpeng & Yang, Zaili & Shi, Xiaowen & Yang, Xu & Jing, Ke, 2020. "Synergistic path planning of multi-UAVs for air pollution detection of ships in ports," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 144(C).
    8. Li, Kunpeng & Liu, Tengbo & Ram Kumar, P.N. & Han, Xuefang, 2024. "A reinforcement learning-based hyper-heuristic for AGV task assignment and route planning in parts-to-picker warehouses," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 185(C).
    9. Chen, Xinwei & Ulmer, Marlin W. & Thomas, Barrett W., 2022. "Deep Q-learning for same-day delivery with vehicles and drones," European Journal of Operational Research, Elsevier, vol. 298(3), pages 939-952.
    10. Sven Winkelhaus & Eric H. Grosse, 2020. "Logistics 4.0: a systematic review towards a new logistics system," International Journal of Production Research, Taylor & Francis Journals, vol. 58(1), pages 18-43, January.
    11. Wang, Dujuan & Wang, Qi & Yin, Yunqiang & Cheng, T.C.E., 2023. "Optimization of ride-sharing with passenger transfer via deep reinforcement learning," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 172(C).
    12. Firdausiyah, N. & Taniguchi, E. & Qureshi, A.G., 2019. "Modeling city logistics using adaptive dynamic programming based multi-agent simulation," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 125(C), pages 74-96.
    13. Yan, Yimo & Chow, Andy H.F. & Ho, Chin Pang & Kuo, Yong-Hong & Wu, Qihao & Ying, Chengshuo, 2022. "Reinforcement learning for logistics and supply chain management: Methodologies, state of the art, and future opportunities," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 162(C).
    14. Winkelhaus, S. & Grosse, E. H., 2020. "Logistics 4.0: a systematic review towards a new logistics system," Publications of Darmstadt Technical University, Institute for Business Studies (BWL) 118539, Darmstadt Technical University, Department of Business Administration, Economics and Law, Institute for Business Studies (BWL).
    15. Yan, Yimo & Deng, Yang & Cui, Songyi & Kuo, Yong-Hong & Chow, Andy H.F. & Ying, Chengshuo, 2023. "A policy gradient approach to solving dynamic assignment problem for on-site service delivery," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 178(C).
    16. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    17. Zengliang Han & Dongqing Wang & Feng Liu & Zhiyong Zhao, 2017. "Multi-AGV path planning with double-path constraints by using an improved genetic algorithm," PLOS ONE, Public Library of Science, vol. 12(7), pages 1-16, July.
    18. Ivanov, Dmitry & Dolgui, Alexandre & Sokolov, Boris, 2022. "Cloud supply chain: Integrating Industry 4.0 and digital platforms in the “Supply Chain-as-a-Service”," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 160(C).
    19. Andrew Kusiak, 2017. "Smart manufacturing must embrace big data," Nature, Nature, vol. 544(7648), pages 23-25, April.
    20. Antonio Falcó & Lucía Hilario & Nicolás Montés & Marta C. Mora & Enrique Nadal, 2020. "A Path Planning Algorithm for a Dynamic Environment Based on Proper Generalized Decomposition," Mathematics, MDPI, vol. 8(12), pages 1-11, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Li, Meng & Cai, Kaiquan & Zhao, Peng, 2025. "Optimizing same-day delivery with vehicles and drones: A hierarchical deep reinforcement learning approach," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 193(C).
    2. Liu, Zeyu & Li, Xueping & Khojandi, Anahita, 2022. "The flying sidekick traveling salesman problem with stochastic travel time: A reinforcement learning approach," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 164(C).
    3. Ivanov, Dmitry & Dolgui, Alexandre & Sokolov, Boris, 2022. "Cloud supply chain: Integrating Industry 4.0 and digital platforms in the “Supply Chain-as-a-Service”," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 160(C).
    4. He, Xinyu & He, Fang & Li, Lishuai & Zhang, Lei & Xiao, Gang, 2022. "A route network planning method for urban air delivery," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 166(C).
    5. Menti, Federica & Romero, David & Jacobsen, Peter, 2023. "A technology assessment and implementation model for evaluating socio-cultural and technical factors for the successful deployment of Logistics 4.0 technologies," Technological Forecasting and Social Change, Elsevier, vol. 190(C).
    6. Pourvaziri, H. & Sarhadi, H. & Azad, N. & Afshari, H. & Taghavi, M., 2024. "Planning of electric vehicle charging stations: An integrated deep learning and queueing theory approach," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 186(C).
    7. Kabadurmus, Ozgur & Kayikci, Yaşanur & Demir, Sercan & Koc, Basar, 2023. "A data-driven decision support system with smart packaging in grocery store supply chains during outbreaks," Socio-Economic Planning Sciences, Elsevier, vol. 85(C).
    8. Ninja Soeffker & Marlin W. Ulmer & Dirk C. Mattfeld, 2024. "Balancing resources for dynamic vehicle routing with stochastic customer requests," OR Spectrum: Quantitative Approaches in Management, Springer;Gesellschaft für Operations Research e.V., vol. 46(2), pages 331-373, June.
    9. Ding, Yida & Wandelt, Sebastian & Wu, Guohua & Xu, Yifan & Sun, Xiaoqian, 2023. "Towards efficient airline disruption recovery with reinforcement learning," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 179(C).
    10. Burgos, Diana & Ivanov, Dmitry, 2021. "Food retail supply chain resilience and the COVID-19 pandemic: A digital twin-based impact analysis and improvement directions," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 152(C).
    11. Cui, Huixia & Chen, Xiangyong & Guo, Ming & Jiao, Yang & Cao, Jinde & Qiu, Jianlong, 2023. "A distribution center location optimization model based on minimizing operating costs under uncertain demand with logistics node capacity scalability," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 610(C).
    12. Brauner, Philipp & Ziefle, Martina, 2022. "Beyond playful learning – Serious games for the human-centric digital transformation of production and a design process model," Technology in Society, Elsevier, vol. 71(C).
    13. Quy Ta-Dinh & Tu-San Pham & Minh Hoàng Hà & Louis-Martin Rousseau, 2024. "A reinforcement learning approach for the online dynamic home health care scheduling problem," Health Care Management Science, Springer, vol. 27(4), pages 650-664, December.
    14. Anna Saniuk, 2022. "The Logistics 4.0 Implementation Supported by the Balanced Scorecard Method," European Research Studies Journal, European Research Studies Journal, vol. 0(1), pages 198-207.
    15. Ranasinghe, Thilini & Grosse, Eric H. & Glock, Christoph H. & Jaber, Mohamad Y., 2024. "Never too late to learn: Unlocking the potential of aging workforce in manufacturing and service industries," International Journal of Production Economics, Elsevier, vol. 270(C).
    16. Helo, Petri & Thai, Vinh V., 2024. "Logistics 4.0 – digital transformation with smart connected tracking and tracing devices," International Journal of Production Economics, Elsevier, vol. 275(C).
    17. Ying, Chengshuo & Chow, Andy H.F. & Yan, Yimo & Kuo, Yong-Hong & Wang, Shouyang, 2024. "Adaptive rescheduling of rail transit services with short-turnings under disruptions via a multi-agent deep reinforcement learning approach," Transportation Research Part B: Methodological, Elsevier, vol. 188(C).
    18. Behl, Abhishek & Sampat, Brinda & Gaur, Jighyasu & Pereira, Vijay & Laker, Benjamin & Shankar, Amit & Shi, Yangyan & Roohanifar, Mohammad, 2024. "Can gamification help green supply chain management firms achieve sustainable results in servitized ecosystem? An empirical investigation," Technovation, Elsevier, vol. 129(C).
    19. Guo, Feng & Wei, Qu & Wang, Miao & Guo, Zhaoxia & Wallace, Stein W., 2023. "Deep attention models with dimension-reduction and gate mechanisms for solving practical time-dependent vehicle routing problems," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 173(C).
    20. Chen, Yi-Ting & Sun, Edward W. & Chang, Ming-Feng & Lin, Yi-Bing, 2021. "Pragmatic real-time logistics management with traffic IoT infrastructure: Big data predictive analytics of freight travel time for Logistics 4.0," International Journal of Production Economics, Elsevier, vol. 238(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:transe:v:196:y:2025:i:c:s1366554525000493. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/600244/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.