Mobile Robot Path Planning using Q-Learning with Guided Distance

  • Authors

    • Ee Soong Low
    • Pauline Ong
    • Cheng Yee Low
  • Published: 2018-11-30
  • DOI: https://doi.org/10.14419/ijet.v7i4.27.22480
  • Keywords: Guided distance, Mobile robot, Path planning, Q-learning, Reinforcement learning.
  • Abstract: In path planning for mobile robots, the classical Q-learning algorithm requires a high iteration count and a long time to achieve convergence, because the early stage of classical Q-learning for path planning consists mostly of exploration, involving random direction decisions. This paper proposes adding a distance aspect to the direction decision making in Q-learning, a feature intended to reduce the time Q-learning takes to converge fully. In addition, random direction decision making is retained and activated when the mobile robot becomes trapped in a local optimum, enabling it to escape the trap. The results show that the improved Q-learning with distance guidance takes longer to converge than classical Q-learning; however, the total number of steps used is lower than with classical Q-learning.
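    For illustration only, here is a minimal Python sketch of the strategy the abstract describes: greedy, distance-guided direction selection with a random fallback for escaping local traps. The grid size, goal position, reward values, Manhattan-distance metric, and the revisit-based trap test are assumptions made for this sketch, not details taken from the paper.

    ```python
    import random

    # Illustrative setup (assumed, not from the paper): empty 10x10 grid,
    # start at (0, 0), goal at (9, 9), step penalty -1, goal reward +100.
    N = 10
    GOAL = (9, 9)
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up
    ALPHA, GAMMA = 0.5, 0.9                        # learning rate, discount

    Q = {((r, c), a): 0.0 for r in range(N) for c in range(N) for a in ACTIONS}

    def manhattan(p, q):
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def moves_from(state):
        """All (action, next_state) pairs that stay inside the grid."""
        r, c = state
        return [(a, (r + a[0], c + a[1])) for a in ACTIONS
                if 0 <= r + a[0] < N and 0 <= c + a[1] < N]

    def choose(state, trapped):
        moves = moves_from(state)
        if trapped:
            # Random direction decision, activated to escape a local trap.
            return random.choice(moves)
        # Guided distance: pick the move that gets closest to the goal,
        # breaking ties by the highest Q-value.
        return min(moves, key=lambda m: (manhattan(m[1], GOAL), -Q[(state, m[0])]))

    def q_update(state, action, reward, nxt):
        # Standard Q-learning update (Watkins & Dayan, 1992).
        best_next = max(Q[(nxt, a)] for a, _ in moves_from(nxt))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    # One training episode.
    state, visited = (0, 0), set()
    while state != GOAL:
        trapped = state in visited   # revisiting a cell signals a trap (assumed rule)
        visited.add(state)
        action, nxt = choose(state, trapped)
        q_update(state, action, 100.0 if nxt == GOAL else -1.0, nxt)
        state = nxt
    ```

    In this sketch the distance term drives the robot toward the goal from the first episode, so far fewer random exploratory steps are spent than in classical epsilon-greedy Q-learning; the random branch only fires when the revisit test flags a trap.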

     

  • How to Cite

    Low, E. S., Ong, P., & Low, C. Y. (2018). Mobile Robot Path Planning using Q-Learning with Guided Distance. International Journal of Engineering & Technology, 7(4.27), 57-62. https://doi.org/10.14419/ijet.v7i4.27.22480