Deep Imitation Learning for 3D Navigation Tasks
Hussein, Ahmed and Elyan, Eyad and Gaber, Mohamed Medhat and Jayne, Chrisina (2017) Deep Imitation Learning for 3D Navigation Tasks. Neural Computing and Applications. pp. 1-16. ISSN 0941-0643
Abstract
Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning has recently gained popularity as a method to train intelligent agents, the use of deep learning in imitation learning has scarcely been explored. Imitation learning can be an efficient way to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method for learning navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep Q-networks (DQN) and asynchronous advantage actor-critic (A3C). The proposed method, as well as the reinforcement learning methods, employs deep convolutional neural networks and learns directly from raw visual input. Methods for combining learning from demonstrations and learning from experience are also investigated; this combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on four navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target while providing only delayed (usually terminal) rewards to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input, while the learning-from-experience methods fail to learn an effective policy. Moreover, active learning is shown to significantly improve the performance of the initially learned policy using a small number of active samples.
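To make the approach described above concrete, below is a minimal sketch of its two stages: a convolutional policy trained by supervised imitation on raw frames, then an uncertainty-based active-learning query that selects frames for the demonstrator to label. The network shape, the 84x84 grayscale input, the four-action space, and the entropy query criterion are illustrative assumptions, not the paper's exact design.

```python
# Minimal behavioral-cloning + active-learning sketch in PyTorch.
# All sizes and the entropy-based query rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyCNN(nn.Module):
    """Maps a raw 84x84 grayscale frame to logits over discrete actions."""
    def __init__(self, n_actions=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.head = nn.Linear(32 * 9 * 9, n_actions)  # 84x84 -> 9x9 feature map

    def forward(self, x):
        return self.head(self.conv(x).flatten(1))

def train_bc(policy, demo_frames, demo_actions, epochs=10, lr=1e-4):
    """Supervised imitation: fit the policy to (frame, action) demo pairs."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        loss = F.cross_entropy(policy(demo_frames), demo_actions)
        opt.zero_grad()
        loss.backward()
        opt.step()

def query_uncertain(policy, frames, k=32):
    """Active-learning step: return indices of the k frames where the
    predicted action distribution has the highest entropy, i.e. where
    the policy is least certain and a demonstrator label is most useful."""
    with torch.no_grad():
        probs = F.softmax(policy(frames), dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    return entropy.topk(k).indices

if __name__ == "__main__":
    policy = PolicyCNN()
    frames = torch.rand(256, 1, 84, 84)        # stand-in raw visual input
    actions = torch.randint(0, 4, (256,))      # stand-in expert action labels
    train_bc(policy, frames, actions)
    print(query_uncertain(policy, frames, k=5))  # frames to hand back to the expert
```

The queried frames would be labeled by the demonstrator and appended to the demonstration set before retraining, which is how an initially supervised policy can be cheaply refined toward situations absent from the original demonstrations.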
| Item Type: | Article |
|---|---|
| Identification Number: | 10.1007/s00521-017-3241-z |
| Dates: | 4 October 2017 (Accepted); 4 December 2017 (Published Online) |
| Uncontrolled Keywords: | Deep Learning; Convolutional Neural Networks; Learning from Demonstrations; Reinforcement Learning; Active Learning; 3D Navigation; Benchmarking |
| Subjects: | CAH11 - computing > CAH11-01 - computing > CAH11-01-01 - computer science |
| Divisions: | Faculty of Computing, Engineering and the Built Environment; Faculty of Computing, Engineering and the Built Environment > College of Computing |
| Depositing User: | Ian Mcdonald |
| Date Deposited: | 09 Oct 2017 09:50 |
| Last Modified: | 22 Mar 2023 12:01 |
| URI: | https://www.open-access.bcu.ac.uk/id/eprint/5211 |