Deep Reward Shaping from Demonstrations

Hussein, A. and Elyan, Eyad and Gaber, Mohamed Medhat and Jayne, C. (2017) Deep Reward Shaping from Demonstrations. 2017 International Joint Conference on Neural Networks (IJCNN). ISSN 2161-4407

Abstract

Deep reinforcement learning is rapidly gaining attention due to recent successes in a variety of problems. The combination of deep learning and reinforcement learning allows for a generic learning process that does not require task-specific knowledge. However, learning from scratch becomes more difficult when tasks involve long trajectories with delayed rewards: the chances of finding the rewards through trial and error are much smaller than in tasks where the agent receives continuous feedback from the environment. This is the case in many real-life applications, which poses a limitation to current methods. In this paper we propose a novel method for combining learning from demonstrations and learning from experience to expedite and improve deep reinforcement learning. Demonstrations from a teacher are used to shape a potential reward function by training a deep supervised convolutional neural network. The shaped function is added to the reward function used in deep Q-learning (DQN) to perform off-policy training through trial and error. The proposed method is demonstrated on navigation tasks that are learned from raw pixels without utilizing any knowledge of the problem. Navigation tasks represent a typical AI problem that is relevant to many real applications and in which only delayed rewards (usually terminal) are available to the agent. The results show that using the proposed shaped rewards significantly improves the performance of the agent over standard DQN, and the improvement is more pronounced the sparser the rewards are.

Item Type: Article
Identification Number: https://doi.org/10.1109/IJCNN.2017.7965896
Dates: 3 July 2017 (Published Online)
Uncontrolled Keywords: learning, artificial intelligence, training, machine learning, navigation, neural networks, trajectory, intelligent agents
Subjects: CAH11 - computing > CAH11-01 - computing > CAH11-01-01 - computer science
Divisions: Faculty of Computing, Engineering and the Built Environment
Faculty of Computing, Engineering and the Built Environment > School of Computing and Digital Technology
Depositing User: Ian Mcdonald
Date Deposited: 24 Mar 2017 12:47
Last Modified: 22 Mar 2023 12:01
URI: https://www.open-access.bcu.ac.uk/id/eprint/4140
