Active continual learning for planning and navigation

Proc. ICML Workshop on Real World Experiment Design and Active Learning, 2020

Ahmed H Qureshi, Yinglong Miao, Michael C Yip

Abstract: Recent developments have led to imitation-based planners that learn to solve general motion planning and navigation problems by imitating expert demonstrations. These planners are known for their fast computational speed during online planning. However, training them offline requires a large number of expert demonstrations, which makes them impractical when data is expensive to generate and arrives in streams on an as-needed basis. For instance, in semi-autonomous driving, demonstrations may only be provided on request for given planning problems. To address this challenge, we present an active continual learning approach that enables learning-based motion planners to learn from streaming data and actively ask for expert demonstrations when needed, drastically reducing the data required for training. Our results indicate that the proposed method consumes about 80% less data than traditional approaches while exhibiting comparable planning performance.
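
To illustrate the general idea of querying the expert only when the learned planner is unreliable, here is a minimal sketch of an active continual learning loop. It is not the method from the paper; Planner, expert_demo, and is_reliable are hypothetical placeholders standing in for a learned planner, an on-request expert, and an uncertainty or feasibility check.

import random

class Planner:
    """Toy stand-in for a learned motion planner."""
    def __init__(self):
        self.memory = []  # replay buffer of expert demonstrations

    def plan(self, problem):
        # Return a candidate path; here just a straight-line guess for illustration.
        return [problem["start"], problem["goal"]]

    def is_reliable(self, problem, path):
        # Placeholder confidence check; a real system would use a collision
        # or feasibility test, or an uncertainty estimate.
        return random.random() > 0.8

    def update(self, demonstration):
        # Continual update: store the demo and, in a real system, fine-tune on it.
        self.memory.append(demonstration)

def expert_demo(problem):
    # Stand-in for an on-request expert (e.g., a classical planner or a human).
    return {"problem": problem, "path": [problem["start"], problem["goal"]]}

def active_continual_loop(planner, problem_stream):
    queries = 0
    for problem in problem_stream:
        path = planner.plan(problem)
        if not planner.is_reliable(problem, path):
            # Actively request a demonstration only when needed.
            planner.update(expert_demo(problem))
            queries += 1
    return queries

if __name__ == "__main__":
    stream = [{"start": (0, 0), "goal": (i, i)} for i in range(1, 101)]
    asked = active_continual_loop(Planner(), stream)
    print(f"Expert queried on {asked}/100 streaming problems")

The key design point this sketch reflects is that the expert is consulted per problem in the stream rather than ahead of time, which is what reduces the demonstration budget.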

Qureshi et al. (2020) Active continual learning for planning and navigation, Proc. ICML Workshop on Real World Experiment Design and Active Learning, pp. 1-7.

Pub Link: http://realworldml.github.io/files/cr/33_acl_qureshi2020.pdf