Human trajectory prediction based on multi-sensor fusion
Date of Issue: 2019
School of Electrical and Electronic Engineering
Since the beginning of the 21st century, robotics and the autonomous-driving market have developed rapidly. An autonomous vehicle is expected to analyze pedestrian behavior and predict future trajectories so that it can plan its own motion safely and efficiently. This dissertation proposes an algorithm for predicting future human positions from historical positions. An unmanned ground vehicle equipped with a stereo camera and a 3D LiDAR is used as the platform. The approach is divided into two steps: human coordinate extraction and future position prediction. In the first step, the human coordinate model combines the human center-of-gravity coordinate and the depth information. On the one hand, the center-of-gravity coordinate is computed by averaging the coordinates of six key points obtained from a pose estimation algorithm. On the other hand, the human depth information is acquired by averaging all LiDAR depth values that fall within the region of the human torso. In the second step, a vector superposition method is used to predict the future positions of the pedestrian. For the experiments, a video dataset containing several scenes of pedestrian movement from a first-person perspective is collected. As a result, this dissertation builds a future position prediction system and a safety distance warning system, which show satisfactory results in typical pedestrian scenes.
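The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the dissertation's implementation: the function names, the choice of six key points, and the interpretation of "vector superposition" as extrapolating by the mean displacement of the historical track are all assumptions made for clarity.

```python
import numpy as np

def gravity_coordinate(keypoints):
    """Average the image coordinates of six key points (hypothetical
    choice, e.g. shoulders, elbows, hips) from a pose estimator to get
    the human center-of-gravity coordinate."""
    return np.mean(np.asarray(keypoints, dtype=float), axis=0)

def torso_depth(lidar_depths):
    """Average all LiDAR depth values that fall within the torso region."""
    return float(np.mean(lidar_depths))

def predict_positions(history, n_future=3):
    """Assumed vector-superposition step: add the mean displacement
    vector of the historical positions to the last observed position,
    repeated n_future times."""
    pts = np.asarray(history, dtype=float)
    step = np.mean(np.diff(pts, axis=0), axis=0)  # mean per-frame displacement
    return [pts[-1] + step * (k + 1) for k in range(n_future)]
```

For example, a pedestrian observed at positions (0, 0), (1, 0), (2, 0) would be extrapolated to (3, 0) and (4, 0); comparing `torso_depth` against a threshold would then drive a safety-distance warning.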
DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics