
Advanced Motion Mode Recognition for Portable Navigation

by Mostafa Elhoushi




Institution: Queen's University
Department: Electrical & Computer Engineering
Degree: PhD
Year: 2015
Keywords: Pattern Recognition; Sensor Fusion; Inertial Navigation; Activity Recognition; Portable Navigation; Machine Learning
Record ID: 2058451
Full text PDF: http://qspace.library.queensu.ca/bitstream/1974/12790/1/Elhoushi_Mostafa_M_201503_PhD.pdf


Abstract

Portable navigation is increasingly becoming an essential part of our lives. Knowing a person's position and velocity through a portable device, such as a smartphone, tablet, smartwatch, or smartglasses, is now a routine expectation. However, portable navigation still faces major challenges: chiefly, navigation in urban canyons or indoors, where Global Navigation Satellite System (GNSS) signals are weak or unavailable, and the need to compromise on the accuracy and precision of inertial sensors in order to limit their cost and size for placement in portable consumer devices. As a result, portable navigation needs to employ a different algorithm for each mode of motion to achieve optimal results in each case. This thesis' contribution is a motion mode recognition module that detects a wide range of motion modes using micro-electro-mechanical systems (MEMS) sensors within a portable navigation device, and that is robust to device usage and device orientation. The proposed motion mode recognition module achieves good accuracy in the absence of absolute navigational signals (such as GNSS or WiFi). The motion modes detected are: stationary, walking, running, cycling, in a land-based vessel, walking in a land-based vessel, standing on a moving walkway, walking on a moving walkway, moving on stairs, taking an elevator, standing on an escalator, and walking on an escalator. The motion mode recognition module involves the following steps: data input reading, pre-processing, feature extraction, classification, and post-classification refining. In the data input reading step, signal readings are obtained from a group of sensors. In the pre-processing step, these raw signals are processed and fused into more meaningful variables. Feature extraction groups the variables into a window and extracts a set of features from that window.
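The windowing and feature-extraction steps described above can be sketched as follows. This is a minimal illustration, not the thesis' actual feature set: the window length, step size, and the particular features (statistics of orientation-independent sensor magnitudes plus a low-frequency spectral peak) are assumptions chosen for the example.

```python
import numpy as np

def extract_features(accel, gyro, window_size=128, step=64):
    """Slide a fixed-size window over synchronized tri-axial sensor streams
    and compute an illustrative feature vector per window."""
    # Fusing the three axes into magnitudes is one common way to make
    # features insensitive to device orientation (an assumption here).
    accel_mag = np.linalg.norm(accel, axis=1)
    gyro_mag = np.linalg.norm(gyro, axis=1)

    features = []
    for start in range(0, len(accel_mag) - window_size + 1, step):
        a = accel_mag[start:start + window_size]
        g = gyro_mag[start:start + window_size]
        features.append([
            a.mean(), a.std(),                   # intensity of motion
            g.mean(), g.std(),                   # amount of rotation
            np.abs(np.fft.rfft(a))[1:6].max(),   # low-frequency energy, e.g. step cadence
        ])
    return np.array(features)
```

Each row of the returned array is one feature vector, ready to be fed to a classifier in the next step of the pipeline.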
In the classification step, the feature vector is fed to a classifier to predict the motion mode, and the output is then improved by post-classification refining techniques. To train and evaluate the classifier models, more than 2,400 tests were conducted using various portable navigation devices, including smartphones, tablets, smartwatches, and head-mounted systems. The training tests involved more than 35 users of varying gender, weight, height, and age, and covered various device usages and orientations.
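One plausible form of post-classification refining is temporal smoothing of the predicted label sequence, since a person rarely switches motion modes between consecutive windows. The majority-vote filter below is an assumed illustration of this idea, not the specific refining technique used in the thesis.

```python
from collections import Counter

def smooth_predictions(labels, window=5):
    """Replace each predicted motion-mode label with the majority vote
    over a centered window, suppressing spurious one-off switches."""
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        # Window is clipped at the sequence boundaries.
        neighborhood = labels[max(0, i - half): i + half + 1]
        smoothed.append(Counter(neighborhood).most_common(1)[0][0])
    return smoothed
```

For example, an isolated "running" decision inside a run of "walking" windows would be voted away, yielding a more stable mode estimate over time.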