Robust remote heart rate estimation from multiple asynchronous noisy channels using autoregressive model with Kalman filter
Main Authors:
Format: Article
Published: Elsevier, 2019
Subjects:
Online Access: http://eprints.um.edu.my/19953/ , https://doi.org/10.1016/j.bspc.2018.09.007
Institution: Universiti Malaya
Summary: Remote heart rate measurement has many powerful applications, such as measuring stress in the workplace and analyzing the impact of cognitive tasks on breathing and heart rate variability (HRV). Although many methods are available to measure heart rate remotely from face videos, most of them only work well on stationary subjects under well-controlled conditions, and their performance degrades significantly under subject motion and illumination variation. We propose a novel algorithm to estimate heart rate. It can also differentiate between a photo of a human face and an actual human face, meaning that it can detect false signals and skip them. The method obtains regions of interest (ROIs) using facial landmarks, rectifies illumination with a Normalized Least Mean Square (NLMS) adaptive filter, and eliminates non-rigid motions based on the standard deviation of fixed-length segments of the signal. It then applies the RADICAL technique to extract independent subcomponents. The heart rate for each subcomponent is estimated by frequency analysis, selecting the frequency with the highest magnitude. A two-step data fusion method is also introduced that combines the current and previously measured heart rates to produce a more accurate result. In this paper, we evaluate our algorithm on two self-collected databases and the DEAP database. The results of three experiments demonstrate that our algorithm substantially outperforms all previous methods. Moreover, we investigate its behavior under challenging conditions, including subject motion and illumination variation, and show that it significantly reduces the influence of illumination interference and rigid motions. The results also indicate that the algorithm is suitable for online use.
Finally, the application of our algorithm in search and rescue scenarios using drones is considered, and an experiment is conducted to investigate the algorithm's potential to be embedded in drones.
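Two of the stages the summary describes — estimating heart rate as the highest-magnitude frequency of a pulse signal, and fusing the current measurement with the previous estimate — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the band limits, noise variances, and function names are assumptions, and the paper's actual fusion uses an autoregressive model with a Kalman filter rather than the scalar Kalman step shown here.

```python
import numpy as np

def estimate_hr_bpm(signal, fs, lo=0.7, hi=4.0):
    """Return heart rate (bpm) as the highest-magnitude frequency in [lo, hi] Hz.

    lo/hi bound a plausible pulse band (42-240 bpm); these limits are
    illustrative, not taken from the paper.
    """
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq

def kalman_fuse(prev_hr, prev_var, meas_hr, meas_var=25.0, process_var=4.0):
    """One scalar Kalman step: blend the previous estimate with a new measurement."""
    pred_var = prev_var + process_var               # predict: uncertainty grows
    gain = pred_var / (pred_var + meas_var)         # Kalman gain
    hr = prev_hr + gain * (meas_hr - prev_hr)       # update toward measurement
    var = (1.0 - gain) * pred_var
    return hr, var

# Usage: a synthetic 1.2 Hz (72 bpm) pulse sampled at 30 fps for 10 s.
fs = 30.0
t = np.arange(0, 10, 1.0 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)
hr = estimate_hr_bpm(pulse, fs)
fused, fused_var = kalman_fuse(prev_hr=70.0, prev_var=9.0, meas_hr=hr)
```

The fused estimate lands between the previous value and the new measurement, weighted by their relative uncertainties, which is the intuition behind combining "current and previously measured heart rates" for a more stable output.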