Daily HRV measurements seem to be an important part of my QS adventures. I mostly use them to assess ANS status and to describe states like overtraining, sickness, and recovery. I combine contextual information with ANS balance to fine-tune my daily decisions.
Devices & Apps
There are several devices and apps to collect HRV; I've experimented with these setups:
- Wearables like Oura or Fitbit with nightly averages seem to have low sensitivity to short-term daily stressors (you may read Andrew Flatt's Twitter posts about nightly HRV or read my investigations 1, 2, 3). I treat nightly averages as a marker of long-term health and don't use them for daily decisions, since I have a more sensitive tool.
- Polar H10 + water + Kubios. The best choice for someone who wants simplicity.
- Polar H10 + water + Polar Equine for raw ECG data. The raw data is processed in Kubios HRV Standard. It's more complicated than just looking at one number from the Kubios app, but you get your raw data and may cook it the way you want. Desktop Kubios also provides more HRV parameters to analyse.
- Polar H10 + water + Polar Vantage V2 for the orthostatic test. No raw data, and the test takes 4 minutes. Lying for 2 minutes seems to be a waste of time (more details on topic), and after doing the 4-minute orthostatic test for a while I've switched to a standing-only test, because it takes 2 minutes (60 s stabilization and 60 s measurement) and seems to have good sensitivity to short-term stressors.
- Polar Verity Sense + Polar Sensor Logger for raw PPG data. Not practical, since PPG is too sensitive to movement. I wasn't able to get the same values as with the H10 (worn simultaneously).
- Movesense MD seems to be the best option for raw ECG data (512 Hz sampling rate). I found it by exploring Bruce Rogers' blog. The signal quality is better than the H10's, and I get zero artifacts for measurements at rest (the H10 sometimes has artifacts). Raw data can be processed in Kubios HRV or cooked with your own processing (my choice).
Raw data cooking
My current preference is Movesense MD + raw data at 512 Hz. I have 2 minutes of ECG every day, artifact-free and with a nice ECG shape. Cooking the raw data takes a few steps:
- ECG: preprocess, clean, assess quality, detect R-peaks and calculate RR intervals
- RRi: find artifacts, correct them, detrend
- HRV: calculate metrics (like RMSSD) from the trended / detrended data
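As a minimal sketch of the last step (not the Neurokit implementation), RMSSD can be computed from a cleaned RR series with plain NumPy:

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    return float(np.sqrt(np.mean(np.diff(rr) ** 2)))

# toy RR series in milliseconds
print(round(rmssd([800, 810, 790, 805, 795]), 2))  # 14.36
```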
There are ready-to-use libraries like Neurokit to do all of these steps, but some methodological choices still have to be made. I mostly follow the Kubios approach, since they are one of the current HRV software leaders. Their methodology is explained here and in their software manual. First of all we have to get the raw ECG signal into an array and then pass it to Neurokit functions.
ECG cleaning is done with the Pan-Tompkins algorithm (1985), peak detection with the Neurokit method, and artifact correction (to get a clean RR signal) with the Kubios method. The sampling frequency is 512 Hz for Movesense MD and 130 Hz for Polar H10.

```
ecg_clean(ecg, sampling_rate=sample.freq, method="pantompkins1985")
ecg_findpeaks(ecg.clean, sampling_rate=sample.freq, method="neurokit")
signal_fixpeaks(peaks, sampling_rate=sample.freq, method="kubios", iterative=T, show=F)
```
If we only have RR intervals and don't have raw ECG, that's fine: just skip ecg_clean and ecg_findpeaks and pass the RRi to signal_fixpeaks. For H10 RR intervals the sampling frequency should be set to 1000.
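Note that signal_fixpeaks works on peak locations, so an RR series first has to be turned into cumulative peak positions. A small sketch of that conversion (assuming RR intervals in milliseconds, so at 1000 Hz one sample equals one millisecond):

```python
import numpy as np

def rr_to_peaks(rr_ms, start=0):
    """Convert RR intervals (ms) to cumulative R-peak sample indices at 1000 Hz."""
    rr = np.asarray(rr_ms, dtype=int)
    return start + np.cumsum(rr)

peaks = rr_to_peaks([800, 810, 790])
print(peaks.tolist())  # [800, 1610, 2400]
```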
Now we have cleaned RR intervals, and the next step is to remove slow stationary trends (detrending). I prefer Smoothness Priors with lambda = 500, as in Kubios. It's broken in Neurokit, and I've found a fix for it:
```python
import numpy as np
import scipy.sparse

def signal_detrend_tarvainen2002(signal, regularization=500):
    # Smoothness priors detrending (Tarvainen et al., 2002), as used in Kubios
    N = len(signal)
    identity = np.eye(N)
    B = np.dot(np.ones((N, 1)), np.array([[1, -2, 1]]))
    D_2 = scipy.sparse.dia_matrix((B.T, [0, 1, 2]), shape=(N - 2, N))
    inv = np.linalg.inv(identity + regularization**2 * D_2.T @ D_2)
    z_stat = (identity - inv) @ signal
    trend = np.squeeze(np.asarray(signal - z_stat))
    return signal - trend
```
This is Python code; I call it from RStudio because for now I'm an R guy :) In the future I will learn Python and may move some of my R code there. The reticulate package allows importing Python packages and calling them from R.
Once detrending is done, we can calculate HRV parameters. HR should be calculated from the trended data, while most other metrics should be calculated from the detrended data. Neurokit calculates most HRV metrics with a single function: hrv(peaks, sampling_rate=sample.freq).
At this step we have most of the HRV metrics for our measurement. We can also loop through timestamps and call the hrv() function on part of the signal (a 60-second window) to get time-varying HRV. So now we have most of what Kubios does and even more (the time-varying feature is only in the paid version, which costs ~$380 per year).
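The windowing idea can be sketched like this (pure NumPy with RMSSD as the metric instead of the full hrv() call; the window length and step size here are assumptions):

```python
import numpy as np

def rmssd(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    return float(np.sqrt(np.mean(np.diff(rr) ** 2)))

def time_varying(rr_ms, window_s=60, step_s=10):
    """Slide a window over the RR series and compute RMSSD per window."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0  # beat timestamps in seconds
    out = []
    start = 0.0
    while start + window_s <= t[-1]:
        mask = (t >= start) & (t < start + window_s)
        if mask.sum() > 1:
            out.append((start, rmssd(rr[mask])))
        start += step_s
    return out

# toy example: ~100 s of alternating 800/820 ms beats
for start, value in time_varying([800, 820] * 62)[:2]:
    print(start, round(value, 1))
```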
Exercise
We can use Movesense MD or Polar H10 during exercise to record data and analyze it afterwards. During exercise we get more artifacts and might need more aggressive artifact detection. I've made a simple thresholding function which works in a similar way to the Kubios thresholds. You can read my comments or the Kubios documentation.
```r
# requires the zoo package (na.spline)
hrv.peaks.threshold <- function(peaks, sample.freq, filter.threshold = .25, hr.rr.slope = -.005,
                                median.filter.window = 7, f.min.hr = 40, f.max.hr = 200) {
  # adjust threshold - when HR increases the threshold should be decreased and vice versa
  # http://www.kubios.com/downloads/Kubios_HRV_Users_Guide_3_1_0.pdf
  # 5.2.1 Artefact correction methods
  # That is, thresholds shown above are for 60 beats/min heart rate, but for higher heart rates
  # the thresholds are smaller (because the variability is expected to decrease when HR increases)
  # and vice versa for lower heart rates.
  # glm <- lm(rr ~ hr, hrv.p); hr.rr.slope <- as.numeric(glm$coefficients["hr"])/1000; summary(glm)
  rrs <- data.frame(peaks = peaks, rr = c(diff(c(0, peaks))))
  rrs$rr.mf <- runmed(rrs$rr, median.filter.window) # rolling median RR
  rrs$hr <- sample.freq*60/(rrs$rr) # HR
  rrs$hr.mf <- sample.freq*60/(rrs$rr.mf) # rolling median HR
  rrs$rr.diff.rr.mf <- abs(rrs$rr - rrs$rr.mf) # abs diff of each RR vs the rolling median
  # adjusted threshold for bad RRs: K adj = (median RR + (60 - median HR) * slope)
  # a lower threshold means more aggressive correction
  # for example HR = 195: (60/195 + (195-60)*-.007) = -.64 => .25
  # for example HR = 160: (60/160 + (160-60)*-.007) = -.325 => .25
  # for example HR = 100: (60/100 + (100-60)*-.007) = 0.32
  # for example HR = 40: (60/40 + (40-60)*-.007) = 1.64
  # .25 is the medium threshold in Kubios
  rrs$mf.th <- (rrs$rr.mf + (rrs$hr.mf - sample.freq*60)*hr.rr.slope) * filter.threshold
  # find HRs outside the specified physiological range
  hr.bad.i <- as.numeric(rownames(rrs[(rrs$hr <= f.min.hr) | (rrs$hr >= f.max.hr), ]))
  # find RRs which differ from the rolling median by more than the threshold
  rr.bad.i <- as.numeric(rownames(rrs[rrs$rr.diff.rr.mf >= rrs$mf.th, ]))
  bad.i <- unique(c(hr.bad.i, rr.bad.i))
  # replace bad HRs / RRs with NA and interpolate
  rrs$rr.f <- rrs$rr; if (length(bad.i) > 0) rrs[c(bad.i), ]$rr.f <- NA
  rrs$rr.fs <- na.spline(rrs$rr.f, method = "natural")
  rrs$peaks.clean <- rrs[1, "peaks"] - rrs[1, "rr.fs"] + cumsum(rrs$rr.fs)
  return(list(peaks.clean = rrs$peaks.clean, bad.i = bad.i, hr.bad.i = hr.bad.i, rr.bad.i = rr.bad.i))
}
```
So for exercise an additional step is added at the end of processing: hrv.peaks.threshold(peaks, sample.freq) will do the job and filter more of the exercise-induced artifacts.
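The core idea (flag beats whose RR deviates from a rolling median by more than a threshold, then interpolate over them) can be illustrated in a few lines. This is a simplified sketch, not the full function above: it uses a fixed threshold instead of the HR-adjusted one.

```python
import numpy as np

def flag_artifacts(rr_ms, window=7, threshold_ms=150):
    """Flag RR intervals deviating from the rolling median by > threshold."""
    rr = np.asarray(rr_ms, dtype=float)
    pad = window // 2
    padded = np.pad(rr, pad, mode="edge")
    medians = np.array([np.median(padded[i:i + window]) for i in range(len(rr))])
    return np.abs(rr - medians) > threshold_ms

rr = [800, 805, 810, 1600, 800, 795, 805, 810]  # one missed-beat artifact
print(np.where(flag_artifacts(rr))[0].tolist())  # [3]
```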
Breathing Rate
We can also use the ECG/RR intervals to calculate the breathing rate. This is called ECG Derived Respiration (EDR), and it is based on Respiratory Sinus Arrhythmia (RSA), which influences the ECG amplitude / RR intervals. Kubios has this detection in their software, and I find EDR a useful metric, especially during exercise. Neurokit has a function to detect EDR, so it's easy to add at the end of the processing framework. I made 3 functions: the first gets a respiration signal from the RR peaks, the second finds the peaks and troughs of the respiration signal, and the third calculates breathing rate = (n.peaks + n.troughs) / 2 over a 60-second period:
```r
# nk is neurokit2 imported via reticulate; also requires the pracma package
hrv.edr.rsp.from.peaks <- function(peaks, len, sample.freq, algo = 'vangent2019') {
  ecg_rate <- nk$ecg_rate(peaks, sampling_rate=sample.freq, desired_length=len, interpolation_method='monotone_cubic')
  ecg_rates <- data.frame(i = 1:length(ecg_rate), rate = ecg_rate)
  rsp <- nk$ecg_rsp(ecg_rate, sampling_rate=sample.freq, method=algo)
  rsp_rates <- data.frame(i = (1:length(rsp)), r = rsp)
  return(rsp_rates)
}

hrv.edr.rsp.find.peaks <- function(rsp_rates, sample.freq) {
  dist <- 60/100*sample.freq # assume respiration is no faster than 0.5 s ins/exp, i.e. 100 br/min
  rsp_rates$r <- detrend(rsp_rates$r)
  rsp_rates$r <- as.numeric(scale(rsp_rates$r))
  peaks <- pracma::findpeaks(rsp_rates$r, minpeakdistance = dist)
  mins <- pracma::findpeaks(-1*rsp_rates$r, minpeakdistance = dist)
  rrp <- rsp_rates[peaks[, 2], ]; rrm <- rsp_rates[mins[, 2], ]
  return(list(rrp = rrp, rrm = rrm))
}

hrv.edr.respiration.from.rsp <- function(rrp, rrm, rrs, sample.freq) {
  rrpm <- c(rrp, rrm) # indices of respiration peaks and troughs
  rsa <- length(rrpm)/2 # (n.peaks + n.troughs) / 2 = breath count
  window <- rrs[which.max(rrs)] - rrs[which.min(rrs)] # measurement length in samples
  return(round(rsa * (60/(window/sample.freq)), 2))
}
```
Finally we get the breathing rate:

```r
rsp_rates <- hrv.edr.rsp.from.peaks(ecg.peaks.i, length(ecg.peaks.i$ECG_R_Peaks), sample.freq, 'sarkar2015')
rsp_peaks <- hrv.edr.rsp.find.peaks(rsp_rates, sample.freq)
edr.rrp <- rsp_peaks[[1]]; edr.rrm <- rsp_peaks[[2]]; edr.rrs <- rsp_rates
respiratory.rate <- hrv.edr.respiration.from.rsp(edr.rrp$i, edr.rrm$i, edr.rrs$i, sample.freq)
```
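The counting logic of the last function can be checked on a synthetic respiration signal (a pure NumPy sine wave; the extremum detection here is a naive local-maximum check, not pracma::findpeaks):

```python
import numpy as np

def breathing_rate(rsp, fs):
    """Breathing rate in breaths/min as (n_peaks + n_troughs) / 2 over the window."""
    x = np.asarray(rsp, dtype=float)
    # naive local extrema detection
    peaks = np.sum((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))
    troughs = np.sum((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))
    breaths = (peaks + troughs) / 2
    duration_s = len(x) / fs
    return breaths * 60 / duration_s

fs = 10  # Hz
t = np.arange(0, 60, 1 / fs)        # 60 s window
rsp = np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz breathing => 15 breaths/min
print(round(breathing_rate(rsp, fs)))  # 15
```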
OK, now we have worked out how to cook raw ECG into HRV metrics, salted with ECG Derived Respiration. We can do that even if we only have RR intervals without the raw ECG.
I will improve this post in the future by adding a GitHub repo with the code and some plotting.