Measuring sleep is a complex task, and the most accurate way of assessing it is to get a good-quality EEG signal and then use it to derive sleep metrics (sleep stages, slow-wave amplitude, etc.). Additional signals (HR, HRV, accelerometer, etc.) can be used to complement EEG and improve sleep staging accuracy.
Right now I have only two EEG devices; a third one (OpenBCI) is on the way.
Since only the ZMax gives me raw data, I will describe my approach with it here. There is a set of teaching videos on YouTube on how to score sleep from raw data. I went through my nights and scored them manually, which was an interesting way to get a deep understanding of my sleep patterns. Here is an example night that I scored manually (not perfect for sure, as I'm not a sleep specialist):
Here we can see raw EEG from both channels, XYZ accelerometer data, PPG, noise, HR, light, and body temperature. Scoring each night manually might be useful for understanding personal sleep patterns, but it seems impractical for long-term day-to-day usage. So I need automatic sleep scoring (like Dreem does), but the manufacturer's automatic scoring is behind a paywall at $15 per night.
In that situation, I decided to use an open-source sleep-scoring library made in Matthew Walker's lab and published on GitHub. The model was trained on PSG data and seems to have acceptable accuracy of ~85% compared to the consensus of five sleep experts. Remember, five sleep experts have the same ~85% agreement among themselves, so the model seems to perform within the range of sleep-specialist subjectivity.
I'm using RStudio with reticulate to run YASA for sleep scoring. There are a few considerations that need to be taken into account:
That's all I need for now. Let's see how to get this with RStudio + Python; only EDF files with EEG are needed. This can be applied to any device, not only the ZMax, and I will use the same approach for the OpenBCI when I get it.
Here are some Python functions to load the two EEG channels, get the hypnogram and per-stage probabilities for a channel, and get SWS amplitudes:
And then some R code to get things done:
Here we get stages with probabilities for both channels, plus SWS amplitude. YASA also lets us calculate a spectrogram and make some cool plots (you can check them out on GitHub). That's all for the code; let's look at some plots:
Here is the agreement between the two channels:
1 is N3 (deep), 2 is N2+N1 (light), 4 is REM, and 5 is Awake.
Agreement seems to be in an acceptable range. Since most of my sleep is on my side, there might be different pressure on the electrodes against the pillow. Also, the F7 and F8 channels might differ in signal because they are placed over different brain areas. Another possibility is that sweat glands disturb the signal at different times on each side. In any case, the sleep-stage agreement seems good (most days I'm getting 85-95% agreement).
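The per-epoch agreement itself is simple to compute; a small sketch in plain numpy (the toy hypnograms below are made up for illustration):

```python
# Sketch: fraction of 30-s epochs where both channels agree on the stage.
import numpy as np


def hypnogram_agreement(hypno_a, hypno_b):
    """Fraction of epochs where the two hypnograms have the same stage."""
    a = np.asarray(hypno_a)
    b = np.asarray(hypno_b)
    return float(np.mean(a == b))


# Toy example: 8 epochs, the channels disagree on 2 of them.
left  = ["W", "N1", "N2", "N2", "N3", "N3", "R", "W"]
right = ["W", "N1", "N2", "N3", "N3", "N3", "R", "N2"]
print(hypnogram_agreement(left, right))  # 0.75
```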
Let's build the final hypnogram from YASA by choosing, for each epoch, the stage with the biggest sum of probabilities across the two channels.
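This "sum the probabilities, take the argmax" rule can be sketched as follows, assuming each channel gives an (n_epochs x n_stages) probability array with a shared, known column order (the stage order and toy numbers here are illustrative):

```python
# Sketch: per-epoch consensus stage from two channels' probabilities.
import numpy as np

STAGES = ["N1", "N2", "N3", "R", "W"]  # assumed column order


def consensus_hypnogram(proba_left, proba_right):
    """Pick, per epoch, the stage with the largest summed probability."""
    total = np.asarray(proba_left) + np.asarray(proba_right)
    return [STAGES[i] for i in total.argmax(axis=1)]


# Toy example: two epochs, the channels slightly disagree on the second.
left  = np.array([[0.10, 0.60, 0.20, 0.05, 0.05],
                  [0.10, 0.30, 0.50, 0.05, 0.05]])
right = np.array([[0.10, 0.40, 0.40, 0.05, 0.05],
                  [0.10, 0.35, 0.45, 0.05, 0.05]])
print(consensus_hypnogram(left, right))  # ['N2', 'N3']
```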
Looks like a real hypnogram! Let's look more closely at my manual scoring, which I did before implementing auto-scoring with YASA:
Top: manual scoring. Bottom: YASA scoring.
Hmm. It seems YASA and I agree pretty well, and I'm not sure who is better here :) In a situation like that, I will hand the sleep-scoring job over to YASA, since I don't see it being clearly worse or better than my manual scoring.
The last thing to look at is SWS amplitude, which may represent the degree of sleep depth:
There isn't much to learn from these values right now, but I can clearly see more noise on the right channel, so I would check the electrode placement. First of all, I would need to build a dataset to look for day-to-day trends and learn more about what SWS amplitude represents and what insights it can provide.
Together with the post about ECG, we have the two major signals covered. The next post will be about biofeedback with BrainBay. I will also use both signals to quantify some standardized protocols, like Meditation, N-Back, the Psychomotor Vigilance Task, and Anki.
I will improve this post in the future. Feel free to leave comments and suggestions.