
Sleep Stage Classification from Polysomnography (PSG) Data


Sleep stage identification, also known as sleep scoring or sleep stage classification, is important for a better understanding of the sleep EEG. In fact, building the hypnogram, i.e., the sequence of sleep stages, is commonly used as a preliminary examination to diagnose sleep disorders such as insomnia or sleep apnea. The examination generally proceeds as follows. First, polysomnography (PSG) is used to record electroencephalography (EEG) signals from different locations on the subject's head, together with electrooculography (EOG), electromyography (EMG), and other signals. Then, a sleep expert inspects the different time series recorded over the night and assigns a sleep stage to each 30-second segment, following a reference nomenclature such as the American Academy of Sleep Medicine (AASM) rules or the Rechtschaffen and Kales (R&K) rules.
According to the AASM rules, five sleep stages are defined: Wake (W), rapid eye movement (REM), and the non-REM stages N1, N2, and N3 (the latter also called slow-wave sleep or deep sleep). They are characterized by different temporal and frequency patterns, and they occupy different proportions of the night; transitional stages such as N1, for example, occur less frequently than REM or N2. The AASM rules also describe transitions between stages, which may influence the scorer's final decision. In practice, some transitions are canceled or reinforced depending on the occurrence of particular events, such as arousals, K-complexes, or spindles around the N1-N2 transition. Although the inspection performed by a sleep expert yields very valuable information, sleep scoring is a tedious and time-consuming task, and it is subject to the scorer's subjectivity and variability.
Automatic sleep scoring methods have therefore attracted the interest of many researchers. From a statistical machine learning perspective, the problem is an imbalanced multiclass prediction problem. State-of-the-art automatic methods fall into two categories, depending on whether the features used for classification are extracted with expert knowledge or learned from the raw signals. The first category relies on prior knowledge about the signals and events. The second learns appropriate feature representations from transformed data, or directly from the raw data with convolutional neural networks. Recently, a method was proposed that performs sleep stage classification of EEG signals with adversarial deep neural networks.
One of the main challenges for statistical machine learning is the imbalanced nature of the classification task, which has important practical consequences. In general, sleep stages such as N1 are much rarer than N2. When a prediction algorithm is trained on very imbalanced classes, it typically tends not to predict the rarest ones. One way to address this is to re-weight the model's loss function; with the online training procedures used for neural networks, another option is to feed the network with balanced batches that contain roughly as many data points from each class. A second challenge concerns the way transitions, or transition rules, are handled: since transitions may affect the scorer's final decision, a prediction model can take them into account to improve its performance. This can be done by feeding the final classifier with features from neighboring epochs, which is what is referred to as temporal sleep stage classification (see the sketch below).
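As an illustration of these two ideas, the following minimal sketch (not part of the tutorial's own code) shows how class weights can be computed with scikit-learn and how features from neighboring epochs can be concatenated; X and y here are placeholder arrays standing in for per-epoch features and stage labels.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# placeholder data standing in for per-epoch features and stage labels
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                                # 100 epochs x 5 features
y = rng.choice([1, 2, 3, 4, 5], size=100, p=[.2, .05, .45, .15, .15])

# 1) re-weight the loss: rare stages (e.g. N1) receive a larger weight
classes = np.unique(y)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y)
class_weight = dict(zip(classes, weights))   # e.g. RandomForestClassifier(class_weight=class_weight)

# 2) temporal context: concatenate each epoch's features with those of its
#    neighbors so a classifier can exploit transition rules
#    (np.roll wraps around at the edges; a real pipeline would handle the boundaries)
X_context = np.concatenate([np.roll(X, 1, axis=0), X, np.roll(X, -1, axis=0)],
                           axis=1)
print(class_weight, X_context.shape)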
The reference cited below tackles temporal sleep stage classification with an end-to-end deep learning approach on multivariate time series; this article demonstrates a simpler, feature-based pipeline on the same kind of data. Using two subjects from the Sleep Physionet dataset, Alice and Bob (names chosen for the demonstration), it shows how to predict Bob's sleep stages from Alice's data. The problem is treated as a supervised multiclass classification task: for each 30-second chunk of data, predict one of the 5 possible sleep stages. The analysis is carried out with the Python toolkit MNE.
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets.sleep_physionet.age import fetch_data
from mne.time_frequency import psd_welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
Load the data
MNE-Python provides mne.datasets.sleep_physionet.age.fetch_data(), which conveniently downloads data from the Sleep Physionet dataset. Given a list of subjects and recordings, the fetcher downloads the data and provides a pair of files for each subject:
- PSG.edf, containing the polysomnography recording.
- Hypnogram.edf, containing the annotations recorded by an expert.
The two are combined into a single mne.io.Raw object, from which events are extracted based on the annotation descriptions in order to obtain epochs.
Read the PSG data and the hypnogram to create a raw object
ALICE, BOB = 0, 1
[alice_files, bob_files] = fetch_data(subjects=[ALICE, BOB], recording=[1])

raw_train = mne.io.read_raw_edf(alice_files[0], stim_channel='Event marker',
                                misc=['Temp rectal'])
annot_train = mne.read_annotations(alice_files[1])

raw_train.set_annotations(annot_train, emit_warning=False)

# plot some data
# scalings were chosen manually to allow for simultaneous visualization of
# different channel types in this specific dataset
raw_train.plot(start=60, duration=60,
               scalings=dict(eeg=1e-4, resp=1e3, eog=1e-4, emg=1e-7,
                             misc=1e-1))

OUT:
Using default location ~/mne_data for PHYSIONET_SLEEP...
Extracting EDF parameters from /home/circleci/mne_data/physionet-sleep-data/SC4001E0-PSG.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Extract 30 s events from annotations
The Sleep Physionet dataset is annotated with 8 labels: Wake (W), Stage 1, Stage 2, Stage 3, Stage 4 (corresponding to the range from light to deep sleep), REM sleep (R), where REM stands for rapid eye movement, Movement (M), and Stage (?) for segments that were not scored. Only 5 stages are used here: Wake (W), Stage 1, Stage 2, Stage 3/4, and REM sleep (R). To do so, the event_id parameter of mne.events_from_annotations() is used to select the events of interest and to associate an event identifier with each of them.
In addition, the recordings contain long Wake (W) periods before and after each night. To limit the impact of class imbalance, each recording is trimmed so that only 30 minutes of wake before the first sleep stage and 30 minutes after the last sleep stage are kept.
annotation_desc_2_event_id = {'Sleep stage W': 1,
                              'Sleep stage 1': 2,
                              'Sleep stage 2': 3,
                              'Sleep stage 3': 4,
                              'Sleep stage 4': 4,
                              'Sleep stage R': 5}

# keep last 30-min wake events before sleep and first 30-min wake events after
# sleep and redefine annotations on raw data
annot_train.crop(annot_train[1]['onset'] - 30 * 60,
                 annot_train[-2]['onset'] + 30 * 60)
raw_train.set_annotations(annot_train, emit_warning=False)

events_train, _ = mne.events_from_annotations(
    raw_train, event_id=annotation_desc_2_event_id, chunk_duration=30.)

# create a new event_id that unifies stages 3 and 4
event_id = {'Sleep stage W': 1,
            'Sleep stage 1': 2,
            'Sleep stage 2': 3,
            'Sleep stage 3/4': 4,
            'Sleep stage R': 5}

# plot events
fig = mne.viz.plot_events(events_train, event_id=event_id,
                          sfreq=raw_train.info['sfreq'],
                          first_samp=events_train[0, 0])

# keep the color-code for further plotting
stage_colors = plt.rcParams['axes.prop_cycle'].by_key()['color']

OUT:
Used Annotations descriptions: ['Sleep stage 1', 'Sleep stage 2', 'Sleep stage 3', 'Sleep stage 4', 'Sleep stage R', 'Sleep stage W']
Create Epochs from the events
tmax = 30. - 1. / raw_train.info['sfreq']  # tmax is included

epochs_train = mne.Epochs(raw=raw_train, events=events_train,
                          event_id=event_id, tmin=0., tmax=tmax, baseline=None)
print(epochs_train)
OUT:
Not setting metadata
841 matching events found
No baseline correction applied
0 projection items activated
<Epochs | 841 events (good & bad), 0 - 29.99 sec, baseline off, ~12 kB, data not loaded,
'Sleep stage W': 188
'Sleep stage 1': 58
'Sleep stage 2': 250
'Sleep stage 3/4': 220
'Sleep stage R': 125>
Apply the same steps to Bob's data (the test data)
raw_test = mne.io.read_raw_edf(bob_files[0], stim_channel='Event marker',
                               misc=['Temp rectal'])
annot_test = mne.read_annotations(bob_files[1])
annot_test.crop(annot_test[1]['onset'] - 30 * 60,
                annot_test[-2]['onset'] + 30 * 60)
raw_test.set_annotations(annot_test, emit_warning=False)
events_test, _ = mne.events_from_annotations(
    raw_test, event_id=annotation_desc_2_event_id, chunk_duration=30.)
epochs_test = mne.Epochs(raw=raw_test, events=events_test, event_id=event_id,
                         tmin=0., tmax=tmax, baseline=None)
print(epochs_test)
OUT:
Extracting EDF parameters from /home/circleci/mne_data/physionet-sleep-data/SC4011E0-PSG.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Used Annotations descriptions: ['Sleep stage 1', 'Sleep stage 2', 'Sleep stage 3', 'Sleep stage 4', 'Sleep stage R', 'Sleep stage W']
Not setting metadata
1103 matching events found
No baseline correction applied
0 projection items activated
<Epochs | 1103 events (good & bad), 0 - 29.99 sec, baseline off, ~12 kB, data not loaded,
'Sleep stage W': 157
'Sleep stage 1': 109
'Sleep stage 2': 562
'Sleep stage 3/4': 105
'Sleep stage R': 170>
Feature engineering
Looking at the power spectral density (PSD) plots of the epochs for the different sleep stages, we can see that the stages have distinct signatures, and that these signatures are similar between Alice's and Bob's data. In the rest of this section, EEG features are created from the relative power in specific frequency bands to capture these differences between sleep stages.
# visualize Alice vs. Bob PSD by sleep stage.
fig, (ax1, ax2) = plt.subplots(ncols=2)

# iterate over the subjects
stages = sorted(event_id.keys())
for ax, title, epochs in zip([ax1, ax2],
                             ['Alice', 'Bob'],
                             [epochs_train, epochs_test]):

    for stage, color in zip(stages, stage_colors):
        epochs[stage].plot_psd(area_mode=None, color=color, ax=ax,
                               fmin=0.1, fmax=20., show=False,
                               average=True, spatial_colors=False)
    ax.set(title=title, xlabel='Frequency (Hz)')
ax2.set(ylabel='µV^2/Hz (dB)')
ax2.legend(ax2.lines[2::3], stages)
plt.show()

OUT:
Loading data for 58 events and 3000 original time points ...
0 bad epochs dropped
Using multitaper spectrum estimation with 7 DPSS windows
Loading data for 250 events and 3000 original time points ...
0 bad epochs dropped
Using multitaper spectrum estimation with 7 DPSS windows
Loading data for 220 events and 3000 original time points ...
0 bad epochs dropped
Using multitaper spectrum estimation with 7 DPSS windows
Loading data for 125 events and 3000 original time points ...
0 bad epochs dropped
Using multitaper spectrum estimation with 7 DPSS windows
Loading data for 188 events and 3000 original time points ...
0 bad epochs dropped
Using multitaper spectrum estimation with 7 DPSS windows
Loading data for 109 events and 3000 original time points ...
0 bad epochs dropped
Using multitaper spectrum estimation with 7 DPSS windows
Loading data for 562 events and 3000 original time points ...
0 bad epochs dropped
Using multitaper spectrum estimation with 7 DPSS windows
Loading data for 105 events and 3000 original time points ...
0 bad epochs dropped
Using multitaper spectrum estimation with 7 DPSS windows
Loading data for 170 events and 3000 original time points ...
0 bad epochs dropped
Using multitaper spectrum estimation with 7 DPSS windows
Loading data for 157 events and 3000 original time points ...
0 bad epochs dropped
Using multitaper spectrum estimation with 7 DPSS windows
Design a scikit-learn transformer from a Python function
A function is created that extracts EEG features based on the relative power in specific frequency bands, so that sleep stages can be predicted from the EEG signals.
def eeg_power_band(epochs):
    """EEG relative power band feature extraction.

    This function takes an ``mne.Epochs`` object and creates EEG features based
    on relative power in specific frequency bands that are compatible with
    scikit-learn.

    Parameters
    ----------
    epochs : Epochs
        The data.

    Returns
    -------
    X : numpy array of shape [n_samples, 5 * n_channels]
        Transformed data.
    """
    # specific frequency bands
    FREQ_BANDS = {"delta": [0.5, 4.5],
                  "theta": [4.5, 8.5],
                  "alpha": [8.5, 11.5],
                  "sigma": [11.5, 15.5],
                  "beta": [15.5, 30]}

    psds, freqs = psd_welch(epochs, picks='eeg', fmin=0.5, fmax=30.)
    # Normalize the PSDs
    psds /= np.sum(psds, axis=-1, keepdims=True)

    X = []
    for fmin, fmax in FREQ_BANDS.values():
        psds_band = psds[:, :, (freqs >= fmin) & (freqs < fmax)].mean(axis=-1)
        X.append(psds_band.reshape(len(psds), -1))

    return np.concatenate(X, axis=1)
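As a quick sanity check (not part of the original tutorial), the function can also be called directly on an Epochs object; the resulting feature matrix has one row per epoch and one column per (EEG channel, frequency band) pair.
X_train = eeg_power_band(epochs_train)   # shape: (n_epochs, n_eeg_channels * 5)
print(X_train.shape)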
Multiclass classification workflow using scikit-learn
To predict Bob's sleep stages from Alice's data with as little boilerplate code as possible, two key features of scikit-learn are used: Pipeline and FunctionTransformer. A scikit-learn Pipeline chains estimators as a sequence of transforms followed by a final estimator, while FunctionTransformer turns a Python function into an estimator-compatible object. In this way, a scikit-learn estimator can be created that takes mne.Epochs as input.
pipe = make_pipeline(FunctionTransformer(eeg_power_band, validate=False),
                     RandomForestClassifier(n_estimators=100, random_state=42))

# Train
y_train = epochs_train.events[:, 2]
pipe.fit(epochs_train, y_train)

# Test
y_pred = pipe.predict(epochs_test)

# Assess the results
y_test = epochs_test.events[:, 2]
acc = accuracy_score(y_test, y_pred)

print("Accuracy score: {}".format(acc))
OUT:
Loading data for 841 events and 3000 original time points ...
0 bad epochs dropped
Effective window size : 2.560 (s)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 1.2s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 1.2s finished
Loading data for 1103 events and 3000 original time points ...
0 bad epochs dropped
Effective window size : 2.560 (s)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 1.6s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 1.6s finished
Accuracy score: 0.641885766092475
The output shows that Bob's sleep stages can be predicted from Alice's data with an accuracy of about 64.2%.
Further analysis of the data
We can check the confusion matrix and the classification report.
print(confusion_matrix(y_test, y_pred))
OUT:
[[156   0   1   0   0]
 [ 80   4   7   2  16]
 [ 85  17 369  33  58]
 [  0   0   5 100   0]
 [ 54   3  34   0  79]]
print(classification_report(y_test, y_pred, target_names=event_id.keys()))
OUT:
                 precision    recall  f1-score   support

  Sleep stage W       0.42      0.99      0.59       157
  Sleep stage 1       0.17      0.04      0.06       109
  Sleep stage 2       0.89      0.66      0.75       562
Sleep stage 3/4       0.74      0.95      0.83       105
  Sleep stage R       0.52      0.46      0.49       170

       accuracy                           0.64      1103
      macro avg       0.55      0.62      0.54      1103
   weighted avg       0.68      0.64      0.63      1103
The classification report shows, for each sleep stage, the precision, recall, F1-score, and support (the number of test epochs of that stage). For example, REM sleep (R) is predicted with a precision of 52%, a recall of 46%, and an F1-score of 0.49, based on 170 test epochs out of 1103 in total.
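Because the stages are imbalanced, overall accuracy alone can be misleading. As a complementary check (not part of the original tutorial), metrics such as balanced accuracy and Cohen's kappa, the latter commonly reported for sleep scoring, can be computed from the same y_test and y_pred defined above.
from sklearn.metrics import balanced_accuracy_score, cohen_kappa_score

# both metrics reuse y_test and y_pred from the pipeline above
print("Balanced accuracy:", balanced_accuracy_score(y_test, y_pred))
print("Cohen's kappa:", cohen_kappa_score(y_test, y_pred))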
References:
Stanislas Chambon, Mathieu N. Galtier, Pierrick J. Arnal, Gilles Wainrib, and Alexandre Gramfort. A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 26(4):758–769, 2018. doi:10.1109/TNSRE.2018.2813138.
B. Kemp, A. H. Zwinderman, B. Tuk, H. A. C. Kamphuisen, and J. J. L. Oberyé. Analysis of a sleep-dependent neuronal feedback loop: the slow-wave microcontinuity of the EEG. IEEE Transactions on Biomedical Engineering, 47(9):1185–1194, 2000. doi:10.1109/10.867928.
Ary L. Goldberger, Luis A. N. Amaral, Leon Glass, Jeffrey M. Hausdorff, Plamen Ch. Ivanov, Roger G. Mark, Joseph E. Mietus, George B. Moody, Chung-Kang Peng, and H. Eugene Stanley. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation, 2000. doi:10.1161/01.CIR.101.23.e215.
https://mne.tools/stable/auto_tutorials/clinical/60_sleep.html#id1