Please use this identifier to cite or link to this item:
http://theses.ncl.ac.uk/jspui/handle/10443/5934
Title: | Towards automated sleep stage assessment using ubiquitous computing technologies |
Authors: | Zhai, Bing |
Issue Date: | 2023 |
Publisher: | Newcastle University |
Abstract: | The growing popularity of ubiquitous computing devices, such as smartphones, wristbands and smartwatches, has increased the scale at which physiological and psychological data can be collected from their growing user base. The availability of digital health data outside the normal confines of hospitals and other institutions gives researchers a fundamental opportunity to infer individual behaviour and health at scale. One application of ubiquitous computing in digital health is the development of automated systems robust enough to monitor sleep stages non-invasively outside the sleep laboratory. However, turning the data into actionable insights requires computational methods that can infer sleep stages from physiological time-series data related to brain activity. This thesis describes novel deep-learning methods that leverage wearable sensing data for non-invasive sleep stage monitoring in large populations. First, it presents a systematic evaluation of sleep stage classification using traditional machine learning models and neural networks applied to actigraphy and cardiac sensing data; the proposed deep ensemble model outperforms both the traditional algorithms and the deep-learning baselines. However, the performance of an automated sleep stage monitoring algorithm can be affected by personal attributes such as age, BMI and sleep disorders. This work therefore proposes a novel network based on the variational autoencoder that disentangles the feature space into personal attribute-specific features, which are irrelevant to sleep stage classification, and personal attribute-free features, which contain only the sleep stage-relevant information. The proposed network effectively reduces the effect of personal attributes on model performance. Finally, multimodal fusion strategies and methods are systematically investigated.
The proposed fusion methods significantly improve the performance of three-stage sleep classification on a large clinical sleep study dataset. The proposed methods were also evaluated on a small sleep dataset collected from consumer-grade wearables; the empirical results demonstrate that wearable sensors can classify three stages of sleep with 78% accuracy. These methods generate robust predictions and may be used for long-term, free-living sleep stage monitoring. |
Description: | PhD Thesis |
URI: | http://hdl.handle.net/10443/5934 |
Appears in Collections: | School of Computing |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Zhai B 2023.pdf | | 9.6 MB | Adobe PDF | View/Open |
dspacelicence.pdf | | 43.82 kB | Adobe PDF | View/Open |