Abstract [eng]
People spend about one third of their lives sleeping and dreaming. Despite the impact of dreams on our emotions and memory, dreams are often forgotten. Motor imagery is one of the major components reported in dreams, alongside other sensory, perceptual, and cognitive phenomena. In pursuit of an objective methodology for recording dream content, we focus on the motor imagery component of dreams. Detailed dream reports could be collected by recording brain electromagnetic activity during sleep and decoding it into dream components with a machine learning (ML) model. However, such a model requires training data, while dream content cannot be controlled and is hidden from external observers. Brain activation during dreamed actions has been shown to correspond to the activation elicited by the same actions in the wakeful state, so training data can be collected from awake subjects. Electrocorticography (ECoG) data are scarce and do not generalize across subjects, which is problematic because deep ML models are prone to overfitting on small amounts of data. This work proposes a way to generalize ECoG data so that a model can be trained on data from several subjects simultaneously. An ECoG processing pipeline is used to develop a classifier that discriminates between hand and tongue movements across different patients, with an emphasis on Convolutional Neural Network (CNN) models. We test the hypothesis that a motor imagery classifier can be trained on real motor activity data, since such data are easier to collect. By training models on different data sets, we show that motor activity is discriminated more easily than motor imagery, and that spectral power features are more informative than temporal features. Finally, the model is used to predict motor imagery during Rapid Eye Movement (REM) sleep.
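As a minimal illustrative sketch only (not the thesis implementation), a pipeline of the kind described above could combine spectral band-power features with a small CNN that classifies hand versus tongue movements; the sampling rate, frequency bands, channel count, layer sizes, and the synthetic data below are assumptions made for illustration.

```python
# Illustrative sketch, not the thesis code: log band-power features from
# multichannel ECoG-like signals, fed to a tiny CNN for hand vs. tongue
# classification. All constants below are assumptions, not the thesis setup.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import welch

FS = 1000          # assumed sampling rate (Hz)
N_CH = 32          # assumed number of ECoG channels
BANDS = [(8, 13), (13, 30), (70, 150)]   # mu, beta, high-gamma (typical motor bands)

def band_power_features(trial):
    """trial: (n_channels, n_samples) -> (n_channels, n_bands) log band power."""
    freqs, psd = welch(trial, fs=FS, nperseg=256, axis=-1)
    feats = []
    for lo, hi in BANDS:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(psd[:, mask].mean(axis=-1) + 1e-12))
    return np.stack(feats, axis=-1)           # (n_channels, n_bands)

class MotorCNN(nn.Module):
    """Tiny CNN over the (channels x bands) feature map; 2 classes: hand / tongue."""
    def __init__(self, n_ch=N_CH, n_bands=len(BANDS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(3, n_bands)), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * (n_ch - 2), 2),
        )

    def forward(self, x):                      # x: (batch, 1, n_ch, n_bands)
        return self.net(x)

# Synthetic demo data standing in for labelled wake-state recordings.
rng = np.random.default_rng(0)
trials = rng.standard_normal((64, N_CH, 2 * FS))
labels = rng.integers(0, 2, size=64)           # 0 = hand, 1 = tongue
X = np.stack([band_power_features(t) for t in trials])[:, None]  # (64, 1, n_ch, n_bands)

model = MotorCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
xb = torch.tensor(X, dtype=torch.float32)
yb = torch.tensor(labels, dtype=torch.long)
for _ in range(20):                            # short demo training loop
    opt.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    opt.step()
```

In the setting the abstract describes, the features would be computed from labelled wake-state recordings of executed or imagined movements, and the trained classifier would then be applied to REM-sleep epochs.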