Added info on augmentation
@@ -35,3 +35,21 @@ Adaptive training - train on all subjects and half the data of one subject. Test on
Down-sampled to 100 Hz. Only testing generalizability, using the adaptive training with and without augmented data.
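A minimal sketch of the down-sampling step, assuming MNE-Python and a placeholder recording file; the original sampling rate and file format are not stated in these notes.

```python
import mne

# Load a raw EEG recording (file path is a placeholder).
raw = mne.io.read_raw_edf("subject01.edf", preload=True)

# Down-sample to 100 Hz as described above.
raw.resample(sfreq=100)
print(raw.info["sfreq"])  # -> 100.0
```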
### [[Augmentation methods#Augmenting The Size of EEG datasets Using Generative Adversarial Networks (2018)]]
* Evaluation using 5-fold cross-validation on the PhysioNet dataset against autoencoders and a VAE, using reconstruction error as the metric.
* Assessing the impact of the RGAN with different classification models: evaluating classification accuracy with a deep feed-forward NN, an SVM, and a random forest (see the sketch below).
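A minimal sketch of the 5-fold cross-validated classifier comparison, assuming the EEG trials have already been turned into a feature matrix `X` with labels `y`; the placeholder data stands in for real PhysioNet features, and the deep feed-forward NN is omitted here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix and binary labels (not real PhysioNet features).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))
y = rng.integers(0, 2, size=200)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Compare mean 5-fold classification accuracy across models.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```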
## [[Augmentation methods#Data augmentation strategies for EEG-based motor imagery decoding (2022)]]
Used datasets:
* https://academic.oup.com/gigascience/article/6/7/gix034/3796323
* https://www.nature.com/articles/sdata2018211
For now I don't know where to get the raw data for those datasets.
Data processing:
* Bandpass filter 1-40 Hz (a preprocessing sketch covering these steps follows this list)
* Baseline correction using the first 200 ms pre-cue: subtract the average of the EEG signal before the cue
* Artifact correction for ocular (EOG) and muscular (EMG) artifacts, with slightly different parameters for each dataset
* Re-referencing to the common average to improve the signal-to-noise ratio: the signal at each channel is re-referenced to the average signal across all electrodes.
* Used [[Papers#Autoreject Automated artifact rejection for MEG and EEG data]]
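A minimal sketch of this preprocessing chain using MNE-Python and the autoreject package; the file path, event extraction, and epoch window are placeholders, and the EOG/EMG correction step and per-dataset parameter differences are not reproduced here.

```python
import mne
from autoreject import AutoReject

# Load a raw recording (file path is a placeholder).
raw = mne.io.read_raw_edf("subject01.edf", preload=True)

# Bandpass filter 1-40 Hz.
raw.filter(l_freq=1.0, h_freq=40.0)

# Re-reference each channel to the average across all electrodes.
raw.set_eeg_reference("average")

# Epoch around the cue; baseline correction uses the 200 ms pre-cue window.
events, _ = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=4.0,
                    baseline=(-0.2, 0.0), preload=True)

# Automated artifact rejection with Autoreject.
ar = AutoReject(random_state=0)
epochs_clean = ar.fit_transform(epochs)
```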
The dataset was split 70:12:18 into train:validation:test.
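A minimal sketch of the 70:12:18 split with scikit-learn, using placeholder arrays in place of the real epochs and labels; the stratification and the class count are my assumptions, not stated in the notes.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder trials and labels (trials x channels x samples).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64, 401))
y = rng.integers(0, 4, size=500)  # assumed four motor-imagery classes

# 70% train first, then split the remaining 30% into 12% validation
# and 18% test (0.18 / 0.30 = 0.6).
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, train_size=0.70, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.60, stratify=y_tmp, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # -> 350 60 90
```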