diff --git a/Papers.md b/Papers.md
index 88af585..7220dba 100644
--- a/Papers.md
+++ b/Papers.md
@@ -115,6 +115,55 @@ https://pmc.ncbi.nlm.nih.gov/articles/PMC4045570/
 Review of methods used to extract features from EEG data. Algorithmic methods; not much neural-network work is described.
+## Usage of variational autoencoders
+### Key Studies on VAEs and AEs in EEG Feature Extraction
+
+1. **VAEEG: Variational Auto-Encoder for EEG Representation**
+
+   - **Overview**: Introduces a self-supervised VAE model, VAEEG, designed to extract concise and informative representations from EEG data across separate frequency bands.
+
+   - **Applications**: Demonstrated effectiveness in clinical tasks such as pediatric brain development assessment, epileptic seizure detection, and sleep stage classification.
+
+   - **Findings**: VAEEG achieved superior reconstruction performance and enhanced downstream classification tasks, indicating its potential as a robust feature extractor for EEG signals. [sciencedirect.com](https://www.sciencedirect.com/science/article/pii/S1053811924004439)
+
+2. **hvEEGNet: Hierarchical VAE for EEG Data**
+
+   - **Overview**: Proposes two VAE models, vEEGNet-ver3 and hvEEGNet, incorporating EEGNet-based encoders and a dynamic time warping loss function.
+
+   - **Applications**: Tested on the BCI Competition IV Dataset 2a, focusing on high-fidelity EEG reconstruction.
+
+   - **Findings**: hvEEGNet outperformed previous solutions in reconstructing EEG data, suggesting its utility in anomaly detection and as a feature extractor for classification tasks. [arxiv.org](https://arxiv.org/abs/2312.00799)
+
+3. **EEG2Vec: Learning Affective EEG Representations via VAEs**
+
+   - **Overview**: Develops a conditional VAE framework, EEG2Vec, to learn generative-discriminative representations from EEG data.
+
+   - **Applications**: Focused on emotion recognition, achieving robust classification performance and the ability to generate synthetic EEG data resembling real inputs.
+
+   - **Findings**: Demonstrated the model's suitability for unsupervised EEG modeling and its potential for generating artificial training data. [arxiv.org](https://arxiv.org/abs/2207.08002)
+
+4. **CNN-VAE Framework for Motor Imagery Classification**
+
+   - **Overview**: Combines Convolutional Neural Networks (CNNs) with VAEs to classify motor imagery EEG signals.
+
+   - **Applications**: Applied to the BCI Competition IV Dataset 2b, focusing on motor imagery tasks.
+
+   - **Findings**: The CNN-VAE framework outperformed existing methods, indicating the effectiveness of integrating CNNs with VAEs for feature extraction and classification in EEG-based BCIs. [ouci.dntb.gov.ua](https://ouci.dntb.gov.ua/en/works/lDXm8RZl/) [link.springer.com](https://link.springer.com/article/10.1007/s11042-024-19850-0) [pmc.ncbi.nlm.nih.gov](https://pmc.ncbi.nlm.nih.gov/articles/PMC6387242/)
+
+5. **Unsupervised Feature Extraction with Autoencoders for Multiclass Motor Imagery BCI**
+
+   - **Overview**: Utilizes autoencoders for unsupervised feature extraction in multiclass motor imagery EEG classification.
+
+   - **Applications**: Focused on enhancing classification performance in BCI systems.
+
+   - **Findings**: The approach improved classification accuracy, demonstrating the potential of autoencoders for extracting meaningful features from EEG data. [ouci.dntb.gov.ua](https://ouci.dntb.gov.ua/en/works/lDXm8RZl/) [pubmed.ncbi.nlm.nih.gov](https://pubmed.ncbi.nlm.nih.gov/32982703/)
+
+Variations of VAE:
+* CVAE - conditional VAE.
+  Class-specific image generation.
+* Beta-VAE - a tunable parameter controls the tradeoff between reconstruction quality and disentanglement.
+* VQ-VAE - provides a discrete latent space for sharper reconstructions.
+
+TODO:
+- find augmentation methods worth trying
+- find an example architecture for BCI classification
diff --git a/TODOs.md b/TODOs.md
new file mode 100644
index 0000000..fd5a513
--- /dev/null
+++ b/TODOs.md
@@ -0,0 +1,32 @@
+**Create thesis on Overleaf**
+Take a template for a master's thesis and start working on it. Just start writing; it will also be a good place to record what has been explored.
+
+**Set goals**
+The main focus is to create some form of inter-subject feature extraction with some degree of semantic meaning in the embeddings. To achieve this we need to:
+* Find suitable datasets
+* Decide what the input will be
+* Preprocess the datasets
+* Introduce a controlled artificial noise source
+* Develop a system to extract features
+* Train a state-of-the-art classifier on those embeddings
+* Evaluate
+
+**Find suitable dataset**
+The dataset should be of the motor imagery type, though this is not a hard rule.
+The BCI Competition IV-2a dataset could be a good start; it can be found [here](https://www.bbci.de/competition/iv/desc_2a.pdf).
+
+**What will be input**
+Should we just shove batches of raw data into the network, or calculate a spectrogram first? How big should the batches be, how much overlap, should we use a sliding window?
+
+**Preprocessing datasets**
+The dataset can be corrupted by strong signals coming from eye movement, neck muscle activity and so on. Find a way to filter these out. Use a state-of-the-art technique for this; the goal is not to really explore this part, just use what is out there.
+
+**Artificially adding noise**
+Find a way to artificially add noise to the dataset so as to improve BCI classification. Find some methods and try them on known BCI classifier architectures.
+
+**System to extract features**
+This is the main part. Find an NN architecture able to extract features.
+Try to explore multiple architectures. What I think would be good to try: autoencoder, GAN (maybe with vector quantization), transformer, maybe contrastive learning. Test on raw and noised data.
+
+**Evaluate**
+Find methods to evaluate the quality of the extracted embeddings. Maybe train a classifier on them and measure its performance.
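The sliding-window question under **What will be input** can be prototyped in a few lines. A minimal numpy sketch, assuming the raw EEG arrives as a `(channels, samples)` array; the window and step sizes below are arbitrary placeholders, not recommendations:

```python
import numpy as np

def sliding_windows(eeg: np.ndarray, win: int, step: int) -> np.ndarray:
    """Cut a (channels, samples) EEG array into overlapping windows.

    Returns shape (n_windows, channels, win); consecutive windows
    overlap by win - step samples.
    """
    n_channels, n_samples = eeg.shape
    starts = range(0, n_samples - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])

# Toy example: 3 channels, 1000 samples, 250-sample windows, 50% overlap.
eeg = np.random.randn(3, 1000)
batch = sliding_windows(eeg, win=250, step=125)
print(batch.shape)  # (7, 3, 250)
```

With 50% overlap the number of windows roughly doubles compared to non-overlapping segmentation, which doubles as a cheap data augmentation for the later training steps.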
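The Beta-VAE tradeoff mentioned in the VAE-variations list comes down to a single weighted sum. A small numpy sketch of the objective, with the encoder/decoder outputs faked as random arrays since only the loss itself is illustrated here:

```python
import numpy as np

def beta_vae_loss(x, x_hat, mu, log_var, beta):
    """Beta-VAE objective: reconstruction error + beta * KL term.

    The KL term is the closed-form divergence between the diagonal
    Gaussian N(mu, exp(log_var)) produced by the encoder and the
    standard normal prior N(0, I). beta = 1 recovers the plain VAE;
    beta > 1 favors disentanglement over reconstruction quality.
    """
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    kl = -0.5 * np.mean(np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1))
    return recon + beta * kl

# Stand-in tensors: batch of 8 inputs (16-dim) with a 4-dim latent space.
rng = np.random.default_rng(0)
x, x_hat = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
mu, log_var = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
print(beta_vae_loss(x, x_hat, mu, log_var, beta=1.0))
print(beta_vae_loss(x, x_hat, mu, log_var, beta=4.0))  # heavier KL weight
```

Since the KL term is always non-negative, raising `beta` strictly increases the pressure on the latent code, which is exactly the reconstruction/disentanglement tradeoff the notes mention.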