IS593: Machine Learning Application Trends in Information Security
This course is a paper-reading class covering papers published at prestigious security conferences. Each student will present assigned papers and lead discussions. The goals of the course are to understand trends in applying machine learning algorithms to computer security problems and to gain an in-depth understanding of the covered research papers.
Basic Information
- Lecture: Monday/Wednesday 10:30 AM - 11:45 AM
- Instructor: Sooel Son
- Email: sl.son (at) kaist.ac.kr
- Homepage: https://sites.google.com/site/ssonkaist/
- Lecture room: E3-1 3445
Evaluation
- Attendance & Class participation: 15%
- Paper critiques: 15%
- Paper Presentation #1: 10%
- Paper Presentation #2: 10%
- Paper Presentation #3: 10%
- Project proposal & Midpoint evaluation: 10%
- Final project: 30%
Schedule
The following schedule is subject to change.
1st week
- 2/27: [Course Introduction]
- 3/1: [No Class]
2nd week
- 3/6: Xu et al. Neural Network-based Graph Embedding for Cross-Platform Binary Code Similarity Detection. (CCS 2017) [Sooel Son]
- 3/8: Ye et al. Yet Another Text Captcha Solver: A Generative Adversarial Network Based Approach. (CCS 2018) [Dongwon Shin]
3rd week
- 3/6: Tian et al. DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars. (ICSE 2018)
- 3/15: Pei et al. DeepXplore: Automated Whitebox Testing of Deep Learning Systems. (SOSP 2017)
4th week (Evasion Attacks)
- 3/20: Carlini et al. Towards Evaluating the Robustness of Neural Networks. (S&P 2017)
- 3/22: Croce et al. Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-Free Attacks. (ICML 2020)
5th week
- 3/27: Meng et al. MagNet: A Two-Pronged Defense Against Adversarial Examples. (CCS 2017)
- 3/29: Madry et al. Towards Deep Learning Models Resistant to Adversarial Attacks. (ICLR 2018)
6th week (Data Poisoning Attacks)
- 4/3: Suciu et al. When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks. (USENIX Security 2018)
- 4/5: Pang et al. A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models. (CCS 2020)
7th week
- 4/10: Wang et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks. (S&P 2019)
- 4/12: Jia et al. BadEncoder: Backdoor Attacks to Pretrained Encoders in Self-Supervised Learning. (S&P 2022)
8th week
- 4/17: [No Class] Midterm season
- 4/19: [No Class] Midterm season
9th week (Membership Inference)
- 4/24: Shokri et al. Membership Inference Attacks Against Machine Learning Models. (S&P 2017)
- 4/26: Hui et al. Practical Blind Membership Inference Attack via Differential Comparisons. (NDSS 2021)
10th week
- 5/1: Midterm presentation
- 5/3: Midterm presentation
11th week (Model Inversion)
- 5/8: Melis et al. Exploiting Unintended Feature Leakage in Collaborative Learning. (S&P 2019)
- 5/10: Zhang et al. Generative Model-Inversion Attacks Against Deep Neural Networks. (CVPR 2020)
12th week
- 5/15: An et al. MIRROR: Model Inversion for Deep Learning Network with High Fidelity. (NDSS 2022)
- 5/17: Tramèr et al. Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. (CCS 2022)
13th week
- 5/22: Wang et al. With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning. (USENIX Security 2018)
- 5/24: No Class
14th week (Watermarking)
- 5/29: No Class (Buddha’s Birthday)
- 5/31: Prokos et al. Squint Hard Enough: Attacking Perceptual Hashing with Adversarial Machine Learning. (USENIX Security 2023)
15th week
- 6/5: Carlini et al. Extracting Training Data from Large Language Models. (USENIX Security 2021)
- 6/7: Project presentation (1)
16th week
- 6/13~15: Final exam season