IS593: Machine Learning Application Trends in Information Security
This course is a paper-reading seminar covering papers published at prestigious security conferences. Each student will present assigned papers and lead the discussions. The goals of the course are to understand trends in applying machine learning algorithms to computer security problems and to develop an in-depth understanding of the covered research papers.
Basic Information
- Lecture: Friday 9:00 AM - 11:45 AM
- Instructor: Sooel Son
- Email: sl.son (at) kaist.ac.kr
- Homepage: https://sites.google.com/site/ssonkaist/
- Lecture room: E3-1 3445
- T.A.s:
- Dongwon Shin: godeastone (at) kaist.ac.kr
- Kiwon Chung: greenare (at) kaist.ac.kr
Evaluation
- Attendance & class participation: 15%
- Paper critiques: 15%
- Paper presentation #1: 10%
- Paper presentation #2: 10%
- Paper presentation #3: 10%
- Project proposal & midpoint evaluation: 10%
- Final project: 30%
Schedule
The following schedule is subject to change.
1st week
- 2/28: [Course Introduction] [Zoom at 5:00 PM]
2nd week
- 3/8:
- Xu et al. Neural Network-based Graph Embedding for Cross-Platform Binary Code Similarity Detection. (CCS 2017) [Sooel Son]
- Ye et al. Yet Another Text Captcha Solver: A Generative Adversarial Network Based Approach. (CCS 2018) [Dongwon Shin]
3rd week (Evasion Attacks)
- 3/15:
- Carlini and Wagner. Towards Evaluating the Robustness of Neural Networks. (S&P 2017) [Yoonha Bahng]
- Croce and Hein. Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks. (ICML 2020) [Junkyu Kang]
4th week
- 3/22:
- Jin et al. Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment. (AAAI 2020) [Yoonha Bahng]
- Jones et al. Automatically Auditing Large Language Models via Discrete Optimization. (ICML 2023) [Doohyun Kim]
5th week
- 3/29:
- Madry et al. Towards Deep Learning Models Resistant to Adversarial Attacks. (ICLR 2018) [Sooel Son]
- Tramèr. Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them. (ICML 2022) [Junkyu Kang]
6th week (Data Poisoning Attacks)
- 4/5:
- Suciu et al. When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks. (USENIX Security 2018) [Junhak Lee]
- Pang et al. A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models. (CCS 2020) [Sooel Son]
7th week
- 4/12:
- Wang et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks. (S&P 2019) [Doohyun Kim]
- Jia et al. BadEncoder: Backdoor Attacks to Pretrained Encoders in Self-Supervised Learning. (S&P 2022) [Sooel Son]
8th week
- 4/19: [No class] Midterm season
9th week (Membership Inference)
- 4/26:
- Shokri et al. Membership Inference Attacks Against Machine Learning Models. (S&P 2017) [Shushayev Arslan]
- Jagielski et al. Students Parrot Their Teachers: Membership Inference on Model Distillation. (NeurIPS 2023) [Junkyu Kang]
10th week (Membership Inference)
- 5/3:
- Carlini et al. Membership Inference Attacks From First Principles. (S&P 2022) [Dongwon Shin]
- Carlini et al. Extracting Training Data from Large Language Models. (USENIX Security 2021) [Doohyun Kim]
11th week
- 5/10: Midterm presentation
- Tramèr et al. Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. (CCS 2022) [Sooel Son]
12th week (Model Inversion)
- 5/17:
- Zhang et al. The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks. (CVPR 2020) [Junhak Lee]
- An et al. MIRROR: Model Inversion for Deep Learning Network with High Fidelity. (NDSS 2022) [Mahammad Yusifov]
13th week
- 5/24:
- Wang et al. With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning. (USENIX Security 2018) [Junhak Lee]
- Carlini et al. Extracting Training Data from Diffusion Models. (USENIX Security 2023) [Mahammad Yusifov]
- Nasr et al. Scalable Extraction of Training Data from (Production) Language Models. (arXiv) [Yoonha Bahng]
14th week
- 5/31: [No class]
15th week
- 6/7: Final Project Presentation
16th week
- 6/14: Final report due