Topic outline

  • Guidelines

    Leading a discussion on a paper consists of two parts, taking 50 minutes altogether.

    1. A PowerPoint-style presentation comprising the following items (20 minutes):
    1.a. An objective presentation of the paper covering, for instance:
    • Problem statement
    • Adversary/threat model
    • Summary of main findings & contributions
    • Results
    1.b. A critical personal synthesis covering, for instance:
    • Analysis of correctness/completeness
    • Potential flaws
    • Relation to related work
    • (Support for the discussion that follows)
    • Etc.

    2. An interactive discussion with the rest of the class (30 minutes)
    • Prepare a set of points to discuss
    • Make it interactive and raise issues where opinions are likely to be divided
    • Develop provocative opinions
    • Ask controversial questions
    • Relate the research to recent events (e.g., news headlines on the use of AI)

    Paper assignment

    Go to this Google form and select the 5 papers that you would like to present, by Monday, March 8, 23:55.

    Presentation assignment:
    Discussion sessions, papers, and presenters:

    1. Model evasion
    • Devil’s Whisper: A General Approach for Physical... (Presenter: Albert Mohwald)
    • On Adaptive Attacks to Adversarial Example Defenses (Presenter: Oliver Jarnefelt)

    2. Model poisoning
    • Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization (Presenter: Seb)
    • Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning (Presenter: Yujia Guo)

    3. Compromised training
    • BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain (Presenter: Paavo Reinikka)
    • Machine Learning Models that Remember Too Much (Presenter: Ananth Mahadevan)

    4. Model stealing
    • High Accuracy and High Fidelity Extraction of Neural Networks (Presenter: Albert Mohwald)
    • Imitation Attacks and Defenses for Black-box Machine Translation Systems (Presenter: Yujia Guo)

    5. Protecting intellectual property of models
    • Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring (Presenter: Buse)
    • DAWN: Dynamic Adversarial Watermarking of Neural Networks (Presenter: Samuel)

    6. Training data leakage
    • The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks (Presenter: Samuel)
    • ML-Leaks: Model and Data Independent Membership Inference Attacks...

    7. Tracing training data
    • Radioactive data: tracing through training (Presenter: Oliver Jarnefelt)
    • Auditing Data Provenance in Text-Generation Models

    8. Privacy-preserving training
    • Learning Differentially Private Recurrent Language Models
    • Auditing Differentially Private Machine Learning: How Private is Private SGD? (Presenter: Paavo Reinikka)

    9. Fairness & bias in ML prediction
    • Characterising Bias in Compressed Models (Presenter: Ananth Mahadevan)
    • On the Privacy Risks of Algorithmic Fairness